Controls on fault zone structure and brittle fracturing in the foliated hanging wall of the Alpine Fault

Three datasets are used to quantify fracture density, orientation, and fill in the foliated hanging wall of the Alpine Fault: (1) X-ray computed tomography (CT) images of drill core collected within 25 m of its principal slip zones (PSZs) during the first phase of the Deep Fault Drilling Project that were reoriented with respect to borehole televiewer images, (2) field measurements from creek sections up to 500 m from the PSZs, and (3) CT images of oriented drill core collected during the Amethyst Hydro Project at distances of ∼0.7-2 km from the PSZs. Results show that within 160 m of the PSZs in foliated cataclasites and ultramylonites, gouge-filled fractures exhibit a wide range of orientations. Publisher's version: http://dx.doi.org/10.5194/se-9-469-2018

Introduction

Conceptual models of fault zone structure in the upper crust often invoke a relatively narrow "fault core" that accommodates most displacement, surrounded by a halo of heavily fractured rock termed the "damage zone" (Caine et al., 1996; Chester et al., 1993; Chester and Logan, 1986; Faulkner et al., 2010). These models have been successfully applied in a variety of tectonic settings and for a wide range of fault displacements and exhumation depths (e.g. Choi et al., 2016; Faulkner et al., 2010; Kim et al., 2004; Mitchell and Faulkner, 2009; Savage and Brodsky, 2011). However, the term "damage zone" has been applied by geologists and geophysicists to describe a variety of fault-related features, such as fractures and faults at stepovers and bends (Chester and Chester, 2000; Kim et al., 2004; Mitchell and Faulkner, 2009; Wilson et al., 2003), the volume of inelastic deformation that is induced by dynamic stresses during earthquake rupture propagation (Andrews, 2005; Cowie and Scholz, 1992; Rice et al., 2005; Templeton et al., 2008; Vermilye and Scholz, 1998), and the volume of rock in which earthquake swarms or foreshock and aftershock sequences are localised (Kim and Sanderson, 2008; Savage et al., 2017; Sibson, 1989; Yukutake et al., 2011). Furthermore, though damage zones are typically reported to be < 1 km wide (Faulkner et al., 2011; Savage and Brodsky, 2011), co-seismic ground shaking can modify fracture permeability many hundreds of kilometres away from the fault source (Cox et al., 2015; Muir-Wood and King, 1993; O'Brien et al., 2016). Brittle faults often develop in mylonite sequences or other (e.g. jointed) rocks that contain compositional and mechanical anisotropies (Bistacchi et al., 2012; Chester and Fletcher, 1997; Massironi et al., 2011).
Evidence from field studies (Bistacchi et al., 2010; Peacock and Sanderson, 1992), experiments (Donath, 1961; Misra et al., 2015; Paterson and Wong, 2005), and numerical modelling (Chester and Fletcher, 1997) demonstrates that such anisotropy can significantly affect the orientation and density of brittle fractures. Despite this, "fault core-damage zone" models are based largely on field observations of relatively isotropic host rocks, and there have been comparatively few field studies (a notable exception being Bistacchi et al., 2010) that document the influence of mechanical anisotropy on patterns of brittle fracture damage around large-displacement faults. In this contribution, multiple datasets spanning a range of scales were used to analyse fracture densities, orientations, and mineral fills across the hanging wall of the Alpine Fault's central section. Measurements from within 25 m of the Alpine Fault principal slip zones (PSZs) were made on shallow (depths < 130 m) drill cores and wireline logs obtained during the first phase of the Deep Fault Drilling Project (DFDP-1). These are combined with field studies at distances < 500 m from the PSZs and analyses of drill core recovered at 0.7-2 km from the PSZs during the Amethyst Hydro Project (AHP). Results are then compared to measurements of hydraulic conductivity (Cox et al., 2015; Townend et al., 2017) and geophysical studies (Boese et al., 2012; Eccles et al., 2015) around the Alpine Fault. In doing so, we critically assess the application of "damage zone" models to an active plate-boundary-scale structure. Furthermore, the Alpine Fault rapidly exhumes ductile-to-brittle fault rock sequences from depths of up to 35 km (Little et al., 2005; Norris and Toy, 2014). Fracturing in its hanging wall therefore overprints a 1-2 km wide mylonite sequence containing a pervasive foliation (Norris and Cooper, 1997, 2007; Toy, 2008), and so can provide new insights into the relationships between fracturing and mechanical anisotropy.

Tectonic setting of the Alpine Fault

The Alpine Fault is a crustal-scale (along-strike extent ∼850 km, depth ∼35 km) transpressive discontinuity accommodating ∼70 % of Pacific-Australian plate motion in the South Island of New Zealand (DeMets et al., 1994; Norris and Cooper, 2001; Fig. 1a). This study focuses on the central section between the Toaroha and Martyr rivers (Barth et al., 2013), where the fault currently accommodates dextral strike slip at a rate of 27 ± 5 mm yr−1 and dip slip at a rate of 6-10 mm yr−1 (Little et al., 2005; Norris and Cooper, 2001). In the central section at depths greater than 8-12 km, the Alpine Fault accommodates motion via viscous creep across a > 1 km wide ductile shear zone in which the hanging-wall "Alpine Schist" protolith is progressively mylonitised (Toy et al., 2010). Shear strains increase with proximity to the Alpine Fault and are recorded by protomylonites, mylonites, and ultramylonites, which occur in spatial sequence towards the fault (Fig. 2; Norris and Cooper, 2003; Reed, 1964; Toy et al., 2008). Foliation in the mylonite sequence is mainly defined by alternating quartzofeldspathic and mica-rich layers (Fig. 2). Bottle-green hornblende-rich metabasic mylonites and purple-dark grey mylonites that are comparatively mica rich are also present. Their presence reflects variations in protolith lithology (Cooper and Norris, 2011; Norris and Cooper, 2007; Sibson et al., 1981; Toy, 2008).
As the mylonites in the hanging wall are exhumed to depths of less than 8-12 km, temperatures drop below those at which quartz plasticity occurs and brittle structures start to overprint the mylonitic shear zone (Toy et al., 2010, 2015). This brittle overprint is reflected in the formation of a ∼20 m thick layer of green, indurated, and often foliated cataclasite (Toy et al., 2015), and a 10-50 cm thick clay-rich PSZ that is preserved adjacent to the currently active fault trace (Ikari et al., 2014; Mitchell and Toy, 2014). To the first order (i.e. at scales > 10 km), the trace of the Alpine Fault is remarkably linear, with an average strike of 055°. On the basis of geophysical imaging and measurements of the mylonitic foliation, which is thought to parallel the fault, it is estimated to dip at ∼45° in its central section (Sibson et al., 1981), though this may locally exceed 60° (Little et al., 2005; Toy et al., 2017). At scales of 1-10 km, perturbations in the stress field induced by hanging-wall topography result in segmentation of the Alpine Fault. Segmentation is rooted to depths of 0.5-4 km and comprises kilometre-long, approximately E-W-striking and steeply dipping strike-slip fault strands, which adjoin NE-SW-striking, gently dipping (∼30°) thrust segments (Barth et al., 2012; Langridge et al., 2014; Norris and Cooper, 1995, 2007; Simpson et al., 1994; Upton et al., 2017).

Fracture orientations from DFDP-1 drill core

Hanging-wall fracture orientations immediately adjacent to the Alpine Fault's PSZs were assessed through analysis of datasets arising from the first phase of the Deep Fault Drilling Project (DFDP-1; http://alpine.icdp-online.org, last access: 18 April 2018). DFDP-1 successfully sampled the Alpine Fault in two boreholes (DFDP-1A and DFDP-1B, Fig. 3) at depths of less than 150 m at Gaunt Creek (Fig. 1b; Sutherland et al., 2012). The geophysical properties of the DFDP-1 boreholes were characterised by a full suite of wireline logs (Townend et al., 2013). These were combined with visual descriptions of ∼70 m of core recovered across the two boreholes to construct a lithological classification scheme for DFDP-1 drill core (Fig. 3; Toy et al., 2015). Abundant fractures were observed in X-ray computed tomography (CT) scans of DFDP-1 drill core (Williams et al., 2016). The true orientations of these fractures were obtained by generating "unrolled" CT images of individual core sections (Mills and Williams, 2017), which are directly analogous to geographically referenced, but lower resolution, borehole televiewer (BHTV) images. Where fractures could be matched between the two images, a rotation could be derived to transform all fracture orientations in the CT scans from a local core reference frame to their true geographic orientation (Fig. 4). Depending on the number of fractures matched, the core was rotated with a high, moderate, or low degree of confidence. In DFDP-1A, the quality of BHTV imaging was insufficient to attempt matching fractures, whereas in the Alpine Fault's relatively intact footwall (Townend et al., 2013), too few fractures (less than one per core section) could be imaged to attempt core reorientation. Therefore, the true orientation of fractures was only determined for the depth interval 94-126 m in the DFDP-1B borehole (Fig. 3). Given the orientation of PSZ-1 (which separates hanging-wall and footwall cataclasite) sampled in DFDP-1 (015/43 E; Townend et al., 2013), this interval spans an orthogonal distance of ∼25 m.
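The ∼25 m figure follows from simple trigonometry: a vertical borehole interval Δz crossing a plane dipping at angle δ spans a plane-normal distance of Δz cos δ. A minimal check in Python (the helper name is ours, not from the original study):

```python
import math

def orthogonal_distance(vertical_interval_m, dip_deg):
    """Plane-normal distance spanned by a vertical borehole interval
    that crosses a plane dipping at dip_deg (vertical hole assumed)."""
    return vertical_interval_m * math.cos(math.radians(dip_deg))

# DFDP-1B reoriented interval: 94-126 m depth; PSZ-1 dips at 43 degrees.
print(orthogonal_distance(126 - 94, 43))  # ~23.4 m, i.e. ~25 m
```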
A full methodology is provided in Appendix A, the rotations applied to DFDP-1 core sections are listed in Table S1 in the Supplement, and a complete CT-BHTV image comparison is given in Williams et al. (2017b).

Field observations of fracture orientations and densities

At orthogonal distances of up to 150-250 m from the PSZs, fracture orientations and densities were measured in four creeks (Gaunt Creek, Stony Creek, Hare Mare Creek, and Havelock Creek; Fig. 1b) that cut across the hanging-wall sequence approximately perpendicular to the main fault trace (Fig. S1). Along each creek, fracture orientations and densities were measured at three or four stations. This information was also gathered at approximately 500 m from the Alpine Fault at Bullock Creek (Fig. 1b). Each creek transect cuts across a thrust segment of the Alpine Fault, so the orthogonal distance between the measuring stations and the PSZs was calculated assuming a fault dip of 30° (Norris and Cooper, 1995, 1997). The mylonite lithology at each station was classified using the scheme presented by Toy (2008). The outcrops encountered along these transects are typically subvertical and may be covered by debris except at their bases, where they are frequently cleaned by flood events (Fig. S2 in the Supplement). They are therefore poorly suited to fracture density analysis using circular scanlines (e.g. Mauldon et al., 2001). Instead, the fracture density was calculated from the number and orientation of fractures that intersected a linear transect along the base of each outcrop (Priest, 1993; Schulz and Evans, 2000). This technique tends to under-sample fractures oriented at low angles to the scanline. Therefore, a weighting factor (w) calculated using a modified version of the Terzaghi correction (Massiot et al., 2015; Terzaghi, 1965) was applied to each fracture, and results are shown as "corrected" fracture density (a minimal sketch of this correction is given after the Figure 4 caption below).

Fracture orientations in the Amethyst Hydro Project boreholes

The AHP was developed to divert water from the Amethyst catchment for hydroelectric generation; its four boreholes (BH1-BH4) were drilled in the vicinity of the Amethyst Tunnel (Fig. S3). The drill cores are therefore at orthogonal distances of ∼0.7-2.0 km from the PSZs. To provide a dataset analogous to the DFDP-1 CT scans, a total of 31.9 m of drill core from the AHP boreholes was CT scanned at the Southern Cross Hospital in Wellington, New Zealand. Initial descriptions of the drill core found that the rock quality designation (RQD, the percentage of intact core lengths > 100 mm per 1 m of drill core) varied considerably due to intense fracturing adjacent to the Tarpot Fault and other minor faults that intersect the AHP boreholes (McCahon, 2006; Savage, 2013). However, for practical reasons, scanning was focused on intervals of high RQD (Fig. S3). The CT scanner was operated at 100 mA and an X-ray tube voltage of 120 kVp. Slice spacing was 1.25 mm, the field of view was 250 mm, and the image size was 512 × 512 pixels. The size of one voxel is therefore 0.488 × 0.488 × 1.25 mm in the x, y, and z directions, respectively. Reconstruction of two-dimensional CT slices into three-dimensional images of the drill core was performed using OsiriX Imaging Software (http://www.osirix-viewer.com/, last access: 18 April 2018).

Figure 4. Examples of matching structures between BHTV images and unrolled CT images. In each image, the first two columns are the BHTV amplitude image, without and with interpretations, respectively, whilst the third and fourth columns depict the unrolled CT image over the same interval, also without and with interpretations. Fractures traced in red indicate those that were matched to reoriented core.
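The scanline correction referenced above can be sketched as follows. The exact "modified" weighting of Massiot et al. (2015) is not reproduced here; this sketch assumes the standard Terzaghi form, weighting each fracture by 1/|cos β| (β being the angle between the scanline and the pole to the fracture), with a hypothetical cap `max_weight` to keep weights bounded for fractures sub-parallel to the line:

```python
import numpy as np

def terzaghi_weights(pole_line_angles_deg, max_weight=10.0):
    """Weight each fracture by 1/|cos(beta)|, where beta is the angle
    between the scanline and the pole to the fracture plane; fractures
    at low angles to the line are under-sampled and get higher weights."""
    beta = np.radians(np.asarray(pole_line_angles_deg, dtype=float))
    return np.minimum(1.0 / np.abs(np.cos(beta)), max_weight)

def corrected_density(pole_line_angles_deg, scanline_length_m):
    """Orientation-corrected fracture density (fractures per metre)."""
    return terzaghi_weights(pole_line_angles_deg).sum() / scanline_length_m

# e.g. five fractures intersecting a 2 m scanline
print(corrected_density([10, 35, 60, 75, 80], 2.0))
```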
AHP drill core was not oriented. However, the orientation of the schistosity is noted in the drill-core logs to an accuracy of ±5° (McCahon, 2006), where it is consistent with the schistosity orientation measured in the Amethyst Tunnel itself (Savage, 2013). It can thus be used as a reference to reorient drill-core CT scans back into a geographic reference frame. For the BH2 and BH3 drill cores, which are vertical, this required only a single transformation. For the inclined BH1 and BH4 drill cores, this required first rotating the core with respect to the foliation. These orientations were then corrected for the inclination of the drill core using the Planes from Oriented and Navigated Drillcore (POND) Excel spreadsheet (Stanley and Hooper, 2003).

Statistical analysis of fracture orientations

The clustering intensity of fracture orientations was quantified using the resultant vector method of Priest (1993), where the vector for each fracture was weighted by the Terzaghi correction for misorientation bias (Massiot et al., 2015; Terzaghi, 1965); a sketch of this measure is given after the Figure 5 caption below. This analysis was performed only for the DFDP-1 and AHP datasets, which sampled large populations (> 100) of fractures. Field-measuring stations sampled too few (< 30) fractures to reliably perform this analysis, and so their clustering is described in a qualitative sense only.

Fracture orientations in DFDP-1 drill core within 25 m of the Alpine Fault

In the DFDP-1 CT images, a total of 637 fractures were rotated into their true geographic orientations; they show a weak cluster about the orientation of the foliation and Alpine Fault PSZs at Gaunt Creek (015/43 E; Fig. 5a, Appendix B; Townend et al., 2013). Features in DFDP-1B BHTV images are also aligned about this orientation but with a higher cluster intensity than fractures noted in the CT images (Fig. 5b, Table 1). This may reflect (1) the fact that features observed at the resolution of the BHTV are more likely to be aligned subparallel to the fault plane and foliation than those visible in CT, and/or (2) the possibility that some of the planar features identified from the BHTV images were the mylonitic foliation itself. The clustering of fractures hosted in foliated ultramylonites and cataclasites (units 1, 2, and 4 of Toy et al., 2015) is the same as that of fractures hosted in relatively homogeneous unfoliated cataclasites (unit 3 of Toy et al., 2015; Fig. 5c and d, Table 1). We also observed no clear relationships between fracture fill (Table 1 of Williams et al., 2016) and fracture orientation (Fig. 5a).

Fracture orientations, densities, and fill in field transects within 500 m of the Alpine Fault

The orientations and densities of fractures observed in the field transects are summarised in Fig. 7. In these transects, fractures similar to those observed in the CT scans of DFDP-1 core are identified (Fig. 8).

Figure 5. Lower hemisphere equal area stereoplots depicting orientations of fractures in DFDP-1. Contouring on stereoplots was applied to poles that are weighted depending on their orientation correction w (see Sect. 3.2) and that are rounded to the nearest whole number. Contours were then generated for the weighted poles using a probability distribution calculated by a kernel function in the RFOC package for R (Lees, 2014). The great circle represents the orientation of the Alpine Fault plane and foliation at the DFDP-1 site (Townend et al., 2013). (a) Orientation of all fractures that were reoriented by matching structures between unrolled CT images and BHTV images, sorted by fracture type (Williams et al., 2016). (b) Orientation of features recognised in the BHTV images over the interval of reoriented core (94-126 m in DFDP-1B). Fracture orientations extracted from reoriented DFDP-1 CT images in (c) foliated units and (d) unfoliated units, using the DFDP-1 lithological classification scheme (Toy et al., 2015).
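The resultant-vector measure of clustering can be sketched as below. Poles to planes are axial data, so this illustration aligns all pole vectors into a common hemisphere before summing; the exact normalisation used in the study is not specified here, and the function names are hypothetical:

```python
import numpy as np

def pole_vector(dip_dir_deg, dip_deg):
    """Unit vector of the pole (normal) to a plane, given dip direction
    and dip, in an east-north-up frame."""
    dd, d = np.radians(dip_dir_deg), np.radians(dip_deg)
    return np.array([np.sin(d) * np.sin(dd + np.pi),
                     np.sin(d) * np.cos(dd + np.pi),
                     np.cos(d)])

def clustering_intensity(planes_deg, weights):
    """Weighted normalised resultant length R in [0, 1]: ~1 means poles
    are tightly clustered, ~0 means dispersed (after Priest, 1993)."""
    poles = np.array([pole_vector(dd, d) for dd, d in planes_deg])
    # axial data: flip poles into the same hemisphere as the first pole
    signs = np.where(poles @ poles[0] < 0, -1.0, 1.0)
    w = np.asarray(weights, dtype=float)
    resultant = (poles * (signs * w)[:, None]).sum(axis=0)
    return np.linalg.norm(resultant) / w.sum()
```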
Total fracture density is summarised in Table S2; gouge-filled fractures are concentrated within 100-160 m of the PSZs (Fig. 7a). The thicker gouge-filled fractures (> 1 cm) commonly juxtapose different lithologies or offset markers (Fig. 8d-f). Thinner gouge-filled fractures (< 1 cm) are localised to within 160 m of the Alpine Fault. Open fractures (Fig. 8g-i) are present at all stations, though they are most prevalent at those furthest from the fault (Fig. 7b). The composition of the mylonites can also affect fracture density: where they are juxtaposed, micaceous mylonites and ultramylonites contain relatively high densities of gouge-filled fractures compared to quartzofeldspathic mylonites and ultramylonites (Fig. 9). Localities that show the widest range in fracture orientations tend to be less than 160 m from the PSZs, within the ultramylonites (Fig. 7b). Within the mylonite units, fracture orientations tend to be more aligned with the foliation (Fig. 7b), although gouge-filled fractures can sometimes cut across it (e.g. at Bullock Creek).

The AHP sampled grey, well-foliated Alpine Schist (Fig. 10), a subgroup of the Haast Schist (textural zones III-IV; Turnbull et al., 2001; Cox and Barrell, 2007). Fracture orientations are clustered about the orientation of the host rock schistosity, in agreement with the findings of the initial drill-core descriptions and with observations within the Amethyst Tunnel itself (Fig. 11; McCahon, 2006; Savage, 2013). The clustering of these fracture orientations is stronger than in the DFDP-1 datasets (Table 1). Fractures that cut across the schistosity are most frequent in BH4 (Figs. 10d and 11). Though fractures are predominantly open, it is conceivable that any original fill may have been lost during subsequent core handling. This means that standard schemes to differentiate between natural and induced fractures (Kulander et al., 1990; Williams et al., 2016) cannot be applied to this dataset. Nevertheless, some open fractures must be natural, as they show alteration haloes (Fig. 10a) implying that they were once conduits for fluid flow. Furthermore, packer tests conducted in these boreholes indicate hydraulic conductivities of ∼10−6-10−5 m s−1, equivalent to permeabilities of 10−13-10−12 m2 (Cox et al., 2015; McCahon, 2006). No permeability measurements have been made in the schist protolith at greater distances from the Alpine Fault; however, these values are orders of magnitude higher than those reported in other metamorphic rock terranes (∼10−20-10−17 m2; Manning and Ingebritsen, 1999) and for typical continental crust (∼10−17 m2; Townend and Zoback, 2000).
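The quoted conductivity-permeability equivalence follows from k = Kμ/(ρg); for water at near-surface conditions this gives k ≈ 10−7 K. A quick check (standard water properties assumed):

```python
# Convert hydraulic conductivity K [m/s] to permeability k [m^2]: k = K * mu / (rho * g)
rho, g, mu = 1000.0, 9.81, 1.0e-3  # water density, gravity, dynamic viscosity (~20 C)

def permeability(K_m_per_s):
    return K_m_per_s * mu / (rho * g)

for K in (1e-6, 1e-5):
    print(f"K = {K:.0e} m/s -> k = {permeability(K):.1e} m^2")
# prints ~1.0e-13 and ~1.0e-12 m^2, matching the range quoted in the text
```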
Two styles of fracturing are evident in the foliated Alpine Fault cataclasite, mylonite, and schist sequence (Fig. 12). Within DFDP-1 drill core, fractures are predominantly gouge filled and exhibit a range of orientations (Figs. 5 and 6), with only a small proportion (11 %) of fractures in foliated cataclasites and ultramylonites clearly foliation parallel (Williams et al., 2016). However, in schists sampled by the AHP drill core, the fractures are more clustered about the foliation than in DFDP-1 drill core (Fig. 11, Table 1). The difference in fracture clustering between the DFDP-1 and AHP drill cores is qualitatively replicated by the field transects, where fractures show variable orientations immediately adjacent to the Alpine Fault but are typically foliation parallel at the sites furthest from the fault (Fig. 7). Furthermore, the field transects show that the variably oriented fractures have a gouge fill, whilst foliation-parallel fractures further from the fault tend to be open (Figs. 7 and 8). Experimental studies on foliated rocks demonstrate that mechanical anisotropy exerts the greatest control on rock failure when (1) the angle between the maximum principal stress (σ1) and the anisotropy (α) is ∼30°, (2) confining pressure is low (< 35 MPa), and (3) the mechanical "strength" of the anisotropy is high (Donath, 1961; Misra et al., 2015; Nasseri et al., 2003; Paterson and Wong, 2005). The first factor can be approximated for the Alpine Fault given the mylonites' average orientation of 055/45 SE and the stress tensor orientation within the surrounding crust, determined from focal mechanisms of microseismicity in the fault's hanging wall (Boese et al., 2012). This yields a value of α of approximately 44° when measured in the plane containing the maximum and minimum principal stresses. This can be considered an intermediate value of α, since in deformation experiments fractures may form parallel or non-parallel to the foliation depending on the combination of confining pressure and lithology (Donath, 1961; Nasseri et al., 2003; Paterson and Wong, 2005). Foliation-parallel fractures are least common in the ultramylonites and foliated cataclasites. Indeed, in the DFDP-1 datasets, there is no difference in fracture clustering between foliated and unfoliated units (Table 1). Lithology may control mechanical anisotropy depending on mineralogy, porosity, grain size, and the nature of the foliation surfaces (Donath, 1961; Nasseri et al., 2003). It is notable that phase mixing and grain size reduction in the ultramylonites reduce the intensity of the foliation compared to the relatively coarse-grained schists, protomylonites, and mylonites (Fig. 2; Toy et al., 2008, 2010). These data suggest that this lithological change could have a marked effect on the orientation of fractures. Compositional variations between relatively quartzofeldspathic and relatively micaceous mylonites can also influence the density of fractures (Fig. 9). These observations highlight that fracturing in the upper crust may be influenced by lithological variations developed within an underlying, linked, and synkinematic shear zone. However, at other localities (e.g. Stony Creek, Fig. 7), variations in dominant fracture characteristics occur within units of similar composition and texture. This suggests that variations in confining pressure may also be important in controlling the relationship between fractures and foliation, as discussed in the next section.

Fracture damage around the Alpine Fault

Field transects across the Alpine Fault's hanging wall show that fracture density remains roughly constant (> 3.5 fractures m−1, corrected for orientation bias) for at least 500 m from the fault (Fig. 7a).
Furthermore, the AHP (Cox et al., 2015) and DFDP-2B boreholes (Sutherland et al., 2017; Townend et al., 2017) demonstrate an interval of enhanced permeability (10−16-10−13 m2) that extends for at least 2 km into the Alpine Fault's hanging wall. Permeability in this rock mass is controlled by open fractures (Cox et al., 2015; Sutherland et al., 2017; Townend et al., 2017) that are generally foliation parallel (Massiot, 2017), and so directly analogous to the fractures sampled in the field (Fig. 8g-i) and in AHP drill core (Fig. 10). Conventional definitions of fault structure that use fracture density and permeability as criteria for damage zone width (e.g. Berg and Skar, 2005; Caine et al., 1996; Faulkner et al., 2010; Savage and Brodsky, 2011; Schulz and Evans, 2000) would therefore suggest that the Alpine Fault's damage zone extends for at least 500 m, and possibly 2 km, into its hanging wall.

Figure 10. CT images of AHP drill core. (1) identifies a fracture cutting across foliation, (2) a foliation-parallel fracture with an alteration halo, and (3) foliation defined by quartzofeldspathic bands that have low CT numbers. (c, d) Core-axis-parallel CT image slices of AHP drill core. In panel (c), white arrows point to a "crush zone" subparallel to foliation; panel (d) shows more variable fracture orientations identified in BH4 (section 70-4, 196.62-196.80 m).

Figure 11. Equal area lower hemisphere projections of fracture orientations recognised in CT scans of AHP drill core, separated by borehole. Contours are plotted with weighted poles (see Fig. 5).

Nevertheless, within the field transects we also note a distinct interval adjacent to the Alpine Fault's PSZs that contains a relatively high density of gouge-filled fractures (> 1 fracture m−1, Fig. 7a). The width of this interval is < 147 m (i.e. station 4) from the PSZs at Gaunt Creek, < 103 m at Stony Creek (i.e. station 3), < 151 m at Hare Mare Creek (at station 2, Fig. 8c), and < 160 m at Havelock Creek (i.e. station 4). These width estimates are based on the assumption that the Alpine Fault dips at 30° below the measuring stations (see the methods section). However, the fault dip may vary locally (for example, the fault dip sampled by DFDP-1 was 43°; Townend et al., 2013), and there is also uncertainty in the depth extent of its near-surface segmentation (Barth et al., 2012; Norris and Cooper, 1995; Upton et al., 2017). Nevertheless, even if the fault dipped at 45° beneath the measuring stations, the zone of higher-density gouge-filled fractures would be < 205 m wide (Table S3) and so is still appreciably closer to the Alpine Fault than the intervals sampled by the AHP and DFDP-2 boreholes. It is this ∼100-160 m wide interval with a high density of gouge-filled fractures that Norris and Cooper (1997, 2007) interpreted as the extent of the Alpine Fault's central section hanging-wall damage zone. Furthermore, the width of this zone is comparable to damage zone widths estimated elsewhere on the Alpine Fault (e.g. Barth et al., 2013, along the southern section; Wright, 1998, at the northern end of the central section, Fig. 13a) and to other crustal-scale fault zones that have accommodated hundreds of kilometres of displacement (Fig. 13b; Faulkner et al., 2011; Savage and Brodsky, 2011). Interpretations of damage zone width within the Alpine Fault's hanging wall may therefore differ by an order of magnitude depending on which criteria are used.
To reconcile this, Townend et al. (2017) suggested that the ∼2 km wide interval of enhanced permeability and foliation-parallel fracturing can be considered an "outer damage zone" (Fig. 12). Fractures within this zone may have formed by co-seismic shaking and slip on critically stressed fractures (Cox et al., 2015; Townend et al., 2017), or by the release of confining pressure (Engelder, 1985; Price, 1959; Zangerl et al., 2006) during rapid exhumation (6-9 mm yr−1) of the hanging wall (Little et al., 2005; Tippett and Kamp, 1995). Rare gouge-filled fractures (< 1 fracture m−1) in this interval (e.g. Fig. 8e) may also be the structures accommodating the diffuse, low to moderate magnitude (MW < 6) seismicity that has been recorded in a ∼5 km wide zone within the Alpine Fault's hanging wall (Boese et al., 2012; Eberhart-Phillips, 1995). Conversely, the < 160 m wide zone with a relatively high density of gouge-filled fractures defines a narrower "inner damage zone" (Fig. 12; Townend et al., 2017). Microstructural and compositional analyses of these fractures indicate that they formed in response to wear and shearing of the wall rock and were subsequently mineralised by circulating hydrothermal fluids (Warr and Cox, 2001; Williams et al., 2017a). Offset markers across gouge-filled fractures (particularly those < 1 cm thick) are rarely observed in DFDP-1 core and field transects, but where they are present, reverse offset is most frequently noted (Fig. 8d; Norris and Cooper, 1997; Toy et al., 2015). "Gouge-filled shears" that accommodated strike slip (Norris and Cooper, 1997), normal dip slip, or a combination of both (Barth et al., 2012) have also been observed. Cooper and Norris (1994) interpreted that dip-slip fractures facilitated imbrication, tectonic thickening, and rotation of Alpine Fault thrust sheets as they moved across the irregular topography of the footwall gravels. Dextral shears are interpreted to reflect the partitioning of strike-slip movement away from shallowly dipping PSZs (Barth et al., 2012). The diverse range of fracture orientations and shear senses in gouge-filled fractures therefore indicates complex internal deformation of Alpine Fault thrust sheets at shallow depths (< 500 m) (Barth et al., 2013), as they facilitate transpressional motion under the influence of kilometre-scale along-strike variations in stress induced by the topography (Norris and Cooper, 1995; Upton et al., 2017). Fractures may also have formed due to dynamic off-fault stresses (Ma, 2009; Rice et al., 2005) during MW > 7.5 Alpine Fault earthquake ruptures (Sutherland et al., 2007).

Figure 13. (a) Damage zone width estimates elsewhere on the Alpine Fault, including at Kaka Creek (Wright, 1998). (b) Log-log plot of fault zone thickness as a function of fault displacement, previously presented in Savage and Brodsky (2011), combined with estimates made for the Alpine Fault assuming footwall damage is no more extensive than in the hanging wall. Displacement for the Alpine Fault is 480 km (Wellman, 1953). However, convergence along the Alpine Fault's central section requires that it erodes its own fault rocks, so these points are plotted to reflect only the brittle displacement the rocks themselves have accommodated as they are exhumed through the seismogenic zone (22 km, Barth et al., 2012). Error bars reflect uncertainty in constraining fault zone width (for example, footwall damage is largely unknown), not necessarily variability in fault zone thickness.
The relatively thin seismogenic crust in the Alpine Fault's hanging wall (10 ± 2 km; Boese et al., 2012) will limit the generation of dynamic co-seismic damage to within ∼100-200 m of the fault (Ampuero and Mao, 2017). To the first order, this is comparable to the width of the inner damage zone reported here.

Comparison to geophysical data

A 60-200 m wide low-velocity zone (LVZ) that extends to depths of ∼8 km has been documented around the Alpine Fault from the detection and character of fault zone guided waves (FZGWs; Eccles et al., 2015). FZGWs are commonly regarded as an in situ indicator of fault damage zone width (Ben-Zion and Sammis, 2003; Eberhart-Phillips et al., 1995; Ellsworth and Malin, 2011; Li et al., 2014). Given the comparable widths of the Alpine Fault LVZ (60-200 m) and the inner damage zone described here (100-160 m), we speculate that the inner damage zone may trap FZGWs in the Alpine Fault hanging wall. If this is true, it implies that the inner damage zone extends to depths of ∼8 km, consistent with the relatively high-temperature (< 400 °C) mineralising phases (calcite and chlorite) present in the gouge-filled fractures (Williams et al., 2017a). Though the boundary between the mylonites and ultramylonites is also ∼100 m from the Alpine Fault (Toy et al., 2008), these two units have roughly similar seismic velocities (Adam et al., 2016; Allen et al., 2017; Christensen and Okaya, 2007) and so are unlikely to channel FZGWs. We also note that since FZGWs are an indicator of total fault zone width, our interpretation implies that most of the Alpine Fault's LVZ is located in its hanging wall. Western Province basement rocks to the west of the Alpine Fault are rarely exposed (Lund Snee et al., 2014; Norris and Cooper, 2007), and so it remains unknown whether its footwall damage zone is indeed relatively narrow. That the FZGWs are not being channelled by the margins of the ∼2 km wide outer damage zone leads us to conclude that this is a near-surface feature only (i.e. fractures are not kept open at depth by pressurised fluids). If correct, this model of the Alpine Fault's hanging-wall structure conforms to the expectations of fault zone flower structure models, which predict a narrow inner damage zone that extends through the seismogenic crust, surrounded by a wider zone of fractures at shallow depths (< ∼3 km) and low confining pressures (Fig. 12; e.g. Finzi et al., 2009; Sylvester, 1988).

Conclusions

Fracture orientations and densities in the foliated hanging wall of the Alpine Fault's central section were quantified in drill core from DFDP-1, field transects in four creek sections, and drill core recovered from the Amethyst Hydro Project. At distances greater than approximately 160 m from the Alpine Fault PSZs, open and foliation-parallel fractures dominate. These are interpreted to form at low confining pressures in mechanically anisotropic schists and mylonites. At distances less than ∼160 m from the PSZs, gouge-filled fractures with a wide range of orientations predominate. Fracture density and orientation are locally influenced by changes in host rock lithology, but overall fracture density is approximately constant at distances of up to ∼500 m from the PSZs (Fig. 12). Following Townend et al. (2017), we interpret the ∼2 km wide zone of open foliation-parallel fractures within the hanging wall as an outer damage zone that forms at low confining pressures and relatively shallow depths.
Conversely, the 160 m wide zone of gouge-filled fractures represents an inner damage zone. The width of this zone is similar to estimates of the LVZ around the Alpine Fault made using fault zone guided waves. We therefore interpret the inner damage zone as the geological manifestation of the LVZ, which, if true, implies that the inner damage zone also extends to depths of ∼8 km. Overall, our interpretations are compatible with a flower structure model for damage in the Alpine Fault's hanging wall, with a relatively narrow zone of damage extending towards the base of the seismogenic crust, which broadens upwards towards the surface.

Data availability. In the Supplement, we include detailed field maps and cross sections (Fig. S1), photos of outcrops used for quantifying fracture density (Fig. S2), a cross section through the Amethyst Tunnel and the locations of boreholes (Fig. S3), and an example of AHP CT scans (Fig. S4). The following tables are also provided: a list of rotations applied to DFDP-1B core (Table S1), a summary of field transects including coordinates of the field-measuring stations (Table S2), and estimates of the distance of field-measuring stations from the Alpine Fault for different fault dips (Table S3). Lithological distribution and Alpine Fault location are as per the University of Otago fault zone mapping program, which is available at http://www.otago.ac.nz/geology/research/structural-geology/alpine-fault/af-maps.html (last access: 18 April 2018). DFDP-1 and AHP CT scan "core logs" and the CT-BHTV image comparison are available on the GFZ data service (https://doi.org/10.5880/ICDP.5052.004, last access: 18 April 2018).

Appendix A: DFDP-1B core rotation methodology

The technique employed here to reorient DFDP-1 core is similar to that described in Jarrard et al. (2001), Paulsen et al. (2002), and Shigematsu et al. (2014); however, instead of comparing DFDP-1 BHTV data to DMT CoreScan® unrolled core scans, we compare BHTV images to "unrolled" CT core images. The acquisition and interpretation of the DFDP-1 BHTV logs have been previously described by Townend et al. (2013) and McNamara (2015). DFDP-1 CT scans consist of a stack of core-axis-perpendicular image slices with a pixel size of 0.244 mm and a spacing of 1 mm. The CT stack for each core section was loaded into Fiji (http://fiji.sc/Fiji, last access: 18 April 2018), and a circle was manually defined around the irregular boundary of the drill core in a core-axis-perpendicular image slice using the code available from Mills and Williams (2017). This circle was then used to define the path of the image in all other slices. Generation of the unrolled images accounts for the fact that the spacing between individual CT slices (1 mm, i.e. the core-axis-parallel pixel size) is greater than the pixel size within the slices (0.244 mm). Drill core outer surface images and BHTV images are reflections of each other; therefore, the drill core images were reflected about the borehole axis so that the two images are directly comparable. This technique has benefits over methods using the DMT CoreScan® system, since drill core does not have to be physically rotated and so fragile core sections can be used without risk of damage. Unrolled CT images were imported into the composite log viewing software WellCAD® (https://www.alt.lu/software.htm, last access: 18 April 2018) along with the BHTV images, where they were placed side by side to allow matching of structures (Fig. 4; see also Williams et al., 2017b).
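The unrolling step amounts to sampling CT intensities around the core circumference in every slice, stretching the axial direction so pixels are approximately square, and mirroring the result to match BHTV geometry. A simplified sketch follows (hypothetical function; the published workflow of Mills and Williams, 2017, handles the irregular core boundary more carefully):

```python
import numpy as np

def unroll_core(ct_volume, cx, cy, radius,
                pixel_mm=0.244, slice_mm=1.0, n_theta=720):
    """Build an 'unrolled' core image (rows: depth, columns: azimuth).
    ct_volume: (n_slices, ny, nx) array of CT numbers; (cx, cy) and
    radius define the circle traced on the core's outer surface."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    xs = np.clip(np.round(cx + radius * np.cos(theta)).astype(int),
                 0, ct_volume.shape[2] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(theta)).astype(int),
                 0, ct_volume.shape[1] - 1)
    unrolled = ct_volume[:, ys, xs]                   # (n_slices, n_theta)
    # stretch rows so the 1 mm slice spacing matches the 0.244 mm pixels
    stretch = max(1, int(round(slice_mm / pixel_mm)))
    unrolled = np.repeat(unrolled, stretch, axis=0)
    # mirror about the core axis so the image is comparable to BHTV
    return unrolled[:, ::-1]
```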
When correlating the two datasets, it was first necessary to account for possible depth shifts between recorded drill-core depths and BHTV imagery due to factors such as stretching of the logging cable and intervals from which no drill core was recovered (Haggas et al., 2001; Jarrard et al., 2001). In this study, a depth shift of no more than ±30 cm was allowed. The orientations of fractures in the DFDP-1 CT images had previously been measured within a local core reference frame (see Fig. 4 in Williams et al., 2016). Since the DFDP-1 boreholes were vertical, reorienting the drill core back into a geographic reference frame required only a single rotation about the core axis to correct the dip direction. When correlating structures, errors may be introduced by (1) the internal BHTV magnetometer (±2°), (2) the manual picking of sinusoidal curves on BHTV and unrolled CT images, which can be ±10° for shallowly dipping (< 30°) structures (Jarrard et al., 2001), and (3) the fact that the DFDP-1B BHTV data imaged the open borehole, which has a larger diameter (127 mm) than the drill core (85 mm). To mitigate the cumulative effect of these errors, Jarrard et al. (2001) stitched together unrolled images of many different core sections spanning intervals of 5-30 m prior to matching with BHTV imagery. This meant that only a single rotation was necessary for all core sections across the entire stitched interval, which could be based on identifying ∼20-30 matching structures between the BHTV and unrolled core images. In DFDP-1, it was not possible to stitch unrolled CT images of core sections together, as no prominent reference markers across different sections were identified. Consequently, each < 1 m long core section had to be reoriented individually, and within a single section we never identified more than three matching structures. Therefore, compared to the methodology described by Jarrard et al. (2001), the degree of confidence in the applied reorientation was strongly dependent on the quality of individual matches for each core section and the range of rotations that they indicated. We recorded this qualitatively for each core section using the scheme outlined below.

- High degree of confidence: images matched with one very prominent structure (e.g. Fig. 4d) or with two or more structures whose suggested rotations are within 10° of each other (Fig. 4b and c).

- Moderate degree of confidence: images matched with one prominent feature, two features whose suggested rotations differ by 10-19° (e.g. Fig. 4a), or three features whose suggested rotations are within 20-30° of each other.

- Low degree of confidence: images matched with one feature, or two features whose suggested rotations are within 20-30° of each other.

In this scheme, a core reorientation is deemed unreliable if the range of rotations suggested by different structures is ≥ 30°, i.e. equivalent to the cumulative effect of the possible errors listed above. For core sections in which more than one matching structure was identified, the applied rotation was derived from the average of those required for each match. If one of the matched structures was more prominent, then the applied rotation was biased towards that structure.

Appendix B: DFDP-1B core rotation validity

Based on the criteria presented in Appendix A, of the 40 core sections from DFDP-1B in which the quality of the unrolled CT and BHTV images was suitable to attempt reorientation (Fig. 3), 31 were reoriented (Table S1).
Prior to reorientation, fractures in these sections exhibit no clustering (Fig. B1a); however, a weak cluster does develop after reorientation (Fig. 5a). Since fractures in nature typically exhibit non-random orientations, this is evidence that the reorientation of the CT scans was successful (Kulander et al., 1990; Paulsen et al., 2002). In addition, fractures within some individual core sections (Fig. B1b), and fractures rotated with a high degree of confidence (Fig. B1c), contain a wide range of orientations.

Figure B1. Stereoplots to test the confidence in the reorientations applied to rotate DFDP-1 CT scan fracture orientations into geographic coordinates. The red great circle and diamond in each plot represent the plane and pole of the Alpine Fault orientation measured in DFDP-1B. Plotted with Kamb contours at intervals of 2 standard deviations. (a) Orientation of the fractures shown in Fig. 5a before rotation, (b) orientation of reoriented fractures within a single core section (DFDP-1B 56-2), and (c) orientation of fractures in CT images from core sections that were oriented with a high degree of confidence against BHTV images.

The recognition of fractures in unrolled CT images that are not observed in BHTV images can be readily explained by the higher resolution of the CT images. However, some structures are observed in the BHTV logs but not interpreted as fractures in the CT images (Fig. 4). These may represent noise in the BHTV images or, in the case of foliation-parallel structures, the ultramylonitic foliation itself, since it can be difficult to differentiate between the two. The subordinate north-dipping set of fractures in the BHTV images (Fig. 5b) is not recognised in the orientations gathered from the CT images (Fig. 5a). A similar north-dipping fracture set was also recognised in DFDP-2B BHTV images (Massiot, 2017), and its cause and relevance are the focus of ongoing work.
Do RNN States Encode Abstract Phonological Alternations?

Sequence-to-sequence models have delivered impressive results in word formation tasks such as morphological inflection, often learning to model subtle morphophonological details with limited training data. Despite this performance, the opacity of neural models makes it difficult to determine whether complex generalizations are learned or whether each morphophonological process is separately rote-memorized. To investigate whether complex alternations are simply memorized, or whether there is some level of generalization across related sound changes in a sequence-to-sequence model, we perform several experiments on Finnish consonant gradation, a complex set of sound changes triggered in some words by certain suffixes. We find that our models often, though not always, encode 17 different consonant gradation processes in a handful of dimensions in the RNN. We also show that by scaling the activations in these dimensions we can control whether consonant gradation occurs and the direction of the gradation.

Introduction

Recent work on computational morphology demonstrates that neural networks can very effectively learn to inflect words, given adequate amounts of training data (Cotterell et al., 2016, 2017). However, in computational morphology and in NLP at large, the interpretability of neural models remains a serious concern (Doshi-Velez and Kim, 2017): it is unclear how networks trained to inflect words actually accomplish their task. It is also unclear to what extent networks are able to learn linguistic generalizations from their input data instead of simply memorizing training examples and exhibiting a kind of nearest-neighbor behavior. In this paper, we shed light on what kind of linguistic generalizations neural networks are capable of learning from data. We report on an investigation into how consonant gradation, a particular morphophonological alternation which is common in Finnish and other Uralic languages, is encoded in the hidden states of an LSTM encoder-decoder model trained to perform word inflection. Specifically, we train character-based sequence-to-sequence models for inflection of Finnish nouns into the genitive case, an inflection type which commonly triggers consonant gradation. Consonant gradation is a morphophonological alternation in which the voiceless stops p, t, and k are lenited in certain positions (see Section 3 for further details). We first demonstrate that inflection networks tend to learn an abstract representation for consonant gradation, where the alternation is triggered by the same dimensions in encoder hidden states regardless of which stop p, t, or k undergoes gradation. This echoes the treatment of gradation in the linguistic literature (Hakulinen et al., 2004, §41). Nevertheless, we also find evidence that this behavior is not universal and that networks can sometimes fail to generalize gradation, instead learning to represent gradation using distinct dimensions for each stop p, t, and k. Our second contribution is to show that networks can learn a general representation encompassing both so-called quantitative gradation and qualitative gradation (both are further described in Section 3). This presents further evidence that encoder-decoder models can learn phonological representations that group linguistic generalizations targeting different sounds.
As our third contribution, we show evidence of a remarkable property whereby the directionality of gradation is encoded as positive or negative hidden state activations. Consonant gradation is called direct when the base form of a noun displays the strong grade (such as kk) and the genitive form displays the weak grade of a stop (such as k). In inverse (or 'strengthening') gradation, the opposite alternation occurs. We find hidden state dimensions which encode the direction of gradation through a positive or negative activation. This behavior is demonstrated in Figure 1, where a negative activation of dimension 487 in the encoder hidden state marks inverse gradation of a stop, and a positive activation instead marks direct gradation (see Section 6 for further discussion of this phenomenon).

Related Work

Interpretation of neural representations in recurrent neural models has been an active area of research over a long period of time, starting with Elman (1990). However, representations in models of phonology have received less attention than many other subfields of NLP. Rodd (1997) is an early example. A subsequent study (2019) investigates phone embeddings learned using word2vec (Mikolov et al., 2013) on simulated data, showing that phone embeddings capture phonemic and allophonic relationships, and that they capture co-occurrence restrictions well for vowels while largely failing to do so for consonants. Our encoder representations, in contrast, are able to capture these co-occurrence restrictions. Beguš (2020b) investigates representations learned by a generative adversarial network, or GAN (Goodfellow et al., 2014), trained on audio recordings of speech, showing that some of the latent variables of the GAN correspond to phonological features of the speech signal: specifically, the presence or absence of the fricative [s] in the output of the network and the amplitude of frication. They show that manipulation of the variables changes these features in a predictable manner. Similarly to our work, Beguš (2020b) also scales state activations and observes the effect on the output of the network. In a related investigation of reduplication, Beguš (2020a) trains GAN models on speech and identifies variables which trigger reduplication in the speech signal. Extensive work exists on linguistic probing experiments for neural representations (Conneau et al., 2018a,b; Clark et al., 2019). A recent probing paper by Torroba Hennigen et al. (2020) is more directly related to our work. They present a decomposable probe for finding small sets of hidden states which encode linguistically relevant information, particularly morphosyntactic information. Our work shares the aim of not only identifying whether information is present in a neural system, but also examining how it is represented. However, we additionally perform experiments on manipulating network activations and examine how such manipulations influence the outputs of the network. Our approach was inspired by the now-classic paper on visualization and interpretation of recurrent networks by Karpathy et al. (2015), in that we also seek individual interpretable dimensions. The work by Dalvi et al. (2019) on analyzing individual neurons in networks trained for linguistic tasks (POS tagging as well as semantic and morphological tagging) is more closely related to the present work.
They present a general methodology for uncovering neurons which encode linguistic information by training a classifier to predict linguistic features of the input based on the representations generated by the network. They also show that it is possible to manipulate specific neurons to influence the predictions of the network.

Consonant Gradation

Consonant gradation (CG), common in many Uralic languages, is a set of assimilation and lenition processes, usually targeting the final syllable in a word stem. Historically the trigger for the alternation has been purely phonological, but in Finnish the alternation is no longer entirely predictable from the phonological structure (Karlsson, 2017). The trigger for gradation is usually an affix that closes the final syllable, such as the genitive -n, e.g. katto ∼ katon ('roof' sg. nom. ∼ sg. gen.). The overall process is divided into quantitative gradation, where, for example, the geminates pp, tt, kk alternate with their non-geminate counterparts p, t, k, and qualitative gradation, where a large variety of lenition and assimilation processes are found. For example, strong grade k can alternate with weakened j, v, g, etc. See Table 1 for a summary of the types of gradation processes found in our data set. The lenited or elided forms are commonly called the weak grade (e.g. katon) and the alternant the strong grade (e.g. katto). Sometimes the weak and strong grades appear in the inverse position, i.e. the weak grade appears with open syllables, as in rike ∼ rikkeen ('offense' sg. nom. ∼ sg. gen.). While quantitative gradation remains productive in the language, many stems, from more recent loanwords in particular, do not tend to alternate qualitatively; for example auto ∼ auton, *audon ('car' sg. nom. ∼ sg. gen.). Speakers must therefore know the lexical status of each stem to inflect it correctly. Our data set includes both gradating and non-gradating lexemes. An advantage of studying Finnish consonant gradation in this context is that the set of sound changes is very diverse, yet the trigger for all of them is the same. Also, the Finnish writing system is very phonemic and surface-oriented, and therefore no conversion to an IPA representation is necessary to reveal the sound changes that occur as a result of gradation. Of particular interest to us is that there are many similar-looking alternations in Finnish that are not a result of consonant gradation but of paradigmatic variation. For example, varis ('crow' sg. nom.) is inflected variksen in the sg. gen. form. Note the similarity of this alternation to the actual CG case of liike ('motion' sg. nom.) ∼ liikkeen (sg. gen.), which also involves a ∅ ∼ k alternation. It is therefore of some interest to observe whether neural inflection models encode the two cases differently in some respect. In total, we count 17 different types of lenition or fortition falling under the rubric of consonant gradation in our data set; an example of each type is shown in Table 1.

Methods

This section presents our nominative → genitive inflection models and our approach to finding encoder hidden state dimensions which are associated with consonant gradation.

Inflection Models

As our inflection model, we use the well-known attentional BiLSTM encoder-decoder model which was presented by Bahdanau et al. (2014) and first applied to inflection by Kann and Schütze (2016).
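For concreteness, the encoder side of this setup (matching the hyperparameters given later in Training Details: 500-d character embeddings, a 2-layer BiLSTM with 250 units per direction) can be sketched in PyTorch. This is an approximation of the OpenNMT configuration, not the authors' exact implementation:

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Character-level BiLSTM encoder. Each position's hidden state is
    the concatenation of a 250-d forward and a 250-d backward state,
    giving the 500-d vectors analyzed in the experiments."""
    def __init__(self, vocab_size, emb_dim=500, hidden=250, layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                            bidirectional=True)

    def forward(self, chars):
        # chars: (seq_len,) LongTensor of character indices
        emb = self.embed(chars).unsqueeze(1)   # (seq_len, 1, emb_dim)
        states, _ = self.lstm(emb)             # (seq_len, 1, 2 * hidden)
        return states.squeeze(1)               # dims 0-249 fwd, 250-499 bwd
```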
This model transduces a nominative input form, represented as a sequence of characters x[1:T] of length T, into a genitive output form. The encoder produces a hidden state for every position in the input sequence. Due to the bidirectionality of the encoder, each hidden state vector is a concatenation of a forward state f_t ∈ R^n and a backward state b_t ∈ R^n. We refer to the vectors f_t as hidden states and to the elements of the vectors as dimensions.

Finding Dimensions Associated with Gradation

Our aim is to investigate encoder hidden state dimensions d which are associated with gradation. Let a_X(d), defined in Eq. (1), be the mean activation for dimension d over a set of encoder hidden states X:

a_X(d) = (1 / |X|) Σ_{f ∈ X} f[d]    (1)

For each dimension d, we extract the mean activation a_G(d), where G is the set of encoder hidden states at positions where gradation occurs. As explained in Section 3, gradation applies to the final stop in word forms which undergo gradation. Usually this is position T − 1 in a string of length T, as in tupa 'cottage sg. nom.', where p undergoes gradation, but it can also be position T − 2, as in the form ratas 'wheel sg. nom.', where t undergoes gradation. The mean activation a_G(d) is compared to the activation a_N(d) of dimension d at the penultimate position T − 1 in base forms of length T which do not undergo gradation. In order to specifically capture dimensions which encode gradation, as opposed to simply encoding consonants, we limit this examination to base forms like kana 'chicken sg. nom.' and auto 'car sg. nom.', where the penultimate character is a consonant. We retrieve the top-N dimensions d where the difference in mean activation |a_N(d) − a_G(d)| is maximized and consider these candidate dimensions for gradation; a minimal sketch of this procedure follows the data description below.

Data

Our dataset was produced by taking the most frequent 5,000 lexemes tagged as singular nominative nouns from the Turku Dependency Treebank (Haverinen et al., 2014) and generating the singular genitive forms using the OmorFi finite-state morphological transducer (Pirinen, 2015). We excluded compound nouns (e.g. ammattikorkeakoulututkinnoista 'from the professional high-school examinations') and words marked as nouns which contained punctuation or numerals (e.g. G8-neuvottelut 'G8 negotiations', 2000-luvulla 'in the 2000s', °C:ssa 'in °C', etc.). Loan words were included, both unadapted, such as workshop and bungalow, and partially or fully adapted, such as brosyyri 'brochure' and samppanja 'champagne'. This gave a total of 4,797 nominative-genitive pairs. We randomly ordered them and split them into disjoint sets: 90% for training (4,317 pairs) and 10% for validation. We then took the validation set (479 pairs) and annotated it for: gradation (yes, no), type of gradation (qualitative, quantitative), consonant (p, t, k), and direction (direct, inverse). This gave a total of 84 examples of nouns exhibiting consonant gradation. This set was heavily skewed towards t gradation (54 out of 84 examples), which follows character-level frequency patterns in Finnish: in the treebank, t appears 122,821 times, k appears 64,513 times, and p appears 23,130 times. We therefore randomly sampled another 84 words from the frequency list which were not found in the training data or in the existing validation set and which contained p and k, annotated them, and added them to the validation set. Statistics on the composition of the hand-annotated dataset can be found in Table 2, and the full data is freely available on GitHub (https://github.com/mpsilfve/gradation).
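A minimal NumPy rendering of the dimension-selection procedure above (variable names are hypothetical; this is an illustration, not the authors' released code):

```python
import numpy as np

def mean_activation(states):
    """a_X(d): per-dimension mean over a set of hidden states.
    states: (num_positions, hidden_dim) array."""
    return states.mean(axis=0)

def top_gradation_dims(grad_states, control_states, n=5):
    """Rank dimensions by |a_N(d) - a_G(d)|: G collects states at
    positions undergoing gradation, N collects penultimate-consonant
    positions of non-gradating base forms."""
    diff = np.abs(mean_activation(control_states)
                  - mean_activation(grad_states))
    return np.argsort(diff)[::-1][:n]

# e.g. with 500-d states: dims = top_gradation_dims(G, N, n=5)
```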
Experiments and Results We investigate the representation of consonant gradation in encoder hidden states in the following way: as explained in Section 4.2, we identify individual dimensions in encoder hidden states which activate strongly during gradation regardless of the identity of the consonant undergoing gradation. We then investigate the association of these states using two experiments: we (1) perform significance tests on a held-out dataset to determine if the states activate significantly more strongly when gradation occurs, and (2) scale the state activations and observe the effect on the output of the network. Training Details We train ten encoder-decoder models with different random initializations for inflection using the OpenNMT toolkit (Klein et al., 2018). We use a 2-layer BiLSTM encoder with hidden dimension 250. Due to the bidirectionality of the encoder, this results in 500-dimensional hidden states (consisting of a forward and a backward hidden state). Our model uses 500-dimensional character embeddings both in the encoder and decoder, and we use an attentional decoder with 250-dimensional hidden states. The model is trained for a total of 3,000 steps using stochastic gradient descent and a batch size of 64. See Figure 3 for a plot of the development accuracy during the training process. As can be seen, changes in development accuracy are modest after training step 2,000. We report inflection accuracy for our ten inflection models measured on held-out data in Table 3. The accuracy is reported separately for forms undergoing gradation and forms not undergoing gradation. In addition, we report an overall accuracy for all forms. We can see that the mean performance is close to 95% for all forms, and performance tends to be higher on forms undergoing gradation than on other forms. Investigation of State Activations We randomly split our development set into two disjoint parts of equal size. We use the first part of the development set to discover the top-5 encoder hidden state dimensions which are strongly associated with gradation (as described in Section 4.2). The rest of the development set is used for significance testing. We perform a two-sided t-test to check if the mean activations of our top-5 dimensions differ significantly (at the 99.5% significance level) between positions which undergo gradation and positions which do not undergo gradation. Table 3: Percent inflection accuracy for 10 NOM to GEN models trained using different random seeds. The column # States refers to the number of states found in Table 4 that have significant activations for all gradation types. As explained in Section 4.2, we limit this examination to nominative forms where the penultimate character is a consonant to better home in on gradation. Table 4 shows the results separately for p, t and k gradation. The table also shows results for qualitative and quantitative gradation. We can see that eight of the ten models contain at least one dimension where activation is significantly stronger for all stops p, t and k undergoing gradation than for other stem-final consonants, indicating that these states are associated with gradation in general rather than with gradation of one of the individual consonants p, t, or k.
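The per-dimension significance test just described can be sketched as follows. The paper specifies a two-sided t-test at the 99.5% significance level but not whether equal variances were assumed, so we use scipy's default independent two-sample test; the threshold alpha = 0.005 is our reading of that level.

```python
import numpy as np
from scipy.stats import ttest_ind

def significant_dimensions(grad_states, nongrad_states, candidate_dims, alpha=0.005):
    """Two-sided independent t-test per candidate dimension: does the mean
    activation differ between gradation positions and consonant-final
    non-gradation positions?  Returns (dimension, t, p) for those passing."""
    results = []
    for d in candidate_dims:
        t_stat, p_val = ttest_ind(grad_states[:, d], nongrad_states[:, d])
        if p_val < alpha:
            results.append((d, float(t_stat), float(p_val)))
    return results
```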
We note that these dimensions also typically activate both for qualitative and quantitative gradation, indicating that the network has learned an abstraction covering both types of gradation. Scaling State Activations As a direct test of the effect of hidden state dimensions on gradation, we scale the activations of dimensions which are strongly associated with gradation. Our hypothesis is that negatively scaling these dimensions will prevent forms from undergoing gradation. We experiment on a dataset consisting of all development examples which undergo gradation. For each nominative input form such as luukku, we identify the correct gold standard genitive form luukun (where the kk → k alternation has applied) and an alternate output form *luukkun which is correct apart from the fact that the form has not undergone gradation. We then compute (1) the number of gold standard forms, (2) the number of alternate forms, and (3) the number of nonce forms generated by our models. Nonce forms here refer to erroneous outputs like *luukuukuukkun which belong to neither category (1) nor category (2). We scale the hidden state activations at positions where gradation occurs, that is, at the final stop in the nominative form, before feeding the encoder hidden states into the decoder. For each input form, we scale the top-N encoder hidden states which are associated with gradation according to the mapping a → x · a, where x varies between 1 and −25. The number of states which are scaled (that is, N) is tuned for maximal effect on the number of alternate forms which are generated. Figure 4 shows the results for the scaling experiment when tuning N. The first graph shows that for most models the number of alternate forms first increases as the scaling factor x approaches −25, and then gradually decreases. As the number of alternate forms increases, the number of gold standard forms undergoing gradation naturally decreases, as demonstrated by the second graph. We also see an increase in the number of nonce forms which do not belong to either category. This is to be expected, as scaling represents a deviation from the learned model weights which disturbs the network. The effect of scaling varies between models: when scaling activations for Model 9, over half of the output forms do not undergo gradation. In contrast, for Model 7, the best scaling factor only produces around 7% of non-gradating output forms. Crucially, however, we do see an effect for nearly all models (apart from Model 8). Contrast this with Figure 5, which shows results when scaling a set of five random states instead of states which are associated with gradation: scaling of randomly sampled states has very little if any effect on the number of alternate forms produced by the models. Based on the graphs in Figure 4, scaling has a very limited effect on Model 8. Even when scaling by x = −25, there is only a small decrease in the number of gold standard forms and a corresponding small increase in nonce forms. This might be evidence of a more redundant representation of information in Model 8, whereby scaling a few states will not strongly perturb the network. Table 4: Mean differences in activation strength for dimension d, where we first find the top-5 states associated with gradation using 50% of the development data and then perform significance tests using the remaining 50% of the development data. We present results for 10 different random initializations of model parameters.
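The tensor manipulation at the heart of the scaling experiment is simple; the sketch below applies the mapping a → x · a to the chosen dimensions at the gradation position. Hooking this into the OpenNMT decoder requires intercepting the encoder output during translation, which we omit, and the function name is ours.

```python
import numpy as np

def scale_gradation_states(encoder_states, position, dims, x):
    """Apply a -> x * a to the selected dimensions `dims` of the encoder
    hidden state at `position` (the final stop of the nominative form)
    before the states are passed to the decoder.

    encoder_states: array of shape (T, n), one hidden state per character.
    """
    scaled = encoder_states.copy()
    scaled[position, dims] *= x
    return scaled

# e.g. scale the top-5 gradation dimensions by -25 at position T - 1:
# scaled = scale_gradation_states(states, len(word) - 1, top_dims, -25.0)
```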
We compare activation when k, p or t gradation occurs to activation at -CV word endings where gradation does not occur. We also report results for qualitative and quantitative gradation irrespective of the consonant undergoing gradation. Statistically significant differences in activation strength at the 99.5% significance level are shown in bold face. Dimensions with significantly stronger association for all stops as well as for qualitative and quantitative gradation are marked using a gray box. One dimension is of particular interest in connection with inverse gradation: dimension 487 in model 3. This dimension displays positive activation for consonants undergoing direct gradation, as in laukku 'bag sg. nom.' ∼ laukun 'bag sg. gen.'. Remarkably, the state displays negative activation for consonants undergoing inverse gradation, as in the example lauseke 'phrase', where k is strengthened into a geminate kk, resulting in the genitive form lausekkeen 'phrase sg. gen.'. This effect can be seen both in forms where quantitative and in forms where qualitative gradation occurs. However, as the example basilika 'basil' in the third heat map demonstrates, dimension 487 can also activate strongly when no gradation occurs. (The form basilika is a loan word and would probably undergo gradation if it were a native Finnish word. It is noteworthy, however, that regardless of the strong activation of state 487, our model still correctly inflects basilika into basilikan instead of applying gradation, which would give a form like *basilijan or *basilian.) This prompted us to investigate hidden state activations more directly using the scaling experiments described in Section 6.3. Figure 5: The amount (in %) of alternate outputs not displaying gradation when five randomly sampled encoder hidden dimensions are scaled. Figure 1 shows a scatter plot of two encoder hidden state dimensions (487 and 484 in model 3) which activate strongly during gradation. Each point in the plot corresponds to one example in our development dataset. Clearly, examples which do not undergo gradation cluster around (0, 0). (The single t at (0, 0) represents the pair olut ∼ oluen, where t → ∅; this is an extremely infrequent gradation type.) In contrast, gradation for k and p leads to a positive activation for state 484, whereas t-gradation gives a negative activation. Moreover, direct gradation results in a positive activation for state 487 and inverse gradation gives a negative activation. Examples which do not undergo gradation can also have high values for 484 (> 0.4). Many of these examples end in -jV, -vV or -mV, endings which could in principle host inverse gradation, though it happens not to occur in these particular forms. Examples where the activation for 484 is low (< −0.5) span a small number of forms ending in -tV, -bV, and -gV. There is also a substantial number of non-gradating forms where the activation for 484 is > 0.5. Most of these fall into the linnoitus 'fortress' ∼ linnoituksen 'fortress sg. gen.' pattern, where a k is inserted in the penultimate syllable. This alternation bears great resemblance to gradation, as mentioned in Section 3. There are also a few examples of the type tase 'balance sheet' ∼ taseen 'balance sheet sg. gen.', where the stem-final vowel is doubled, displaying large activation for 484. This is perhaps somewhat harder to explain. However, note that this vowel doubling frequently co-occurs with gradation, as in tarvike 'accessory' ∼ tarvikkeen 'accessory sg. gen.'.
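A scatter plot of the kind discussed above (dimension 487 against 484, coloured by annotation) takes only a few lines; this sketch assumes the per-example activations and labels have already been collected, and the variable names are ours.

```python
import matplotlib.pyplot as plt

def plot_dimension_pair(activations, labels, d1=487, d2=484):
    """Scatter the activations of two hidden dimensions, one point per
    development example, coloured by label ('none', 'k', 'p', 't', ...)."""
    for label in sorted(set(labels)):
        xs = [a[d1] for a, l in zip(activations, labels) if l == label]
        ys = [a[d2] for a, l in zip(activations, labels) if l == label]
        plt.scatter(xs, ys, label=label, alpha=0.6)
    plt.xlabel(f"dimension {d1}")
    plt.ylabel(f"dimension {d2}")
    plt.legend()
    plt.show()
```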
Discussion and Conclusions In our experiments we found that the system would sometimes output a gradated form even when the exact type of gradation was not present in the training data, for example bambu ∼ bammun ('bamboo' sg. nom. ∼ sg. gen.). Since Finnish natively lacks b and g, examples of gradation with these consonants are rare. However, it is indeed the case that loanwords that include such voiced stops do undergo gradation, e.g. dubata ∼ dubbaan ('to dub' inf. ∼ 1p sg. pres.) (Voutilainen, 2008). Since native Finnish speakers seem to extend gradation from voiceless stops to their voiced counterparts in loanwords, the question of whether neural models can exhibit such generalizing behavior as well is an interesting one. Our initial investigations into whether the similarity of the learned embeddings for p and b could trigger such generalizations across similar sounds failed to identify a clear reason for the behavior, and we leave a detailed study of this to future work. We have presented an investigation of encoder representations of phonological alternations, specifically consonant gradation in Finnish. We found evidence of a generalized representation of gradation covering all stops which undergo gradation and the different types of gradation. We also found that scaling hidden states can "switch off" gradation, prompting the model to generate alternate forms which do not display gradation. Moreover, the direction of gradation can be encoded as positive vs. negative hidden dimension activation. A Appendix: Scaling experiments This appendix contains all results for the scaling experiment presented in Section 6. Figure 6 presents the amount of alternate forms produced by each model when the top 1-5 gradation-encoding hidden state dimensions are scaled. Figure 7 presents results for the gold standard forms undergoing gradation. For each model, we also present results for scaling a set of five randomly selected encoder hidden state dimensions. As Figures 6 and 7 show, scaling dimensions associated with gradation has a clear positive effect on the number of output forms which do not undergo gradation. In contrast, scaling randomly selected encoder hidden state dimensions has a small overall effect on the number of these output forms, although it does tend to reduce the number of gold standard outputs undergoing gradation. This means that the number of nonce output forms still increases when the scaling factor approaches −25, as might be expected, because we are deviating from the learned model's parameters. Figure 7: Results when scaling the activations for the top 1-5 (T1-T5) encoder dimensions associated with gradation for each of our ten models M1-M10. These graphs show the amount of gold standard outputs undergoing gradation which are produced when encoder dimensions are scaled. For comparison, the green TR graph shows the effect of scaling 5 randomly selected encoder dimensions.
5,910.8
2021-06-01T00:00:00.000
[ "Linguistics", "Computer Science" ]
Narrow-Band Thermal Photonic Crystal Emitter for Mid-Infrared Applications Mid-infrared (MIR) on-chip sensing on Si has been an active topic of research in recent years due to the excitation of vibrational and rotational bands specific to materials in this range and its immunity to visible light and electromagnetic interference. For on-chip applications, the integration of all the optical components, including the MIR source, is crucial. In this work, we introduce a slab photonic crystal (PhC) thermal source where the birthplace and the filtering of the photons occur in the same region. Due to the forbidden frequency bands and the high density of states at the band edge, it provides both electrical efficiency and filtering performance. Introduction Silicon photonics technology, where the optical circuits are realized in CMOS technology and mainly in the telecommunication spectra, has opened up a new avenue in the past two decades for achieving on-chip optical devices. Advances in this domain at the telecommunication wavelengths have laid the groundwork for utilizing this technology in other spectral ranges, such as the visible and the MIR. For sensing applications, the MIR domain is of great interest, since the absorption peaks due to photon-phonon coupling of many simple molecules fall within this regime [1]. It has been shown that high-performance, though expensive and bulky, spectroscopy can reach a sensitivity down to parts per trillion [2,3]. However, reaching such performance in a fully integrated system-on-chip device is far from trivial [4,5]. One of the challenges that had to be addressed in this domain is to provide on-chip and low-cost optical sources. It has been shown that group III-V interband cascade and quantum cascade devices can provide MIR sources [6][7][8][9][10]. However, these compounds cannot be efficiently grown onto the silicon platform, which makes device manufacturing complicated and expensive. In this work, a thermal source in a photonic crystal lattice filter is studied for an on-chip gas sensing application based on Si photonics [11,12]. Such a system is fully compatible with the routines of CMOS fabrication plants and can result in functional chips at a reduced price. Our approach is based on the controlled thermal generation of photons at the wavelength of interest in a series of slab PhCs with different lattice parameters. The forbidden bandgaps of the PhCs are designed and arranged so that only photons in a narrow-band window of interest are allowed to be emitted or to leave the emitter for the rest of the on-chip devices. In this work, the optical analysis was performed with the finite-difference time-domain (FDTD) method using the commercially available RSoft package, and the thermal properties of the system were analyzed using COMSOL Multiphysics. System under Study A system of PhC holes on a Si slab has been developed to serve as a source for a Si slab waveguide. This system will later be adapted to an evanescent-field silicon waveguide platform for sensing CO2 gas; it is therefore designed to emit at a wavelength of 4.26 µm [11]. A PhC lattice with a band gap region can be considered as a band-stop filter. By combining two PhC filters with different band stops, a narrow passband within a certain frequency window can be designed. In this work, our working window is 0.2 < ωa/2πc < 0.5, where c is the speed of light in free space, ω is the angular frequency and a is the lattice constant.
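Since the working window is expressed in the dimensionless frequency ωa/2πc = a/λ, a chosen operating point maps directly to a physical lattice constant. The back-of-the-envelope sketch below applies this relation for the 4.26 µm design wavelength; it is illustrative and not a calculation from the paper.

```python
# Normalized frequency: omega * a / (2 * pi * c) = a / lambda0,
# so the physical lattice constant is a = (a/lambda) * lambda0.
LAMBDA0 = 4.26e-6  # CO2 absorption wavelength targeted by the design, in metres

def lattice_constant(normalized_freq, wavelength=LAMBDA0):
    """Lattice constant (m) that places `wavelength` at the given a/lambda."""
    return normalized_freq * wavelength

# Band-edge values quoted for the filters in the next section:
for f_norm in (0.27, 0.286, 0.295):
    print(f"a/lambda = {f_norm:.3f}  ->  a = {lattice_constant(f_norm) * 1e6:.3f} um")
```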
Figure 1a shows the arrangement of the proposed thermal emitter. The system comprises four main regions based on PhCs with various dimensions: filters 1 and 2, a mirror and a matching interface. The choice of the dimensions of the two filters and the mirror section is based on the gap map of Figure 1c. In this diagram, the first transverse magnetic (TM) gap of a hexagonally-arranged photonic crystal lattice is extracted with respect to the variation of the ratio of the hole radius to the lattice constant, r/a. Filter 1, with r/a = 0.45, is considered to be doped poly-Si with a concentration of 1 × 10^20 cm−3 to serve both as the emitter and as a filter. In other words, the birthplace of the thermally generated photons is a lattice which suppresses the emission of those photons with an energy lying within its band gap. Based on the gap map of Figure 1c, filter 1 acts as a low-pass filter in our working window and suppresses photons of a normalized frequency higher than ωa/2πc = 0.295. Being conductive, filter 1 carries an electric current in order to heat the region up to relatively high temperatures. The high temperature of this region, shown in Figure 1b as a function of the input electric power density, induces Planck's black-body radiation. Since the generation of these photons occurs in filter 1, no photon with an energy within the corresponding bandgap has the chance of propagating in the crystal. As a result, the emission from such a structure is already selectively filtered around the working frequency band. The second filter, with r/a = 0.3, acts as a high-pass filter within our working area and blocks the propagation of photons with a normalized frequency lower than ωa/2πc = 0.27. The mirror region, with r/a = 0.35, has a band gap covering our frequency of interest, and therefore it reflects the photons of energy 0.286 < ωa/2πc < 0.296 travelling in the (−x) direction back into the guiding area. Finally, the matching section provides an adiabatic change of the PhC mode to the slab mode, reducing unwanted reflection due to momentum mismatch. The centers of the holes of the adiabatic region follow the positions of a lattice with the same constant as that of filter 2, while the radii of its holes decrease following a taper of the form r(x) = rf + (r0 − rf)(1 − x/L)^β, where L is the length of the matching part, r0 is the radius of the holes in filter 2, rf is the smallest radius of the holes and β = 0.5, 1, or 1.5 is a damping coefficient which determines whether the transition is concave, linear or convex, respectively. Figure 1d shows three graphs for which the transition has changed from concave to linear and convex. The damping rate α, for which the intensity drops exponentially along the propagation with a factor of e^(−αx), is α = 1.2 × 10^−4 µm^−1, 8.4 × 10^−4 µm^−1 and 7.7 × 10^−4 µm^−1 for the three different kinds of matching with similar length L, respectively. At the free-space wavelength of 4.26 µm, according to this calculation, the concave matching provides the least in-plane loss for this structure, which suggests that a concave transition is a more suitable choice for our design. Results and Discussion To model the emittance of the PhC emitter we use Kirchhoff's law, stating that emittance and absorptance are equivalent [13]. The reflection and/or transmission spectra of each individual part of the system are shown in Figure 2a-c. For the simulation, a TM mode is first excited in a slab and its reflection and transmission are recorded.
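Given the recorded reflection R and transmission T of a section, the absorptance follows as A = 1 − R − T (assuming negligible out-of-plane scattering), and by Kirchhoff's law this equals the emittance. A minimal sketch of that post-processing step:

```python
import numpy as np

def emittance_from_RT(reflection, transmission):
    """Spectral emittance via Kirchhoff's law: emittance = absorptance,
    with A = 1 - R - T computed from recorded spectra on a common
    wavelength grid (scattering losses assumed negligible)."""
    R = np.asarray(reflection, dtype=float)
    T = np.asarray(transmission, dtype=float)
    A = 1.0 - R - T
    return np.clip(A, 0.0, 1.0)  # guard against small numerical residues

# Illustrative spectra: a band where R and T both dip yields an emission peak.
R = np.array([0.95, 0.90, 0.10, 0.88, 0.96])
T = np.array([0.04, 0.05, 0.10, 0.06, 0.03])
print(emittance_from_RT(R, T))  # peak emittance 0.8 in the low-R, low-T band
```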
The structures were terminated with a perfectly matched layer (PML) in order to prevent unwanted reflections. Figure 2a shows the reflection from filters 1 and 2. The narrow wavelength range with low reflection for both filters corresponds to the emittance wavelength range of the emitter. Figure 2b shows a close to 100% reflection of the mirror throughout our working wavelength window. Figure 2c shows that almost 80% of the intensity of the light produced in the emitter passes through the matching layer and is converted to the slab mode. The absorption and energy density spectra of the whole system, for the case of filter 1 being doped and undoped, respectively, are shown in Figure 2d, based on which the system provides a well-defined absorption around the wavelength of interest (λ0 = 4.26 µm) and suppresses the other wavelengths around this region. The small mismatch between the energy density and absorption spectra in Figure 2d is due to the difference in the refractive index of doped and undoped silicon. Since the ensemble emissivity of such a system is equal to its absorptance, the result of Figure 2d also represents its emitting performance. Conclusions In this work, we presented the design of an on-chip Si-based thermal emitter based on PhC band engineering. This emitter is intended to be used as a mid-infrared source for CO2 gas sensing. The system is based on filtering out the broadband Planck black-body radiation so that the sensing path only carries the photons with a high absorption probability for CO2, and it prevents the interaction of light with other molecules at other wavelengths. The implementation of the proposed structure can be easily adapted to standard MEMS technology, and the dimensions can be updated based on the frequency range of interest. Author Contributions: B.A. and R.J. developed the concept, performed the computations and wrote the manuscript; B.J. supervised the project and contributed to writing the manuscript.
2,093
2018-11-22T00:00:00.000
[ "Physics" ]
Enhanced Frequency Stability of SAW Yarn Tension Sensor by Using the Dual Differential Channel Surface Acoustic Wave Oscillator This paper presents a 60 MHz surface acoustic wave (SAW) yarn tension sensor incorporating a novel SAW oscillator with high frequency stability. A SAW delay line was fabricated on an ST-X quartz substrate using the unbalanced-split electrode and bi-directional engraving slots. A dual differential channel delay-line surface acoustic wave oscillator is designed and implemented to test yarn tension, which can effectively remove the interference of temperature, humidity, and other peripheral factors through the differential design. The yarn tension sensor using the surface acoustic wave has high-precision characteristics, and the SAW delay line oscillator is designed to ensure the test system's stable operation. The effect of time and tension on oscillator frequency stability is studied in detail, and the single oscillator and the dual differential channel system were tested, respectively. After using the dual differential channel system, the short-term frequency stability is reduced from 1.0163 ppm to 0.17726 ppm, the frequency accuracy of the tension sensor is improved from 134 Hz to 27 Hz, and the maximum frequency jump is reduced from 2.2395 ppm to 0.45123 ppm. Introduction The yarn tension is an essential indicator of the quality of the yarn product, which directly affects the balance and stability of the product quality, the production efficiency, and the subsequent processing [1]. Appropriate yarn tension makes the winding bobbin roll efficient, increasing the production efficiency of winding, twisting, weaving, and knitting [2][3][4]. Therefore, selecting an appropriate detection circuit is necessary to ensure the precision and stability of the yarn tension sensor's output signal [5]. At present, the commonly used sensors for detecting yarn tension are based on phase detection systems [6], amplitude detection systems [7], the phase-locked loop (PLL) detection method [8], direct digital synthesizer (DDS) scanning detection [9], and the mixing detection method [10]. Among them, phase detection and amplitude detection systems are used in the largest numbers; however, both of them have low sensitivity [11]. Selecting PLL or DDS circuits makes the detection circuits very complicated [12]. In addition, to simplify the detection circuits and obtain high-precision and stable detection signals, the mixing detection circuit is presented [13]. The yarn tension sensor is based on the surface acoustic wave device and an oscillation circuit. Two factors affect the frequency stability of the SAW oscillator (SAWO): one is the performance of the SAW device, and the other is the oscillation circuit's noise [14][15][16]. This paper focused on the frequency stability of the SAWO and solved the following two critical problems: (1) The following measures are taken to minimize the second-order effects of the surface acoustic wave devices. First, this is accomplished by reducing the electrode reflection by using the unbalanced-split electrode [17]. Second, this is accomplished by reducing the sound-electricity reclamation (SER) by choosing ST-X quartz as the piezoelectric substrate [18]. Third, this is accomplished by reducing the interference of bulk acoustic waves (BAW) by engraving bi-directional slots on the piezoelectric substrate [19]. (2) To solve the temperature interference, the dual differential channel SAWO is designed.
The dual differential channel SAWO consists of the double delay-line oscillators, an integrated mixer circuit, and an LC low-pass filter. At the same time, the signal source circuit and voltage regulator circuits are designed, and an Agilent E5061A ENA-L network analyzer was then used to test and analyze the PCB board. This paper is organized as follows. After this introductory Section 1, Section 2 presents the principle of the SAWO dual differential channel circuit. The design and preparation of the SAW yarn tension sensor are shown in Section 3. In Section 4, the test and application of the dual differential channel of the SAWO are given. Conclusions are drawn in Section 5. Principle of the SAWO Dual Differential Channel Circuit Figure 1 shows the dual differential surface acoustic wave oscillator system. It can be divided into three parts: (1) Two identical SAWOs. One of the SAWOs has the yarn tension applied to its piezoelectric substrate, namely the detection channel; the other SAWO is the reference channel. (2) An integrated mixer. When the output frequency signals of the two SAWOs are fed to the mixer, the output signal of the mixer contains the frequency sum and the frequency difference. (3) An LC low-pass filter. When the output signal of the mixer passes through the low-pass filter, the output signal of the low-pass filter is the frequency difference (that is, the difference frequency component of the mixer's output signal). Figure 1. The schematic of the SAW yarn tension sensor with the dual differential circuit system.
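As a quick numerical illustration of parts (2) and (3), the sketch below multiplies two sinusoids near 60 MHz (as the AD835 multiplier does) and low-pass filters the product, leaving only the difference frequency. The specific frequencies are illustrative values chosen to give the 610 kHz offset reported later, and the filter is a generic second-order Butterworth, not the paper's actual LC design.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500e6                         # sampling rate, well above the ~120 MHz sum frequency
t = np.arange(0, 2e-4, 1 / fs)
f1, f2 = 60.446e6, 59.836e6        # illustrative SAWO frequencies, f1 - f2 = 610 kHz
v1 = np.sin(2 * np.pi * f1 * t)    # V1, initial phase 0
v2 = np.sin(2 * np.pi * f2 * t)    # V2, initial phase 0
v3 = v1 * v2                       # multiplier output: f1 - f2 and f1 + f2 components

b, a = butter(2, 5e6 / (fs / 2))   # second-order low-pass, 5 MHz cutoff
v4 = filtfilt(b, a, v3)            # only the difference frequency survives

spec = np.abs(np.fft.rfft(v4))
freqs = np.fft.rfftfreq(len(v4), 1 / fs)
print(f"dominant frequency of V4 ~ {freqs[spec.argmax()] / 1e3:.0f} kHz")  # ~610 kHz
```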
In Figure 1, SAWO1 generates the sine wave signal V1, and its output frequency is f1. SAWO2 generates the sine wave signal V2, and its output frequency is f2. Signals V1 and V2 enter the integrated mixer AD835, and the mixer output signal is V3, where V3 contains the frequency sum and frequency difference of f1 and f2. After the mixer output, signal V3 is input to the LC low-pass filter, the frequency sum of f1 and f2 is filtered out, and only the frequency difference of f1 and f2 is left. In this system, SAWO1 is used to measure the yarn tension, and SAWO2 is used as a reference. The output signal V1 of SAWO1 can be expressed as: V1 = U1 sin(2π f1 t + φ1), (1) where U1 is the effective value of the output voltage of SAWO1, f1 is the frequency of the output signal of SAWO1, and φ1 is the initial phase angle of the output signal of SAWO1. The output signal V2 of SAWO2 can be expressed as: V2 = U2 sin(2π f2 t + φ2), (2) where U2 is the effective value of the output voltage of SAWO2, f2 is the frequency of the output signal of SAWO2, and φ2 is the initial phase angle of the output signal of SAWO2. The mixing circuit adopts the integrated mixer AD835 as the core signal processing circuit. The AD835 is a voltage-output multiplier produced by Analog Devices, which implements the function W = XY + Z. The X and Y input signals range from −1 V to +1 V, and the bandwidth is up to 250 MHz. It is suitable for mixing the output signals of the dual SAWOs. When the output signals V1 and V2 pass through the integrated mixer AD835 circuit, the signals of the two SAWOs are mixed, and the signal V3 after mixing is: V3 = V1 V2 = (U1 U2 / 2) {cos[2π(f1 − f2)t + (φ1 − φ2)] − cos[2π(f1 + f2)t + (φ1 + φ2)]}. (3) In Equation (3), the mixed signal contains the f1 − f2 and f1 + f2 components. According to Figure 1, the mixer output signal V3 passes through the low-pass filter, and the output signal V4 is: V4 = KL (U1 U2 / 2) cos[2π(f1 − f2)t + (φ1 − φ2)], (4) where KL is the gain of the low-pass filter. Given the simultaneity of the input supply voltage Vi applied to the dual surface acoustic wave oscillator system and the symmetry of the system, the initial phases of SAWO1 and SAWO2 should satisfy: φ1 = φ2. (5) By substituting Equation (5) into Equation (4), the output signal V4 becomes: V4 = KL (U1 U2 / 2) cos(2π ΔfLPF t), (6) where ΔfLPF = f1 − f2 is the frequency difference of SAWO1 and SAWO2. According to Equation (6), when the dual differential SAWO is affected by external interference, the output signal is still ΔfLPF = f1 − f2. So, the dual-channel design can suppress external interference to a certain extent and realize the stability and anti-interference of the system. However, in practice, because the device structure cannot be completely symmetric, the component distribution is somewhat different, and the device parameters cannot be entirely consistent, the difference between the frequency f1 of SAWO1 and the frequency f2 of SAWO2 will not be exactly zero. Therefore, before the test, it is necessary to determine the basic calibration value f00 of the detection and the reference channels. SAW Delay Line As the core of the SAWO, the frequency response of the SAW device directly affects the performance of the whole circuit.
For excellent frequency characteristics of the SAW devices, it is necessary to select an appropriate piezoelectric substrate and to improve the side-lobe and bulk acoustic wave suppression ability. Therefore, the SAW devices are optimized in the following three aspects. (1) Choosing the unbalanced-split electrode to address the electrode reflection and side lobes of the SAW devices, as shown in Figure 2. A single-electrode IDT is arranged with a period of λ/2 (Figure 2a), and the regenerated waves (RW) are caused by the metal electrodes. In Figure 2b, MEL1, MEL2, MEL3, and MEL4 are the mass/electrical load reflections from the edge of each metal electrode, which are of the same phase. Therefore, the electrode reflection received at the centre of the transducer is the sum of all electrode reflections, as shown in Figure 2c. The unbalanced-split electrode widths are λ/16 and 3λ/16, with an interval of 2λ/16, as shown in Figure 2d. Figure 2e is the phase synthesis diagram of the unbalanced-split-electrode interdigital transducer (IDT). The total phase of the regenerated reflection wave and the mass load feedback is close to 180° (Figure 2f), effectively reducing the in-band ripple in the sensor's frequency response. The electromechanical coupling coefficient k² determines the suitable material for the SAW sensor. k² represents the energy conversion degree of the SAW piezoelectric material, which is related to the SER [20]. The larger k² is, the stronger the SER is. Therefore, a material with a small k² is chosen to ensure the IDTs of the SAW sensor can perform well. The common piezoelectric substrate materials and their k² values are listed in Table 1. (2) Choosing ST-X quartz as the piezoelectric substrate to decrease the SER. (3) Engraving bi-directional slots on the back of the piezoelectric substrate to address the interference of the BAW in SAW devices, as shown in Figure 3. The engraved bi-directional slots on the back of the substrate can block the propagation path of the BAW to a certain extent, reducing the influence of BAW propagation and improving the out-of-band suppression of the frequency response. In Figure 3, path 1 is electrode A's BAW, path 1' is the reflection of the DBAW before slotting, and path 1' (red dotted line) is the reflection of the DBAW after slotting. After slotting, the thickness of the piezoelectric substrate changes (from 0.5 mm to 0.45 mm, with a slot depth of 0.05 mm), which causes the strongest reflection of the DBAW to fall on the end of the central-area output IDT, thus significantly weakening the influence of the DBAW. The SAW design parameters are shown in Table 2, and the devices were fabricated on ST-X quartz substrates, as shown in Figure 4. Dual Differential Channel Circuit There are two kinds of transistor-based surface acoustic wave oscillator structures: the Pierce type and the Colpitts type low-noise oscillation circuit [21]. The crystal and the inductor of the Pierce oscillator are connected in series to form a series resonant circuit, which works at the series resonance [22]. The Colpitts oscillator is a parallel resonant circuit with a large total impedance and low frequency stability [23]. This paper uses the surface acoustic wave device as the oscillator's frequency output device, and the working frequency is high. Therefore, choosing the Pierce oscillator can improve the frequency stability of the surface acoustic wave device.
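For orientation, the physical electrode dimensions implied by the λ/16 and 3λ/16 split-electrode widths can be estimated from the 60 MHz design frequency. The SAW velocity on ST-X quartz is not quoted in the text; the ~3158 m/s figure below is a nominal literature value and only an assumption.

```python
V_SAW = 3158.0        # m/s, assumed nominal Rayleigh velocity on ST-X quartz
F0 = 60e6             # design centre frequency, Hz

lam = V_SAW / F0      # acoustic wavelength
print(f"wavelength         = {lam * 1e6:.2f} um")
print(f"narrow electrode   = lambda/16   = {lam / 16 * 1e6:.2f} um")
print(f"wide electrode     = 3*lambda/16 = {3 * lam / 16 * 1e6:.2f} um")
print(f"electrode interval = 2*lambda/16 = {2 * lam / 16 * 1e6:.2f} um")
```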
The dual differential surface acoustic wave oscillator system is shown in Figure 5. The power supply voltage of the oscillating circuit is +12 V, and the positive and negative power supply voltages of the mixing module AD835 are ±5 V, respectively. The +5 V is generated from the +12 V supply through an LM7805 regulator. Figure 5a shows the PCB layout of the circuit of the yarn tension sensor. The PCB size is 100 mm × 100 mm. In Figure 5b, C1X and C2X are the adjustable capacitors whose purpose is to determine the basic calibration value of the detection and the reference channels. The AD835 is a mixer from Analog Devices. Its operating frequency is 250 MHz, which meets the 60 MHz frequency requirement of the SAW device design. Test of SAW Delay Line An Agilent E5061A ENA-L radio frequency network analyzer was used to test the surface acoustic wave devices' frequency response characteristics. The frequency response of a group of surface acoustic wave devices was measured, as shown in Figure 6. Figure 6a shows the frequency characteristics of SAW-1, whose centre frequency is 59.836190 MHz. Figure 6b shows the frequency characteristics of SAW-2, whose centre frequency is 59.836494 MHz. The Basic Calibration Value of the Detection Channel and Reference Channel A KEYSIGHT 4000X high-performance hybrid digital oscilloscope (4 channels, 200 MHz) is used to test the frequency of the SAW devices, and the oscillation waveform is shown in Figure 7. In Figure 7, the yellow curve is the detection channel's waveform, and the green curve is the reference channel's waveform.
Since the circuit parameters of the two channels are the same, the oscillation frequency of both channels is 59.8 MHz (shown in the red box of Figure 7). When the environment temperature changes, the frequency of the output signal of SAWO2 is: f2 = f02 + ΔfT2, (7) where f02 is the centre frequency of SAWO2 and ΔfT2 is the frequency variation of SAWO2 caused by the temperature change. At the same time, the frequency of the output signal of SAWO1 is: f1 = f01 + ΔfT1 + ΔfF, (8) where f01 is the centre frequency of SAWO1, ΔfT1 is the frequency variation of SAWO1 caused by the temperature change, and ΔfF is the change in the centre frequency of SAWO1 caused by the yarn tension F. The output frequency of the low-pass filter is: ΔfLPF = f1 − f2. (9) By substituting Equations (7) and (8) into Equation (9), one obtains: ΔfLPF = f00 + ΔfT + ΔfF, (10) where f00 is a constant, f00 = f01 − f02, and ΔfT = ΔfT1 − ΔfT2 is the frequency difference between the two oscillators caused by temperature. To determine f00 in Equation (10), the basic calibration values of the detection channel and the reference channel should be tested. The determination method is given below. Adjust the capacitor Cx (in Figure 5b) to offset the oscillation frequency of the detection channel from that of the reference channel. This offset is a fixed value and does not change with environmental conditions. In this experiment, this offset value is defined as the basic calibration value η of the frequency difference output, which is the difference between the oscillation frequencies of the yellow curve and the green curve in Figure 8 (shown in the red box). When the yarn tension F is loaded onto the detection channel, the basic calibration value η is subtracted from the difference frequency output of the yarn tension sensor circuit. That is, the frequency difference component ΔfF is obtained. Output Signal of Low Pass Filter The mixing waveform is measured after the mixer AD835, as shown in Figure 9.
Due to the complex harmonic components, disorderly waveform, and broad spectral range of the signal shown in Figure 9, the difference frequency components at lower frequencies need to be extracted. Figure 10 shows the output waveform after the second-order low-pass filter. The difference frequency signal is a sine wave signal, and the oscillation frequency is 610 kHz (as shown in the red box in Figure 10), which is the difference frequency of the two waveforms in Figure 8 (as shown in the red box). Therefore, the basic calibration value of the measured detection channel and reference channel is η = 610 kHz. That is, f00 = 610 kHz, and Equation (10) becomes: ΔfLPF = 610 kHz + ΔfT + ΔfF. (11) Figure 10. Test waveform diagram of the differential frequency signal of the low-pass filter. In summary, after the complex harmonic signal output by the mixer passes through the low-pass filter, the high-frequency harmonics in the signal are significantly attenuated, and only a single difference frequency component is left. Thus, the detection function of the difference circuit is realized, and the yarn tension sensor can eliminate the influence of environmental interference. Test and Analysis of Dual Differential Channel Circuit Stability The stability of the oscillating circuit is an essential factor in determining the performance of a surface acoustic wave sensor. The frequency stability of a SAWO refers to the random frequency variation within a specific sampling time, which can be divided into long-term, medium-term, and short-term frequency stability. There are two ways to express short-term frequency stability: one is the time-domain representation, which is generally expressed by the Allan variance; the other is the frequency-domain representation, which can be represented by phase noise.
The Allan variance is commonly used to describe short-term frequency stability, and it is defined as: σ²(τ) = (1 / (2(N − 1))) Σ_{k=1}^{N−1} (f_{k+1} − f_k)², (12) where τ is the sampling interval, f_k is the k-th frequency sample, and N is the total number of samples. In the experiment, sampling is carried out k times at this interval, and the short-term frequency stability can be defined as: σ = (1 / fM) sqrt((1 / (2(k − 1))) Σ_{i=1}^{k−1} (f_{i+1} − f_i)²), (13) where fM is the average frequency. Equation (13) is used to estimate the short-term frequency stability of the oscillating circuit. Frequency Stability of SAWO1 Output Signal with Loaded Tension The output of SAWO1 with different tensions from 0 to 100 cN is tested at intervals of 10 cN. A continuous test was carried out for one hour, and the frequency fluctuation was recorded between 600 s and 3600 s. As shown in Figure 11, ten groups of data ranging from 10 cN to 100 cN were obtained from 600 s to 3600 s. In Figure 11, the X-axis is the measuring period from 600 s to 3600 s, frequency sampling is conducted every 100 s, and the Y-axis is the value of the frequency fluctuation near the centre frequency. According to the test data in Figure 11 and the centre frequency of SAW-1 (fS1−M = 59.836190 MHz in Figure 6a), the short-term frequency stability is obtained as: σ = 1.0163 ppm. Table 3 shows the data with the largest frequency fluctuation among the ten data groups, measured at a tension of F = 100 cN. According to the test data in Table 3, the max frequency jump of the detection circuit is stable at 134 Hz (at 700 s) from 600 s to 3600 s (Figure 11, red box), so the max frequency jump stability is: 134 Hz / 59.836190 MHz ≈ 2.2395 ppm. Frequency Stability of Low Pass Filter Output Signal with Loaded Tension The output of the low-pass filter with different tensions from 0 to 100 cN is also tested at intervals of 10 cN. A continuous test was carried out for one hour, and the frequency fluctuation was recorded between 600 s and 3600 s. As shown in Figure 12, ten groups of data ranging from 10 cN to 100 cN were obtained from 600 s to 3600 s. Figure 12. Frequency stability curve of the low-pass filter output loaded with tension (the red box marks the maximum over all the tests). In Figure 12, the X-axis is the measuring period from 600 s to 3600 s, frequency sampling is conducted every 100 s, and the Y-axis is the value of the frequency fluctuation near the centre frequency. According to the test data in Figure 12 and the centre frequency of SAW-2 (fS2−M = 59.836494 MHz in Figure 6b), the short-term frequency stability is obtained as: σ = 0.17726 ppm. Table 4 shows the data with the largest frequency fluctuation among the ten data groups, measured at a tension of F = 90 cN. According to the test data in Table 4, the max frequency jump of the detection circuit is stable at 27 Hz (at 3600 s) from 600 s to 3600 s (Figure 12, red box), so the max frequency jump stability is: 27 Hz / 59.836494 MHz ≈ 0.45123 ppm. Conclusions This paper presents a design for the dual differential channel SAWO to enhance the SAW yarn tension sensor's frequency stability. The surface acoustic wave devices are optimized in three aspects. First, this involves designing the unbalanced-split electrode to reduce the electrode reflection and side lobes of the oscillator. Second, this involves choosing ST-X quartz as the piezoelectric substrate to reduce the SER.
Third, this involves engraving grooves on the back of the piezoelectric substrate to reduce the BAW interference. The dual differential channel circuits are designed and manufactured using an AD835 mixer. The outputs of SAWO1 and of the low-pass filter with different tensions from 0 to 100 cN were tested at intervals of 10 cN. A continuous test was carried out for one hour, and the frequency fluctuation was recorded between 600 s and 3600 s. The conclusions are as follows: (1) The dual differential channel SAWO can enhance the frequency stability of the SAW yarn tension sensor. (2) Using the dual differential channel SAWO reduces the short-term frequency stability from 1.0163 ppm to 0.17726 ppm.
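The stability figures reported above can be reproduced mechanically from frequency samples. The estimator below follows our reconstruction of Equations (12)-(13), a two-sample, Allan-type deviation normalized by the mean frequency; the two printed ratios are the max-jump values quoted in the text.

```python
import numpy as np

def short_term_stability_ppm(freq_samples, f_center):
    """Relative short-term frequency stability per Eqs. (12)-(13):
    sigma = sqrt( sum((f_{k+1} - f_k)^2) / (2*(k - 1)) ) / fM, in ppm."""
    f = np.asarray(freq_samples, dtype=float)
    d = np.diff(f)
    sigma = np.sqrt(np.sum(d ** 2) / (2.0 * (len(f) - 1))) / f_center
    return sigma * 1e6

# The max-jump figures quoted in the text are simple ratios:
print(134 / 59.836190e6 * 1e6)  # SAWO1 alone         -> ~2.2395 ppm
print(27 / 59.836494e6 * 1e6)   # dual-channel output -> ~0.4512 ppm
```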
6,602.4
2023-01-01T00:00:00.000
[ "Engineering", "Materials Science" ]
Strange and heavy hadrons production from coalescence plus fragmentation in AA collisions at RHIC and LHC In a coalescence plus fragmentation approach we study the pT spectra of the charmed hadrons D0, Ds up to about 10 GeV and the Λc+/D0 ratio from RHIC to LHC energies. In this study we have included the contribution from decays of heavy hadron resonances and also that due to fragmentation of heavy quarks that are left in the system after coalescence. The pT dependence of the heavy baryon/meson ratios is found to be sensitive to the heavy quark mass. In particular we found that the Λc/D ratio is much flatter than the one for light baryon/meson ratios like p/π and Λ/K. Introduction Ultra-relativistic heavy ion collisions (uHIC) can be used to probe the properties of the quark-gluon plasma (QGP). In studies of the QGP created in HIC, it is necessary to take into account that the partonic behavior is not directly projected onto the observables measured in uHIC. Thus the choice of the model for the hadronization process is a crucial point for the comparison with experimental data. The study of heavy hadrons like D mesons or Λc baryons and the systematic study of the baryon over meson ratio for different species from light to heavy flavor are important in order to understand the hadronization mechanism. For light and strange hadrons an enhancement of the baryon over meson ratio compared to the one for p-p collisions has been shown. Recent experimental results from the STAR collaboration have shown that a similar enhancement of the baryon/meson ratio is expected in the heavy flavor sector [1]. The coalescence hadronization process can explain both of these features, which appear at intermediate pT. Hadronization via coalescence leads to a modification of the relative abundances of the different heavy hadron species produced. In particular this can manifest in a baryon-to-meson enhancement for charmed hadrons. The Λc/D0 enhancement due to coalescence was first suggested in [2], based on a di-quark or three-quark coalescence mechanism with fully thermalized charm quarks, and the predicted Λc/D0 ratio is found to be comparable to the recent experimental data. Other recent calculations on the Λc/D0 can be found in [3,4]. Hadronization via Coalescence and Fragmentation There are two hadronization mechanisms to produce heavy flavor hadrons: one is fragmentation, where high-momentum quarks fragment directly into high-momentum hadrons, and the other is coalescence, where heavy quarks hadronize by recombination with light quarks. With increasing pT the probability of coalescence decreases and the standard fragmentation takes over. Fragmentation The hadron momentum spectrum via fragmentation of the minijet parton spectrum is given by: dN_had/(d²pT dy) = Σ_jet ∫ dz (1/z²) D_had/jet(z, Q²) dN_jet/(d²pT,jet dy), where z = p_had/p_jet is the fraction of the minijet momentum carried by the hadron and Q² = (p_had/2z)² is the momentum scale for the fragmentation process. We employ the Peterson fragmentation function, D(z) ∝ 1/[z (1 − 1/z − εc/(1 − z))²]. The εc is a free parameter, fixed to εc = 0.06 by experimental data on D meson production in p + p collisions [6].
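As a quick check of the hard-fragmentation behavior implied by εc = 0.06, the sketch below evaluates and numerically normalizes the Peterson function; the unit-integral normalization is our choice for illustration.

```python
import numpy as np

def peterson(z, eps_c=0.06):
    """Peterson fragmentation function, D(z) ~ 1 / (z * (1 - 1/z - eps_c/(1-z))**2),
    with eps_c = 0.06 as fixed by D-meson data in p+p collisions."""
    z = np.asarray(z, dtype=float)
    return 1.0 / (z * (1.0 - 1.0 / z - eps_c / (1.0 - z)) ** 2)

z = np.linspace(1e-3, 1.0 - 1e-3, 2000)
d = peterson(z)
d /= np.trapz(d, z)                               # normalize to unit integral
print(f"D(z) peaks at z ~ {z[d.argmax()]:.2f}")   # hard fragmentation: peak near z ~ 0.8
```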
In the absence of p + p data for the Λc, the value has been fixed to be a factor of two larger than the D meson one [4]. For light quarks we employ a distribution function that at high pT > p0 ∼ 2−3 GeV is evaluated at next-to-leading order (NLO) in a pQCD scheme, including the modification due to the jet quenching mechanism. The charm distribution functions for both RHIC and LHC have been taken in accordance with the Fixed Order + Next-to-Leading Log (FONLL) calculation, as given in Refs. [7,8]. Coalescence The spectrum of hadrons formed by coalescence of quarks, as developed in [9,10], can be written as: dN_H/(d²PT dy) = g_H ∫ ∏_{i=1}^{n} (p_i · dσ_i) (d³p_i / ((2π)³ E_i)) f_{q_i}(x_i, p_i) f_H(x_1...x_n, p_1...p_n) δ⁽²⁾(PT − Σ_{i=1}^{n} pT,i), (1) where dσ_i denotes an element of a space-like hypersurface, g_H is the statistical factor to form a colorless hadron from quarks and antiquarks with spin 1/2, and f_{q_i} are the quark (anti-quark) phase-space distribution functions for the i-th quark (anti-quark). For n = 2, Eq. (1) describes meson formation, while for n = 3 it describes baryon formation. For D mesons the statistical factor is g_D = 1/36, while for baryons, i.e. the Λc, the statistical factor is g_Λ = 1/108. Finally, f_H(x_1...x_n, p_1...p_n) is the Wigner function, and following Refs. [2,10] we assume for f_H a Gaussian shape in the space and momentum relative coordinates, with A_H a normalization constant for the Wigner function. The σ_i are the covariant widths; they are related to the size of the hadron and can be evaluated from the charge radius of the hadrons according to the quark model [11,12]. For the D meson the widths are fixed by the mean square charge radius of the D+, which is given by ⟨r²_ch⟩ = 0.184 fm² and corresponds to σ = 0.283 GeV, while for the Λc+ we have ⟨r²_ch⟩ = 0.15 fm² and the corresponding widths are σ1 = 0.18 GeV and σ2 = 0.342 GeV. The overall normalization factor A_H is fixed by requiring that the total recombination probability P_coal = 1 for a p = 0 heavy quark that undergoes coalescence into all possible heavy flavor hadron channels. From P_coal we can assign a probability of fragmentation as P_frag = 1 − P_coal, and the charm distribution function undergoing fragmentation is evaluated by convoluting the momenta of the heavy quarks which have not undergone coalescence. For the bulk we assume a thermalized system of gluons and u, d, s quarks and anti-quarks at τ = 4.5 fm/c (τ = 7.8 fm/c) at RHIC (LHC) with a temperature of T_C = 165 MeV. The longitudinal momentum distribution is assumed to be boost-invariant in y ∈ (−0.5, +0.5). For the quark-gluon plasma collective flow, we assume a radial flow profile β_T(r_T) = β_max r_T/R, where R is the transverse radius of the fireball. For partons at low transverse momentum, pT < 2 GeV, we consider a thermal distribution, while for pT > 2 GeV we consider the minijets that have undergone the jet quenching mechanism. The parametrization and the fireball parameters are the same used in [13]. For heavy quarks we use the transverse momentum distribution obtained by solving the relativistic Boltzmann equation, which gives a good description of the R_AA and v_2 of D mesons [6].
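To illustrate how the Gaussian Wigner function weights a quark pair, the sketch below evaluates a single relative coordinate pair. The exp(−(x_r/σ_x)² − (p_r/σ_p)²) form with σ_p = σ and σ_x = ħc/σ is one common convention in coalescence models and is assumed here, as is the omission of the overall factor A_H.

```python
import numpy as np

HBARC = 0.1973  # GeV fm, converts between fm and GeV^-1

def wigner_weight(x_rel_fm, p_rel_gev, sigma_gev):
    """Gaussian Wigner weight for one relative coordinate pair,
    f_H ~ exp(-(x_r/sigma_x)^2 - (p_r/sigma_p)^2), with sigma_p = sigma
    and sigma_x = hbar*c / sigma (assumed convention; A_H omitted)."""
    sigma_x_fm = HBARC / sigma_gev
    return np.exp(-(x_rel_fm / sigma_x_fm) ** 2 - (p_rel_gev / sigma_gev) ** 2)

# A charm-light pair close in phase space coalesces with a large weight
# (sigma = 0.283 GeV is the D-meson width quoted above):
print(wigner_weight(x_rel_fm=0.3, p_rel_gev=0.2, sigma_gev=0.283))
```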
Charmed hadron transverse momentum spectra and ratios

In this section we show results for the transverse momentum spectra of D0 and Ds mesons and of the Λc. In all the spectra, the contributions coming from resonance decays are included with their decay channels and the pertinent branching ratios as given by the Particle Data Group [14]. For example, for the D0 meson a dominant contribution comes from D*+ and D*0, with D*+ → D0 π+ (68%), D*0 → D0 π0 (62%) and D*0 → D0 γ (38%). The inclusion of resonances improves the description at low pT but does not significantly affect the intermediate pT region. As shown in Fig. 1a and b, the contributions of the two hadronization mechanisms (black solid and red dashed lines) are similar for pT ≲ 2.5 GeV, while at higher pT fragmentation becomes the dominant mechanism. The sum of coalescence plus fragmentation, shown by the orange lines, gives a good description of the experimental data. For the D+s spectra we observe that at low pT (Fig. 1b) coalescence is the dominant mechanism, while fragmentation plays a role only at pT ≳ 4 GeV. This is related to the fact that the fragmentation fractions for D+s are small, less than about 8% of the total heavy hadrons produced, as obtained in [17]. Again, the comparison with the experimental data shows that the description given by the two hadronization mechanisms together is quite good. Fig. 1c shows the spectrum of the D0 meson for (0−20%) centrality at LHC energies. The total spectrum (coalescence plus fragmentation), shown by the orange line, is again in good agreement with the experimental data. We notice that at LHC energies fragmentation is the dominant hadronization mechanism for D0 production. This is because at high energies coalescence is less significant: the effect of coalescence depends on the slope of the charm quark momentum distribution, and for a harder charm quark distribution, as at the LHC, the gain in momentum translates into a smaller change of the slope compared to RHIC energies [6]. Fig. 1d shows the comparison between RHIC and LHC energies of the Λ+c/D0 ratio as a function of pT. Coalescence by itself predicts a rise and fall of the baryon/meson ratio. The inclusion of fragmentation reduces the ratio, and in the peak region we obtain quite good agreement with the only available experimental data. Notice that our calculation yields a baryon/meson ratio similar to the one predicted in [2]. Compared with measured light baryon/meson ratios such as p/π− and Λ/K0S (see [18-20]), the obtained Λ+c/D0 ratio behaves differently: it is much flatter, and for pT → 0 hadronization by coalescence plus fragmentation predicts Λ+c/D0 ≈ 0.8, much larger than the measured light baryon/meson ratios, for which p/π− → 0 as pT → 0 [13]. This behavior comes from the large mass of the heavy quarks. Finally, we observe that at LHC energies coalescence plus fragmentation predicts a smaller Λ+c/D0. This is because at the LHC fragmentation becomes the dominant hadronization mechanism, with the effect of reducing the baryon/meson ratio.
Conclusions

We have studied the transverse momentum spectra of charmed hadrons in heavy-ion collisions from RHIC to LHC energies. The results obtained for the spectra are in good agreement with the recent experimental data from RHIC to LHC in central collisions. We have also studied the pT dependence of the Λc/D0 ratio at different energies. The comparison with the light baryon/meson ratios shows that the Λc/D0 ratio has a weaker dependence on the transverse momentum, due to the massive charm quarks inside the heavy hadrons. We find Λc/D0 ≈ 1.5 at the peak at pT ≈ 2.5 GeV at RHIC energies, and Λc/D0 ≈ 0.8 in the pT → 0 region. Similar results have been predicted in [2]. At LHC energies, because fragmentation starts to be the dominant hadronization mechanism for D meson production, this ratio is predicted to be smaller.

Figure 1. Left panel: pT spectra of D0 and Ds for Au−Au collisions at √s = 200 GeV and (0−10%) centrality. The green dashed line refers to the charm spectrum. The black solid and red dashed lines refer to the D0 spectrum via coalescence only and fragmentation only, respectively, while the orange solid line refers to fragmentation plus coalescence. Data from [15]. Middle panel: pT spectrum of D0 for Pb−Pb collisions at √s = 2.76 TeV and (0−20%) centrality. Data from [16]. Right panel: Λ+c/D0 ratio as a function of pT. The solid line refers to Au−Au collisions at √s = 200 GeV, the dashed line to Pb−Pb collisions at √s = 2.76 TeV. Data from [1].
2,755.2
2018-02-01T00:00:00.000
[ "Physics" ]
X-ray Spectroscopic Evidence of Charge Exchange Emission in the Disk of M51

In the disks of spiral galaxies, diffuse soft X-ray emission is known to be strongly correlated with star-forming regions. However, this emission is not simply from a thermal-equilibrium plasma, and its origin remains greatly unclear. In this work, we present an X-ray spectroscopic analysis of the emission from the northern hot spot, a region with enhanced star formation off the nucleus of M51. Based on the high spectral resolution data from XMM-Newton/RGS observations, we unambiguously detect a high $G$ ratio ($3.2^{+6.9}_{-1.5}$) of the O VII He$\alpha$ triplet. This high $G$ ratio is also spatially confirmed by oxygen emission-line maps from the same data. A physical model consisting of a thermal plasma and its charge exchange (CX) with neutral cool gas gives a good explanation for the $G$ ratio and the entire RGS spectra. This model also gives a satisfactory characterization of the complementary Chandra/ACIS-S data, which enable direct imaging of the diffuse emission, tracing the hot plasma across the galaxy. The hot plasma has a similar characteristic temperature of ~0.34 keV and an approximately solar metallicity. The CX contributes ~50% of the diffuse emission in the 0.4-1.8 keV band, suggesting an effective hot/cool gas interface area about five times the geometric area of the M51 disk. Therefore, the CX appears to play a major role in the soft X-ray production and may be used as a powerful tool to probe the interface astrophysics important for studying galactic ecosystems.

Introduction

Diffuse hot plasma at temperatures ≳ 10^6 K in galactic disks is believed to be chiefly due to stellar feedback in the form of fast winds and supernova explosions of massive stars (e.g., McKee & Ostriker 1977; Li & Wang 2013). Such plasma, if sufficiently energetic, can reshape the interstellar medium (ISM) and transport the feedback energy and chemically enriched matter into galactic halos. However, how the plasma may be characterized is still under debate, largely because its interplay with the cool ISM remains uncertain. Currently, the diagnostics of the diffuse hot plasma are basically based on X-ray spectral observations with CCD energy resolution (∼100 eV), which is not refined enough to measure many individual emission lines in the soft X-ray band. As a result, spectral analysis of diffuse X-ray emission is typically based on ad hoc models with varied complexities, assuming an optically thin thermal plasma under collisional ionization equilibrium (CIE); one often needs multiple such components to obtain acceptable fits. But the reality is likely more complicated, since one would also expect hot outflows from a disk into a galactic halo (Hodges-Kluck et al. 2018) and their adiabatic cooling (Breitschwerdt & Schmutzler 1999). To better understand the origin of the diffuse X-ray emission, as well as the physical and chemical properties of the hot plasma in galaxies, high-resolution spectroscopic data are needed. The Reflection Grating Spectrometer (RGS) on board XMM-Newton has much higher spectral resolution (∼2 eV), ∼50 times better than the CCD energy resolution, and can be used to better constrain hot plasma properties or even reveal radiative mechanisms other than CIE thermal emission. For example, RGS observations have been used to show that charge exchange (CX) at interfaces between hot plasma and neutral gas may contribute a significant portion of the diffuse X-ray emission in the starburst galaxy M82 (Zhang et al.
2014) and that this situation is likely prevalent in nearby galaxies (Wang & Liu 2012). One indicator of the existence of CX is a high G ratio of the O VII Heα triplet lines [G = (f + i)/r ≳ 1.4, where the resonance (r) line is at 21.602 Å (574 eV), the intercombination (i) line is at 21.804 Å (569 eV), and the forbidden (f) line is at 22.098 Å (561 eV)]. In these studies, the RGS spectra tend to be dominated by strong emission from galactic central regions, which complicates the interpretation of the observed high G ratios; these could, in principle, arise from past active galactic nuclei (AGN; Zhang et al. 2019) or jet interactions with circumnuclear gas (Yang et al. 2020), for example. Here we present an investigation of the diffuse X-ray emission from a well-isolated region in the galactic disk of M51 (also known as M51a, NGC 5194, or the Whirlpool galaxy), based on XMM-Newton/RGS and Chandra/ACIS observations. M51 is a nearly face-on galaxy at a distance of 8.58 Mpc (McQuinn et al. 2016) and has a mean star formation rate surface density of ∼0.015 M_sun yr^−1 kpc^−2 (Calzetti et al. 2005). The galaxy is normally classified as "quiescently star-forming." Nevertheless, there is a particularly striking recent star-forming region in the northeastern part of the M51 disk, as evidenced by the presence of multiple massive young stellar clusters (Kaleida & Scowen 2010), as well as enhanced far-UV and Hα emission (Thilker et al. 2000; Owen & Warwick 2009). We call this region the northern hot spot (NHS). The intense star formation there might be triggered by the collapse of molecular clouds due to the tidal compression of the companion galaxy NGC 5195 (Dobbs et al. 2010). The NHS has an angular size of ∼2′ (∼5 kpc) and is ∼2′ away from the galactic nucleus or the X-ray-bright companion galaxy NGC 5195. This relative compactness and isolation of the NHS make it possible to study its X-ray emission effectively with the slitless spectroscopic capability of the RGS. We use archival XMM-Newton/RGS observations covering the NHS, complemented by high spatial resolution X-ray observations from Chandra/ACIS, to probe the origin and properties of the X-ray emission observed over much of the galactic disk, as well as the NHS. The paper is organized as follows. We describe the XMM-Newton and Chandra X-ray observations and data reduction, as well as the use of a complementary H I image, in Section 2. The X-ray data analysis and results are presented in Section 3. We discuss the implications of our results in Section 4 and summarize the work and our conclusions in Section 5.

XMM-Newton/RGS Data

Our study relies chiefly on RGS observations. Table 1 lists the five XMM-Newton/RGS observations that cover the NHS of the M51 disk. (In Table 1, t_eff denotes the effective exposures after filtering out time intervals with strong background flares, while P.A. stands for the position angles of the RGS observations.) Our reduction process follows the standard procedure provided by the Science Analysis System (SAS; version 19). This includes the removal of time intervals with strong background flares and the production of the RGS spectra and auxiliary files using the pipeline command "rgsproc." The total effective exposure is 354 ks. However, our spectral analysis of the NHS uses only the first four observations. Observation ID (ObsID) 0852030101 was used only in the emission-line mapping of O VIII and O VII f, because the observation had a bad column right at the key O VII r line.
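For reference, a minimal sketch of the G-ratio diagnostic defined above: it converts the triplet wavelengths to energies and computes G = (f + i)/r from a set of line fluxes. The example flux values are placeholders of ours, not measurements from this paper.

```python
# G-ratio diagnostic for the O VII He-alpha triplet, as defined in the text:
# G = (f + i) / r, with r, i, f at 21.602, 21.804, and 22.098 Angstrom.
HC_KEV_ANGSTROM = 12.398  # h*c in keV * Angstrom

lines = {"r": 21.602, "i": 21.804, "f": 22.098}
for name, lam in lines.items():
    print(f"O VII {name}: {lam} A -> {1000 * HC_KEV_ANGSTROM / lam:.0f} eV")
# -> 574, 569, 561 eV, matching the values quoted above.

def g_ratio(flux_r, flux_i, flux_f):
    """G = (f + i)/r; values >~ 1.4 are inconsistent with a pure CIE plasma."""
    return (flux_f + flux_i) / flux_r

# Placeholder photon fluxes (arbitrary units) purely to illustrate the call:
print(f"G = {g_ratio(flux_r=1.0, flux_i=0.6, flux_f=2.6):.1f}")
```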
We use the spectral extraction regions illustrated in Figure 1a to minimize the contamination from both the nucleus of M51 and the companion galaxy NGC 5195, and set the reference point for "rgsproc" to R.A. = 13h30m00.890s, decl. = +47°13′44″.0. The background spectrum is derived from blank-sky spectral templates, according to the background level indicator given by the count rate of the off-axis region on CCD 9. However, the off-axis region contains the two AGNs of M51 and NGC 5195, and its count rate is even larger than that of the on-axis NHS region. The model background could therefore be overpredicted, and we empirically scale down its flux by 2/3 so that the continuum spectrum between 28 and 29 Å remains above zero. This scaling of the background has little effect on our measurements of emission lines. It should be noted that CCD 7 (10.6-13.8 Å) in RGS1 and CCD 4 (20.0-24.1 Å) in RGS2 were nonoperational during the observations. Furthermore, CCD 6 (13.8-17.1 Å) in RGS1 contains bad columns and the chip gap right around the prominent Fe XVII lines, and is therefore also excluded. We combine the RGS1 and the RGS2 spectra separately using the "rgscombine" script. The two combined spectra are grouped with a bin size of 0.05 Å, which is close to the RGS spectral resolution of 0.07 Å. A few bins that encounter severe bad columns are ignored, such as the bin at 21.80 Å. The RGS data are also used to reconstruct the oxygen emission-line maps that cover the northern part of M51. Each RGS observation contains 1D spatial information in the cross-dispersion direction with a total width of ∼5′, where the FWHM of the line spread function is about 22″ around 20 Å. In the dispersion direction, the line broadening profile is determined mainly by the spatial extension of the X-ray line emission, following the relation Δλ = 0.138 Å Δθ (with Δθ in arcmin). Therefore, if the Doppler effect can be neglected, as is the case here, the spatial information in the dispersion direction can be deduced from the observed line profile, reaching a spatial resolution of about 30″. The feasibility of producing RGS line intensity images has been demonstrated previously (van der Heyden et al. 2003; Bauer et al. 2007). In the present work, we need to combine RGS observations with different dispersion directions to maximize the signal-to-noise ratio (S/N) in mapping the distributions of the O VIII Lyα and O VII f or r lines. These are the strongest lines in the RGS spectra and are relatively isolated from other lines. Appendix A details the procedure for constructing monochromatic intensity maps of the emission lines.
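The dispersion-to-angle mapping behind these line maps can be summarized in a few lines. The sketch below, assuming the relation Δλ = 0.138 Å per arcmin quoted above, converts a wavelength offset from a line's rest wavelength into an angular offset along the dispersion direction; the example numbers are our own.

```python
# Convert a wavelength offset in an RGS spectrum into an angular offset
# along the dispersion direction, using Delta_lambda = 0.138 A per arcmin.
DISPERSION_A_PER_ARCMIN = 0.138

def angular_offset_arcsec(lam_obs, lam_rest):
    """Angular offset (arcsec) implied by an observed wavelength shift,
    assuming the shift is spatial rather than Doppler (as argued above)."""
    return 60.0 * (lam_obs - lam_rest) / DISPERSION_A_PER_ARCMIN

# Example: emission detected 0.069 A redward of O VIII Ly-alpha (18.97 A)
# maps to ~30 arcsec, the spatial resolution quoted in the text.
print(f"{angular_offset_arcsec(19.039, 18.970):.0f} arcsec")
```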
Chandra/ACIS-S data

We use Chandra/ACIS-S data to decompose the diffuse and discrete source contributions and to determine the overall spectral shape, and hence the properties, of the plasma. We use all 10 Chandra observations taken between 2000 June and 2012 October (ObsID: 354, 1622, 3932, 13812, 13813, 13814, 13815, 13816, 15496, and 15553). The total effective exposure time of these observations is 827 ks, after removing strong background flare periods. We process the data using the standard Chandra Interactive Analysis of Observations (CIAO; version 4.8). The processing starts from the Level-1 event files through "chandra_repro." After reprojecting the individual event files to the same tangent point, we merge all of them and create the count flux map in the 0.5-1.2 keV band, as shown in Figure 1a and b. For most of the observations, the separation of the aim points is no more than 1′, which makes the point-spread function of each point source similar. We thus detect pointlike sources in the 0.3-7.0 keV band, based on the merged data, with the CIAO script wavdetect, and remove the source ellipses with major and minor axes twice the 90% encircled-energy values. We extract the X-ray spectra from the NHS region (the yellow elliptical region in Figure 1b) and the disk region without NHS (the cyan annular region with the yellow NHS region excluded), as well as the corresponding response files, from each observation, using the CIAO script specextract. We use dmextract and the blank-sky datasets to estimate the background spectral contributions. We finally merge the spectral files of the individual observations together, using combine_spectra.

H I Image

We adopt an H I map of M51 from the THINGS project (Figure 1c; Walter et al. 2008) to trace neutral gas and its potential interplay with the hot plasma. The integrated H I emission also helps us to sketch out the positions of the grand spiral arms and other major cool gas features, which are used as references for multiwavelength comparisons. In particular, a roughly vertical segment of prominent H I emission, marked as "A" in Figure 1c, appears on the western side of the NHS region. For ease of reference, Figure 1c also includes a yellow curve that encloses an apparent X-ray emission enhancement associated with NGC 5195.

Spectral Analysis and Results

We use pyXspec within Xspec v12.10 to perform the spectral analysis. The fitting procedure is mainly Markov chain Monte Carlo based, together with the Cash statistic. The quoted errors are at the 90% confidence level for the one-free-parameter case.

RGS spectra analysis and results

We start with fitting the RGS spectra of the NHS with a fiducial model, consisting of a power law and a single-temperature APEC plasma, assumed optically thin and in CIE. In addition to a Galactic foreground absorption with the column density fixed to the observed H I value of 1.8 × 10^20 cm^−2 (Table 2; Kalberla et al. 2005), we include a fitted absorption for the power-law component, representing the contribution from discrete sources in the field, which tend to be embedded in the dense ISM. We assume no X-ray absorption intrinsic to M51 for the plasma component characterizing the diffuse soft X-ray emission, which arises primarily from an H I cavity with insignificant column density (Figure 1c).

Figure 1 (panels b and c) caption: The same Chandra count intensity image of M51, but with detected discrete sources removed. The yellow elliptical region is defined as the NHS region, while the cyan annular region with the elliptical region excluded is defined as the disk region without NHS. The image is overlaid with white curves sketching out prominent neutral atomic gas features (mainly spiral arms) as seen in panel (c), where the field is seen in the H I 21 cm line emission in units of Jy beam^−1 m s^−1. While the sketch is in pale violet red in panel (c), a vertical arm segment "A" is highlighted in red. The cross symbols ("×") in all three panels mark the galactic center and the reference point for the spectral extraction.

The line broadening in the plasma emission is accounted for by the inclusion of the Xspec convolution model 'rgsxsrc', together with the 0.5-1.2 keV intensity image of the diffuse X-ray emission of the NHS.
We generate the image from the Chandra data by filling the holes left by the removal of detected sources via interpolation from their surrounding source-free areas. Figure 2a shows the best fit of the fiducial model to the RGS spectra, while the fitted parameters are listed in Table 2. Although the model represents well such moderately ionized lines as the O VIII, Fe XVII, and Ne IX lines, the Ne X Lyα lines are not fitted properly even when allowing for an abnormally large neon abundance. Also underfitted are the lines from highly ionized magnesium, indicating the need for a higher-temperature plasma. More indicative is a clear mismatch between the data and model at the O VII Heα triplet, where the O VII f line is higher than the model prediction while the r line is lower. This mismatch strongly suggests that the line emission in the spectra cannot simply arise from a CIE plasma as assumed. In fact, a two-temperature CIE model does not improve the fit to the O VII triplet at all. We calculate the G ratio of the O VII Heα triplet to diagnose the nature of the diffuse X-ray emission. To do so, we use the 'mdefine' command in Xspec to define an O VII Heα triplet model consisting of three Gaussians, representing the three lines. Their energies are fixed to the rest-frame values, and their Gaussian widths are fixed to an insignificantly small value (0.001 keV). We further fix the normalization ratio of the weak i line to the f line to 1/4.44, a rather good approximation for an optically thin thermal CIE plasma. As a result, this model has only two fitting parameters: the normalization of the triplet and the G ratio. With the continuum component set to the best-fit fiducial model, the model fits the RGS1 spectrum well in the 20.8-22.9 Å range. The fitted G ratio (3.2 +6.9 −1.5) is significantly higher than the value (≲1.4) expected for a CIE plasma. We conclude that this is strong X-ray spectroscopic evidence for a significant CX contribution to the diffuse soft X-ray emission; alternative scenarios are discussed in Appendix B and are not favored. Accordingly, we systematically include the CX contribution in the modeling of the RGS spectra. Specifically, we account for the CX contribution using the second version of the ACX model (ACX2; Smith et al. 2012). This version includes velocity-dependent reaction cross sections (Mullen et al. 2017), whereas the previous version used a simple empirical formula for the CX reaction rates. We assume an encounter velocity between hot ions and cool atoms of 280 km s^−1 (close to the sound speed of a 0.3 keV plasma). The temperature and metal abundances of the hot plasma in the ACX and APEC model components are linked. Therefore, compared to the fiducial model, the new APEC+ACX model adds only one more fitting parameter, the normalization of the ACX component. The quality of the fit to the RGS spectra is improved considerably, reducing the Cash statistic from 725 to 709 (Table 2). Figure 2b shows that the O VII Heα triplet is now well fitted. So is the Ne X Lyα. The CX contributes about half of the thermal emission flux in the 7-30 Å range.
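The two-parameter triplet parametrization described above can be made concrete as follows. This is our own sketch of the arithmetic, not the actual Xspec 'mdefine' expression, with the i/f ratio fixed to 1/4.44 as stated in the text.

```python
# Decompose an O VII He-alpha triplet normalization N into r, i, f line
# normalizations given the two fitted parameters (N, G), with the
# i/f ratio fixed to 1/4.44 as described in the text.
def triplet_norms(N, G, i_over_f=1.0 / 4.44):
    r = N / (1.0 + G)                  # G = (f + i)/r  =>  r = N/(1 + G)
    fi = N - r                         # combined f + i flux
    f = fi / (1.0 + i_over_f)
    i = fi - f
    return r, i, f

# Best-fit G ratio from the text (G = 3.2), arbitrary total normalization:
r, i, f = triplet_norms(N=1.0, G=3.2)
print(f"r = {r:.3f}, i = {i:.3f}, f = {f:.3f}, check G = {(f + i) / r:.2f}")
```

With G = 3.2, roughly three quarters of the triplet flux sits in the forbidden line, which is why the f line visibly dominates the r line in the NHS spectra.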
Accounting for the CX results in considerable changes in the characterization of the thermal and chemical properties of the plasma (Table 2). The best-fit temperature of ∼0.39 keV in the APEC+ACX model is significantly higher than the 0.25 keV of the fiducial APEC-only fit. This is because CX tends to reduce the ionization level of the ions and emits softer line emission than the plasma itself. Accounting for the line emission from CX via the inclusion of the ACX model component also reduces the need for high metal abundances. The kT ∼ 0.39 keV plasma contributes little to the O VII lines, but nearly dominates the highly ionized lines from Mg XI and Fe XVII. The emissivity of Ne-like Fe XVII also increases several times with the increased temperature, causing the decrease of the fitted iron abundance. The CX contributes little to the Fe XVII lines, which require Fe^17+ ions that occupy a small ion fraction in the plasma. The abundance reduction for the other elements is due to the large CX contribution to the relevant line emission: e.g., accounting for the majority of the O VII f line, producing the high G ratio, as well as significant fractions of the O VIII Lyα to Lyδ lines. The Ne IX f line is also enhanced by the CX, although the limited spectral quality of the data does not allow for a meaningful constraint on the G ratio of the Ne IX triplet. As a result, the APEC+ACX model fit requires only solar-like abundances for the elements, except for iron (Table 2). We next check how a temperature distribution of the plasma, which should be more realistic, may affect the results. A simple extension of the single-temperature assumption is the use of the plasma model vltnd instead of APEC. This model adopts a log-normal distribution in x = ln T for the plasma emission measure and has the mean x̄ and the dispersion σ_x as two fitting parameters. Adopting vltnd, we also fit the temperature parameter in the ACX model to characterize the thermal properties of the ions undergoing CX. Figure 2c shows the best-fit vltnd+ACX model. The quality of the fit is hardly changed because the fitted σ_x is small, while the fitted mean temperature and metal abundances are also very similar to the best-fit values obtained with the APEC+ACX model. The temperature of the CX ions is a bit less than 0.25 keV, which decreases the CX contribution to one-third of the plasma emission flux. So, broadly speaking, the APEC+ACX model represents a simple, self-consistent characterization of the plasma emission.

Chandra/ACIS spectral analysis and results

Similarly, we analyze the Chandra/ACIS spectra extracted from the NHS region and the M51 disk region without NHS, using various combinations of the CIE plasma and CX models. We find that a two-temperature APEC CIE plasma gives a reasonable characterization of the overall spectral shapes. The fitted temperatures are about 0.2 and 0.5 keV, similar to the results (0.24 and 0.64 keV) from the analysis of XMM-Newton/EPIC spectral data on M51 (Owen & Warwick 2009). The quality of the fits is not as good for the log-normal temperature model, which has one fewer fitting parameter. Nevertheless, the high O VII G ratio from the RGS data rejects the scenario of a combination of thermal components with different temperatures. The simple APEC+ACX model fits the spectra well (Figure 3). The fits give similar spectral parameters for the two regions: their temperatures are both ∼0.34 keV, while the metal abundances are consistent with being solar within a factor of 2 (Table 2). The thermal emission peaks around 17 Å and contributes mainly to the Fe XVII lines, whereas the CX emission largely accounts for the double peaks at 14 Å and 22 Å, as in the APEC+ACX fit to the RGS spectra of the NHS.
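As an illustration of this kind of composite fit, the sketch below sets up an absorbed power law plus APEC plus CX component in pyXspec. The spectrum file name is a placeholder, and "acx2" is assumed to be available as a locally installed user model (the ACX2 package is not shipped with Xspec); the parameter starting values echo numbers quoted in the text.

```python
# Hedged pyXspec sketch of an absorbed powerlaw + APEC + CX fit.
# Assumptions: "nhs.pi" is a placeholder spectrum file, and the ACX2
# user model has been installed and loaded so "acx2" is recognized.
from xspec import AllData, Model, Fit

AllData("nhs.pi")                       # placeholder spectrum file
m = Model("tbabs*(powerlaw + apec + acx2)")

m.TBabs.nH = 0.0018                     # Galactic column, 1.8e20 cm^-2
m.apec.kT = 0.34                        # characteristic temperature (keV)
m.apec.Abundanc = 1.0                   # roughly solar metallicity

Fit.statMethod = "cstat"                # Cash statistic, as in the text
Fit.perform()
print(f"C-stat = {Fit.statistic:.1f} for {Fit.dof} dof")
```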
The high spatial resolution of the ACIS data, together with the consistent APEC+ACX model characterization of the spectral properties, allows us to improve the estimation of the CX contributions. In the 7-30 Å range, the flux contribution from the CX is even higher than 50% of the diffuse emission, both in the NHS region and in the disk region without NHS (Table 2). The areas of the two extraction regions of the Chandra spectra are 5.45 and 19.78 arcmin^2, respectively. Therefore, the derived surface densities of both the thermal and the CX emission in the NHS region are about twice those in the disk region without NHS. Figure 4 presents the line emission maps, while Figure 5 shows the S/N maps for the two stronger lines. The construction of these maps is detailed in Appendix A. Most features seen in the line emission maps at the spatial resolution of ∼0.5′ seem reliable, especially in places where the S/N ratio is greater than ∼3. The counting statistics of the O VII line maps are not as good as those of the O VIII map; the exposure time of the O VII lines is only about half that of the O VIII. In the following, we compare the RGS line maps with the Chandra/ACIS diffuse soft X-ray emission image (Figure 1b), focusing on a few key features.

Spatial distributions of the oxygen line emission

The O VIII map and the Chandra image look broadly similar. The M51 core region is prominent in the O VIII emission, though its intensity morphology in the map is distorted because of the edge effect of the RGS CCDs. The line intensity distributions in both the NGC 5195 and the NHS regions are consistent with those seen in the Chandra image, suggesting that the RGS-reconstructed line map represents the spatial distribution of the diffuse X-ray emission well. At the M51 core, both the O VII r and f lines are prominent, although the latter is significantly stronger than the former, consistent with the result from the RGS spectral analysis (Yang et al. 2020). In the NGC 5195 region, the r line is generally brighter than the f line; the latter appears stronger at the southern end, where an arc-like structure is observed in the diffuse soft X-ray emission and is believed to be due to photoionization by an early AGN of the galaxy (Schlegel et al. 2016). We find that the RGS spectrum of this structure does show a higher O VII G ratio than that of the entire NGC 5195 region, consistent with the AGN ionization scenario. In the NHS region, the overall O VII f emission is obviously stronger than the r line emission, which is consistent with the high G ratio obtained from the RGS spectral fitting. The region shows up as a local enhancement in the S/N maps, indicating a reliable detection of the O VIII and O VII f lines. We find that the O VIII and O VII f line fluxes estimated from the reconstructed RGS line maps, after accounting for the RGS effective areas at the corresponding wavelengths, are well consistent with those from the best-fit APEC+ACX model of the ACIS spectrum extracted from the same region.

Figure 2. RGS1 and RGS2 spectra of the NHS (color coded in blue and green, respectively) in comparison with fitted models: (a) single-temperature APEC thermal plasma (red line) for the diffuse X-ray emission; (b) APEC+ACX; (c) vltnd+ACX (Table 2). In these panels, while the red histogram represents the total model spectra, the black dashed, orange dotted, and magenta dotted-dashed curves represent the point source, thermal plasma, and CX contributions, respectively.
Although the limited counting statistics of the RGS data do not allow for a reliable 2D structure determination of the line emission in the region, the distribution of O VII f appears to be different from that of O VIII or the diffuse soft X-ray emission seen in the Chandra image. The O VII f emission appears relatively brighter on the west side of the H I segment "A", consistent with the slight offset of the O VII f line peak in the RGS spectra to the blue side of the model line (Figure 2b or c). In contrast, the O VIII emission resembles the diffuse soft X-ray emission in Figure 1b.

Discussion

The above results demonstrate the apparent success of the thermal plasma plus CX modeling of the line emission from the M51 disk and further show differential distributions of the thermal and CX contributions. In the following, we further explore how the hot plasma interacts with the surrounding cool gas. The diffuse soft X-ray enhancement observed at the NHS is clearly a result of the energetic mechanical feedback expected from massive stars. Their locations are closely aligned with the H I segment (Egusa et al. 2017), whereas the X-ray enhancement is lopsided toward the east (Figure 1b). This structure is a natural manifestation of the high-pressure diffuse hot plasma, heated by the stellar feedback, expanding preferentially toward interarm regions of relatively low ISM density. This expansion may also explain the apparent bifurcation of the grand spiral arm at the location of the NHS, as seen in the H I map (Figure 1c). The CX is expected to be more significant in places with abundant H I, hence along the segment. With the thermal plasma plus CX modeling of the diffuse soft X-ray emission, we may now estimate the physical properties of the plasma and the effective area (A) of its interface with the cool gas. The Chandra image (Figure 1a) shows many fine structures in the NHS region caused by the absorption of dust lanes. It is therefore reasonable to assume that the plasma is largely contained in a thick galactic disk with an effective half-height of 0.5 kpc. We infer from the 'norm' value of the APEC component (Table 2) a mean density of n ≈ 0.0066 cm^−3 in the NHS region and 0.0046 cm^−3 in the disk region without NHS. Similarly, we may infer the effective area from the 'norm' of the ACX component, as defined in the note to Table 2. In that definition, the volume of the interface layer V_i is approximately the product of A and the mean free path of hot ions, l = (σ n_d)^−1, where the CX cross section is typically σ ∼ 3 × 10^−15 cm^2; the 'norm' of the ACX component can then be expressed in terms of A, n_r, and l. Taking n_r ∼ n = 0.0066 or 0.0046 cm^−3, we estimate A = 1.9 × 10^45 cm^2 or 6.5 × 10^45 cm^2 for the NHS or the other portion of the galactic disk, respectively. Although the diffuse X-ray surface intensity is higher in the NHS than in the other portion of the disk, the ratio between the estimated A value and the physical area of the spectral extraction region (with the 20° inclination angle taken into account) hardly changes: 5.5 and 5.2 for the two regions, respectively. This probably indicates that the CX effectiveness does not change much across much of the galaxy.
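The quoted area ratios can be checked directly from numbers given in the text (d = 8.58 Mpc, extraction areas of 5.45 and 19.78 arcmin^2, a 20° inclination, and A = 1.9 × 10^45 and 6.5 × 10^45 cm^2); the short sketch below reproduces the 5.5 and 5.2 values. Only the unit-conversion constants are external to the text.

```python
import math

MPC_CM = 3.086e24            # cm per Mpc
ARCMIN_RAD = math.pi / (180.0 * 60.0)

d = 8.58 * MPC_CM            # distance to M51 (cm)
incl = math.radians(20.0)    # disk inclination

for label, area_arcmin2, A_eff in (("NHS", 5.45, 1.9e45),
                                   ("disk w/o NHS", 19.78, 6.5e45)):
    # Projected physical area of the extraction region, then deproject:
    area_proj = area_arcmin2 * ARCMIN_RAD**2 * d**2
    area_geom = area_proj / math.cos(incl)
    print(f"{label}: A_eff/A_geom = {A_eff / area_geom:.1f}")
# -> 5.5 and 5.2, matching the ratios quoted in the text.
```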
The presence of the CX may indicate a significant effect on the cooling of the hot plasma. The radiative cooling time of the tenuous plasma in the disk is on the order of 10^9 yr. However, under the effective interplay between the hot plasma and the cold ISM, as revealed by the large CX interface area, the plasma in the disk plane may be well mixed with the cold gas and cool down much more quickly. The CX emission itself draws on the ionization energy of the ions, and it carries away an amount of energy similar to that of the thermal emission, mainly in the UV and X-ray bands. A prominent coolant line of the CX is the He II Lyα line (303.8 Å) in the UV band, whose photon flux could be 20 times stronger than the total photon output in the RGS band.

Summary

In this work, we have investigated the diffuse X-ray emission from the galactic disk of the star-forming galaxy M51. In particular, we have conducted a spectroscopic analysis of the emission from the NHS, an enhanced star-formation region off the galactic nucleus, based on the high spectral resolution data from XMM-Newton/RGS. The same data have also allowed us to reconstruct 2D oxygen line maps, which cover much of the galactic disk. Furthermore, we have used all the available complementary Chandra/ACIS-S data to perform imaging and spectral analyses of the emission across the entire galactic disk. Our main results and conclusions are as follows:

1. The RGS spectrum of the NHS shows a high G ratio of the O VII triplet (∼3.2). The O VII f emission appears significantly stronger than the r, which is also confirmed by the RGS O VII emission-line mapping. This high G ratio is inconsistent with the line emission purely originating from a CIE plasma, even when various possible temperature distributions are considered.

2. The oxygen line maps from the RGS data share similarities with the diffuse soft X-ray image obtained from the ACIS-S data. The O VII f emission in the companion galaxy NGC 5195 field is only prominent around a southern arc, suggesting the presence of an AGN photoionization remnant. In the NHS region, the O VIII emission appears relatively bright to the east of the H I arm segment "A", where the star formation is intense. In contrast, the O VII f line is more luminous on the west side of the segment, where the H I gas is more abundant.

3. The CX naturally explains both the high O VII G ratio and the differential spatial distributions of the O VIII and O VII f line intensities in the NHS region, because the CX tends to occur in and near star-forming regions where both hot and cool gas are abundant. In this scenario, the O VII f line preferentially traces the CX distribution, while the O VIII emission is contributed by both the thermal plasma and the CX. The thermal emission from the hot plasma is expected to be more extended, as is the case in the NHS region. No other simple mechanism can simultaneously explain all the spectral and spatial characteristics observed in the region.

4. The spectral modeling of the ACIS data with the inclusion of the CX contribution further enables us to estimate the properties of the hot plasma in the M51 disk. The spectra of the plasma can be well characterized by a single APEC component with a temperature of ∼0.34 keV and solar-like metal abundances, which seem to be only weakly dependent on the soft X-ray surface intensity.

5. The CX contributes about half of the diffuse X-ray emission in the 7-30 Å band, which again seems to depend only weakly on the surface intensity.
We estimate the effective interface area to be about five times the geometric area of the galactic disk, suggesting that the CX may represent a potentially important cooling mechanism for diffuse hot plasma in star-forming galaxies.

S.N.Z. acknowledges the support from the NSFC grant 11573070 and the China Scholarship Council. This work has made use of the data from XMM-Newton and Chandra, and the FITS image from "The H I Nearby Galaxy Survey." We thank the anonymous referee for the constructive comments and suggestions.

While this S/N map can simply be obtained from the ratio (line count map)/√(total count map), we rebin the count maps to a bin size of 25″ × 25″, comparable to the RGS spatial resolution, to increase the counting statistics sufficiently (Figure 5).

Appendix B. Consideration of alternative explanations for the large G ratio observed in the RGS spectra

In the main text, we regard the large G ratio of the O VII Heα triplet as evidence for the CX contribution to the RGS spectra of the NHS. Here we consider various potential alternative explanations, which in the end are not favored. First, some rare discrete sources (e.g., supersoft ultraluminous sources) can show strong emission lines, which could contaminate the calculation of the G ratio; but no such source is found inside the RGS extraction region (Terashima & Wilson 2004). Second, we investigate other competing scenarios that could produce a high O VII G ratio, including the photoionization and nonequilibrium ionization (NEI) processes, which can increase the f line emission, and the resonance scattering (RS) process, which has a chance to reduce the r line emission. For the photoionization scenario, the production of a high O VII G ratio is possible through the recombination process, but it requires a powerful ionizing source such as a current or past AGN to photoionize a plasma to form O^7+ or O^8+ ions. However, the NHS is too far away (>5 kpc) from the nuclear center of either M51 or NGC 5195. Even a bright AGN (e.g., 10^46 erg s^−1) could not photoionize the plasma in the NHS to such a highly ionized state, especially after the strong absorption by the arms in the disk. In situ, some ultraluminous X-ray sources can also ionize surrounding material and produce diffuse nebular emission (e.g., Simmonds et al. 2021). However, in the NHS region only two sources are more luminous than 10^39 erg s^−1 (Terashima & Wilson 2004), and their contribution to the O VII f line is negligible compared to the value of 1.25 × 10^−5 photon cm^−2 s^−1 estimated for the NHS region from the RGS map. The NEI scenario includes "overionized" and "underionized" cases. If the hot plasma has cooled substantially due to quick adiabatic expansion to an electron temperature lower than 10^6 K, while the ions are still in the "overionized" state, the recombining spectrum could show a high O VII G ratio. In an adiabatic model of the winds from spiral galaxies, cooling becomes noticeable at distances >3 kpc from the disk (Breitschwerdt & Schmutzler 1999), though it takes tens of millions of years for the hot plasma to arrive there. However, the density of the hot plasma in the disk is as low as n = 0.005 cm^−3 (see Section 4), which may decrease to one-tenth of this value after the expansion. Combined with the low emissivity of the recombining process, a large volume of plasma would be required to produce the observed O VII f line emission, for example with a height of a few hundred kiloparsecs above the disk.
In this case, the line emission should be very diffuse over the disk, but in our map the O VII f enhancement is still mainly confined to the NHS region. In the case of the "underionized" nonequilibrium, lithium-like oxygen ions may produce the O VII f line emission. In some special cases, such as shock-heated gas in supernova remnants, where the hydrogen has already been fully ionized while the ionization of heavier elements has reached only intermediate levels, a high O VII G ratio may exist. However, the lithium-like oxygen ions turn into helium-like ions on a short timescale, and thus have a low probability of producing the observed flux of the O VII f line. In addition, under this scenario, the temperature fitted to the metal lines of the moderately ionized ions should be lower than 0.1 keV; but there are no hints of this, even according to the log-normal model. Resonance scattering is another possible way to generate a high O VII G ratio, since O VII r line photons may be scattered into a more diffuse distribution extending outside the spectral extraction region. However, the r line map, which covers a large enough area, is generally weak in the NHS region compared to the f line map. Besides, this scenario requires a large column density of O^6+ ions over the disk, which is not achievable with the current best-fit model of the plasma. We conclude that these competing scenarios are unlikely to be the reasons for the high O VII G ratio in the NHS. The CX interpretation is generally preferred.
8,681.4
2022-03-14T00:00:00.000
[ "Physics" ]
Inorganic speciation of dissolved elements in seawater: the influence of pH on concentration ratios

Assessments of inorganic elemental speciation in seawater span the past four decades. Experimentation, compilation and critical review of equilibrium data over the past forty years have, in particular, considerably improved our understanding of cation hydrolysis and the complexation of cations by carbonate ions in solution. Through experimental investigation and critical evaluation it is now known that more than forty elements have seawater speciation schemes that are strongly influenced by pH. In the present work, the speciation of the elements in seawater is summarized in a manner that highlights the significance of pH variations. For elements that have pH-dependent species concentration ratios, this work summarizes equilibrium data (S = 35, t = 25 °C) that can be used to assess regions of dominance and relative species concentrations. Concentration ratios of complex species are expressed in the form log([A]/[B]) = pH − C, where brackets denote species concentrations in solution, A and B are species important at higher (A) and lower (B) solution pH, and C is a constant dependent on salinity, temperature and pressure. In the case of equilibria involving complex oxyanions (MOx(OH)y) or hydroxy complexes (M(OH)n), C is written as pKn = −log Kn or pKn* = −log Kn*, respectively, where Kn and Kn* are equilibrium constants. For equilibria involving carbonate complexation, the constant C is written as pQ = −log(K2′Kn[HCO3−]), where K2′ is the HCO3− dissociation constant, Kn is a cation complexation constant, and [HCO3−] is approximated as 1.9 × 10−3 molar. Equilibrium data expressed in this manner clearly show dominant species transitions, ranges of dominance, and relative concentrations at any pH.

Introduction

Solution speciation exerts important controls on chemical behavior. Speciation is known to influence solubility, membrane transport and bioavailability, adsorptive phenomena and oceanic residence times, volatility, oxidation/reduction behavior, and even physical properties of solutions such as sound attenuation. In recognition of such influences, substantial efforts have been made to characterize the chemical speciation of elements in seawater. While assessments of organic speciation have dominantly been obtained using modern voltammetric procedures and, as such, have a relatively short history, assessments of inorganic speciation typically involve a wide variety of analytical procedures that have been employed over many decades. Assessments of the inorganic speciation of seawater began with attempts [1] to determine dominant chemical forms in seawater based on available thermodynamic data. Early compilations of Principal Species [2] dominantly involved (a) simple hydrated cations and anions (e.g. Na+, Ca2+, Cl−, F−), (b) ion pairs with sulfate (e.g. MgSO4⁰), and (c) chloride complexes (e.g. HgCl4²−). While it was noted [1,2] that hydroxide complexes were important for all ions with oxidation numbers greater than two, hydroxide complexes were notably absent from Principal Species tabulations until the following decade. The thermodynamic data compilations of Sillén and Martell [3,4] catalyzed rapid advances in equilibrium models of seawater speciation. These works were followed by additional compilations [5-7] that were critically important to modern seawater speciation assessments.
In view of these developments, and additional extensive experimental analyses appropriate to seawater, Principal Species assessments ten to fifteen years after the pioneering work of Sillén demonstrated a much improved awareness of the importance of hydrolysis in elemental speciation [8-10]. An additional major speciation assessment [11] provided a greatly improved, comprehensive view of inorganic complexation in seawater. Based on the analogous characteristics of metal complexation by carbonate and oxalate, Turner et al. [11] concluded that rare earth element complexation in seawater is dominated by carbonate. Subsequently, as the result of approximately twenty years of progress in seawater speciation, the Principal Species assessment of Bruland [12] listed seventeen elements with carbonate-dominated Principal Species.

Speciation calculations

Based on currently available data, Principal Species for a substantial portion of the periodic table (through atomic number 103) are thought to be controlled or influenced by pH. The main objective of the present work is a review of Principal Inorganic Species for the elements in seawater. The principal focus of this work is an assessment of the influence of pH on inorganic speciation. The Principal Species assessment in this work differs from previous presentation formats in its objective of providing a simple quantitative means of assessing Principal Species variations with changes in pH. Stepwise equilibrium constants provide a simple means of assessing species concentration ratios as a function of pH. In the case of equilibria involving simple protonation of complex anions, MOx(OH)y^n−, stepwise equilibrium constants are expressed in the form

Kn = [H+][A]/[B],

where B is the protonated partner of the deprotonated species A, whereupon

log([A]/[B]) = pH − pKn,

with the entirely analogous expression log([M(OH)n]/[M(OH)n−1]) = pH − pKn* for stepwise hydrolysis. In the case of equilibria involving carbonate, due to the near constancy of HCO3− concentrations in seawater, equilibria can be conveniently written in the following form (charges omitted for brevity):

M(CO3)n−1 + HCO3− ⇌ M(CO3)n + H+

Using the dissociation constant of HCO3− in seawater (K2′), and Kn values appropriate to the various carbonate complexation equilibria in seawater, the relative concentrations of M(CO3)n and M(CO3)n−1 can be written as

log([M(CO3)n]/[M(CO3)n−1]) = −pQn + pH    (6)

where log Qn = log(Kn K2′ [HCO3−]) = −pQn, and [HCO3−] is assumed to be well approximated as 1.9 × 10−3 M (i.e. log[HCO3−] = −2.72). Based on equilibrium data compilations including Smith and Martell [5,6], Martell and Smith [7], Baes and Mesmer [13], Turner et al. [11], Byrne et al. [14], and Liu and Byrne [15], Table 1 provides a compilation of pKn, pKn* and pQn data, and equilibrium speciation schemes appropriate to seawater (S = 35) at 25 °C. The first two columns of Table 1 provide each element's atomic number and identity. The third column provides either (a) each element's dominant forms and speciation, or (b) the chemical species whose relative concentrations are to be evaluated using the data in column 4. As an example of the use of Table 1, the entry for Be indicates that the concentrations of Be2+ and BeOH+ are equal in seawater (25 °C) at pH 5.69, while the concentrations of Be(OH)+ and Be(OH)2⁰ are equal at the pH given by the second stepwise constant. Table 1 indicates that the elements in group 1 (H, Li, Na, K, Cs, Rb) exist prominently as free hydrated cations. About 1% or less of each metal is ion paired with sulfate ([MSO4−]/[M+] ≈ 0.01). Hydrogen ions are an exception to this generalization: the HSO4−/H+ concentration ratio in seawater is approximately 0.3.
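The use of these expressions is easily mechanized. The sketch below evaluates log concentration ratios over the seawater pH range using two constants quoted in this work (pK1* = 5.69 for Be2+/BeOH+ and [HCO3−] ≈ 1.9 × 10−3 M); the function names and the illustrative log Kn and log K2′ inputs in the last example are ours.

```python
import math

# log([A]/[B]) = pH - pK for a stepwise acid-base or hydrolysis pair,
# following the concentration-ratio formalism described above.
def log_ratio(pH, pK):
    return pH - pK

# pQ_n = -log(K_n * K2' * [HCO3-]) for carbonate complexation, with
# [HCO3-] ~ 1.9e-3 M as assumed in the text; K_n and K2' are inputs.
def pQ(log_Kn, log_K2prime, hco3=1.9e-3):
    return -(log_Kn + log_K2prime + math.log10(hco3))

# Example from the text: Be2+/BeOH+ with pK1* = 5.69. At the lowest
# normal seawater pH (7.4) the ratio already exceeds fifty.
for pH in (5.69, 7.4, 8.0, 8.35):
    print(f"pH {pH}: log([BeOH+]/[Be2+]) = {log_ratio(pH, 5.69):+.2f}")

# Carbonate example with purely illustrative inputs (log K_n = 5.0,
# log K2' = -9.0, roughly appropriate to seawater at 25 C):
pQn = pQ(log_Kn=5.0, log_K2prime=-9.0)
print(f"pQ_n = {pQn:.2f}; log ratio at pH 8.1 = {log_ratio(8.1, pQn):+.2f}")
```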
Discussion

Group 2 elements (Be, Mg, Ca, Sr, Ba) are more strongly ion paired with SO4²− than most of the group 1 ions. Mg2+ is approximately 10% ion paired with SO4²−, and the extent of SO4²− ion pairing increases somewhat for the heavier members of the group. Be2+ is the only member of group 2 with an ionic radius sufficiently small to induce extensive hydrolysis. The pK* values listed for Be in Table 1 indicate that BeOH+ is the dominant form of beryllium except at high pH. With a normal seawater pH range between approximately 7.4 and 8.35 (on the free hydrogen ion concentration scale), the Be(OH)+/Be2+ concentration ratio is never smaller than fifty.

The pK* and pQ compilations in Table 1 demonstrate that all group 3 elements (Sc, Y and La through Lu) are strongly complexed in seawater. Sc is the only group 3 metal that is strongly hydrolyzed. At pH 8.0 (i.e., 1.6 pH units above the Sc pK3* and 1.6 units below pK4*) the dominant form of Sc(III) is Sc(OH)3⁰. Ti(OH)4⁰ is the dominant form of Ti over a wide range of pH (pH > 2.5). The speciation characteristics of Zr and Hf are very similar: Zr(OH)5− is the dominant form of Zr(IV) above pH 5.99, and Hf(OH)5− is the dominant form of Hf(IV) above pH 6.19. Thus, for both Zr(IV) and Hf(IV), the uncharged species M(OH)4⁰ is a significant but minor species. At the lowest pH of seawater, the Zr(OH)4⁰/Zr(OH)5− and Hf(OH)4⁰/Hf(OH)5− concentration ratios are approximately 0.04 and 0.06, respectively.

Group 5 elements (V, Nb, Ta) are strongly hydrolyzed. With the smallest ionic radius of these three elements, V(V) is very strongly hydrolyzed. VO3(OH)²− is the dominant form of V(V) above pH 7.4. Since the pK3 value for the VO3(OH)²−/VO4³− pair is nearly two units higher than that for HPO4²−/PO4³− at zero ionic strength, only VO2(OH)2− and VO3(OH)²− appear to be relatively abundant within the normal pH range of seawater. Nb(OH)6− is the dominant form of Nb(V) above pH 7.4, and since the Nb(OH)4+/Nb(OH)5⁰ concentration ratio in seawater is smaller than 10−8, only Nb(OH)5⁰ and Nb(OH)6− are significant species in seawater. Ta(OH)5⁰ is the dominant form of Ta(V) in seawater. The Ta(OH)6−/Ta(OH)5⁰ concentration ratio is only on the order of 0.06 at the highest pH of seawater and, as is the case for Nb, cationic species are unimportant (Ta(OH)4+/Ta(OH)5⁰ < 10−8). Thus, the dominant forms of the group 5 elements are VO3(OH)²−, Nb(OH)6− and Ta(OH)5⁰.

Group 6 elements (Cr, Mo, W) are all strongly hydrolyzed. Fe(III) in seawater is strongly hydrolyzed, with its speciation partitioned among Fe(OH)2+, Fe(OH)3⁰ and Fe(OH)4−. This conclusion is somewhat controversial because (a) iron biogeochemistry is important and intensively investigated, and (b) only one somewhat problematic analytical procedure (solubility analysis) has been extensively used to investigate the Fe(OH)2+/Fe(OH)3⁰ and Fe(OH)3⁰/Fe(OH)4− transitions. The characteristics of Ru and Os speciation in seawater are very poorly understood. It is probable that both elements are very strongly hydrolyzed. Based on available data, Principal Species for Ru and Os are tentatively assigned as Ru(OH)n^(4−n) and OsO4⁰.

Elements in group 9 (Co, Rh, Ir) have generally complex chemistries and are, perhaps, only slightly better understood than the group 8 elements. The dominant oxidation number of Co in seawater is II. Co(II) exists predominantly as Co2+ and ion pairs with Cl−. Rh(III) is strongly complexed by chloride and is also strongly hydrolyzed. Investigations in 0.5 M NaCl (Miller and Byrne, in progress) indicate that Rh(III) forms a complex array of mixed-ligand complexes (RhCl_a(OH)_b^(3−(a+b))). These investigations are challenging due to slow ligand exchange kinetics.
Ir(III) forms strong chloride complexes and, as in the case of Rh(III), has slow ligand exchange rates. In analogy with Rh(III), the Principal Species of Ir(III) are tentatively assigned as IrCl_a(OH)_b^(3−(a+b)). Both Rh and Ir occur in the IV oxidation state, but the solution chemistries of Rh(IV) and Ir(IV) are very poorly understood. The group 10 elements Pd(II) and Pt(II) are strongly complexed by chloride and are significantly influenced by hydrolysis.

The solution chemistries of the group 11 elements (Cu, Ag, Au) in oxidation state I are similar. Cu(I), Ag(I) and Au(I) are strongly complexed with Cl−, and hydrolysis is insignificant. While Ag exists solely as Ag(I), Cu occurs dominantly as Cu(II) in oxygenated seawater, and oxidation number III may be important for Au. Cu(II) chemistry is dominated by carbonate complexation, while Au(III) speciation in seawater appears (tentatively) to be dominated by mixed-ligand chlorohydroxy complexes. The speciation of the group 12 metals (Zn, Cd, Hg), all in the II oxidation state, involves a progression from very weak to very strong complexation. Zn(II) occurs in seawater principally as Zn2+ and ZnCl+, Cd(II) is moderately complexed (CdCl+, CdCl2⁰ and CdCl3−), and Hg(II) is complexed very strongly as HgCl4²− and HgCl3−.

Group 13 elements (B, Al, Ga, In, Tl) in oxidation state III are very strongly hydrolyzed. The Principal Species of these elements are B(OH)3⁰, Al(OH)4−, Ga(OH)4−, In(OH)3⁰ and, tentatively, Tl(OH)3⁰. The speciation of each of these elements is significantly pH dependent. For B, Al, Ga and In, each element is partitioned between uncharged and anionic forms. In contrast, Tl(III) appears to be partitioned between Tl(OH)3⁰ and either TlCl4− or mixed chlorohydroxy species. Tl is unique among the group 13 metals in having a significant, and perhaps dominant, I oxidation state. In this form Tl(I) occurs principally as the free hydrated Tl+ ion.

Group 14 elements (C, Si, Ge, Sn, Pb) have diverse speciation characteristics. C is partitioned dominantly between CO3²− and HCO3−, while for both Si and Ge uncharged forms are dominant (Si(OH)4⁰ and Ge(OH)4⁰), with lesser concentrations (≤15%) of SiO(OH)3− and GeO(OH)3−. The sparse data available for the assessment of Sn(IV) speciation indicate that Sn(OH)4⁰ is dominant over a wide range of pH. The speciation of Pb is apparently unique among seawater constituents in that Pb(II) is partitioned between chloride complexes and carbonate complexes [17]; the latter are dominant above pH 7.85.

Group 15 elements (N, P, As, Sb, Bi) in oxidation states V and III are strongly hydrolyzed in seawater, and oxidation number V is favored relative to III for all group 15 elements except Bi. Bi is present in seawater dominantly as Bi(OH)3⁰. N(V) and N(III) exist solely as the unprotonated anions NO3− and NO2−. The NH4+/NH3⁰ ratio in seawater is significantly pH dependent and is always larger than ∼10. The dominant forms of P(V) and As(V) in seawater are HPO4²− and HAsO4²−. The speciation of Sb(III) is similar to that of Bi(III) in that Sb(OH)3⁰ is dominant over a wide range of pH. The As(OH)3⁰/As(OH)4− ratio in seawater is pH dependent and generally larger than six.

Group 16 elements include O, S, Se, Te and Po. O²− and OH− are found in association with elements in every group of the periodic table except 1, 2 and 18. Dissolved O2 is very important in seawater because of its strong influence on the oxidation/reduction behavior of solutions. The dominant form of OH− in seawater is MgOH+.
S exists in oxygenated seawater as SO4²− and its ion pairs with group 1 and group 2 elements, and is not significantly protonated except at very low pH. Se exists in seawater as both Se(VI) and Se(IV). In the higher oxidation state Se exists as SeO4²−, with protonation characteristics very similar to SO4²− (pK ≈ 1). As Se(IV), selenium is partitioned between HSeO3− and SeO3²−, with the former dominant at low pH and the latter dominant at high seawater pH. Te also exists in seawater in the VI and IV oxidation states. In the case of Te(VI), since pK ≈ 7.35 for the Te(OH)6⁰/TeO(OH)5− transition, TeO(OH)5− is the predominant species. For Te(IV), pK ≈ 8.85 for the TeO(OH)3−/TeO2(OH)2²− partition, and TeO(OH)3− is thereby predominant. Little is known about Po equilibria in solution.

The group 17 elements (F, Cl, Br, I and At) exist with −I and V oxidation numbers, and the −I state is predominant for the lighter elements. Fluoride occurs in seawater as an approximately equimolar mixture of free F− and MgF+. Cl and Br occur dominantly as the unassociated anions Cl− and Br−. The predominant oxidation number of I is V: I(V) occurs as IO3− and is, to a small extent, ion paired with Mg2+. I− is found in seawater at substantially lower concentrations than IO3−. Little is known about the solution chemistry of highly radioactive At.

Overview of speciation in seawater

The results shown in Table 1 indicate that only a relatively small number of elements have major species that involve neither hydrolyzed forms nor carbonate complexation. Such elements typically have oxidation numbers I, II and −I, and are found in groups 1, 2, 11, 12 and 17, and in period 4 (groups 7-10). Only eight to nine elements have speciation schemes that strongly involve chloride complexation. Such elements are found in groups 9 (Rh(III) and Ir(III)), 10 (Pd(II) and Pt(II)), 11 (Cu(I), Ag(I), Au(I)), 12 (Cd(II), Hg(II)) and 14 (Pb(II)). However, of these elements, both Rh and Ir are importantly influenced by hydrolysis, Pd(II) and Pt(II) are significantly influenced by hydrolysis, and Pb(II) is strongly influenced by carbonate complexation. Of the very large number of hydrolyzed elemental forms in seawater, approximately 17 have speciation schemes that are strongly pH dependent. These elements include Be(II) and Sc(III) (groups 2 and 3), V(V), Nb(V) and Ta(V) (group 5), Cr(III) (group 6), Fe(III) and Ru(III) (group 8), Rh(III) and Ir(III) (group 9), B(III), Al(III), In(III) and Tl(III) (group 13), C(IV), Si(IV) and Ge(IV) (group 14), P(V) and As(V) (group 15), and Se(IV) and Te(IV) (group 16). As such, no elements with oxidation numbers I, II and VI (other than Be(II)) have hydrolyzed forms whose relative concentrations are strongly pH dependent within the normal pH range of seawater. Approximately 17 elements with atomic numbers less than 92 (Cu, Pb, Y, and the lanthanides) have speciation schemes that strongly involve or are dominated by carbonate complexation. With the inclusion of U(VI) and the 9 actinides with oxidation number III (Am-Lr), it is seen that carbonate complexation is important for a large portion of the periodic table. Altogether, including the elements strongly complexed by carbonate and the seventeen or more elements with pH-dependent hydrolyzed major species, the seawater speciation of more than forty elements is strongly influenced by pH. Elements having exceptionally poorly understood speciation schemes in seawater include Ru and Os (group 8), Rh and Ir (group 9), Pt (group 10) and Au (group 11).
Speciation of the latter four elements may be dominated by a complex array of chlorohydroxy complexes and, perhaps, a variety of types of halides. It should also be anticipated that, for metals forming strong covalently bonded species, complexation by ligands containing reduced sulfur may dramatically change future Principal Species assessments. This concern is particularly relevant to Rh, Ir, Pd, Pt, Au, Hg, and Tl. Appreciation of the role of carbonate in seawater complexation has grown steadily over the past forty years. Experimental difficulties have impeded the progress of investigations involving the complexation of strongly hydrolyzed metals by carbonate ions. Using new technologies, however, future improvements in carbonate complexation assessments are probable and the perceived role of carbonate in trace element complexation may significantly expand. In view of the importance of pH dependent speciation schemes for a wide variety of elements in seawater, it is important to note that substantial uncertainties remain in the equilibrium characterizations presented in Table 1. In many cases, estimated speciation schemes must be based on data obtained using a single analytical technique. It is, furthermore, particularly problematic when speciation characterizations of strongly hydrolyzed metals at high pH are based solely on solubility analyses. Among other complicating factors, the experimental solutions used in solubility analyses generally have much higher metal concentrations than are observed in the open ocean. Consequently, solubility analyses are conducive to the formation of a more complex set of hydrolyzed species (e.g., polymers and colloids) than are generally found in the oceans. Deconvolution of the data generated in such analyses can be challenging.
4,702.6
2002-01-22T00:00:00.000
[ "Chemistry", "Environmental Science" ]
Sea Anemone Heteractis crispa Actinoporin Demonstrates In Vitro Anticancer Activities and Prevents HT-29 Colorectal Cancer Cell Migration
Actinoporins are the most abundant group of sea anemone cytolytic toxins. Their membranolytic activity is of high interest for the development of novel anticancer drugs. However, to date the activity of actinoporins in malignant cells has been poorly studied. Here, we report on a recombinant analog of Hct-S3 (rHct-S3), belonging to the combinatory library of Heteractis crispa actinoporins. rHct-S3 exhibited cytotoxic activity against breast MDA-MB-231 (IC50 = 7.3 µM), colorectal HT-29 (IC50 = 6.8 µM), and melanoma SK-MEL-28 (IC50 = 8.3 µM) cancer cells. The actinoporin effectively prevented epidermal growth factor-induced neoplastic transformation of JB6 Cl41 cells by 34% ± 0.2 and decreased colony formation of HT-29 cells by 47% ± 0.9, MDA-MB-231 cells by 37% ± 1.2, and SK-MEL-28 cells by 34% ± 3.6. Moreover, rHct-S3 decreased proliferation and suppressed migration of colorectal carcinoma cells by 31% ± 5.0 and 99% ± 6.4, respectively. The potent anti-migratory activity was proposed to be mediated by decreased matrix metalloproteinase-2 and -9 expression. In addition, rHct-S3 induced programmed cell death by cleavage of caspase-3 and poly (ADP-ribose) polymerase, as well as regulation of Bax and Bcl-2. Our results indicate rHct-S3 to be a promising anticancer drug with a high anti-migratory potential.
Introduction
Cancer is a major public health burden, with tens of millions of people being diagnosed around the world every year. Eventually, more than half of the patients succumb to their disease despite new developments [1]. In 2018, 9.6 million people died from cancer according to the World Health Organization [2], with increasing numbers in developing countries. Lung, breast, colorectal, and prostate cancer belong to the most frequently diagnosed malignancies worldwide [2]. These diseases claim the lives of more than a million people annually. Although scientific and technological progress has allowed the development of new approaches such as gene therapy [3], stem cell transplantation [4], immunotherapy [5], and therapy by nanoparticles [6], the search for new anticancer agents remains highly relevant.
Hct-S3 (177 amino acid residues) is one of the most represented isoforms belonging to the multigene family of H. crispa actinoporins. A recombinant analog of Hct-S3 (rHct-S3) was expressed in Escherichia coli strain Rosetta (DE3) as a fusion protein containing glutathione-S-transferase, a polyhistidine tag, an enteropeptidase cleavage site, and the actinoporin. In order to avoid the denaturation of the target polypeptide and increase its yield, we applied a high-pressure homogenization approach using a French-press homogenizer for cell disruption instead of ultrasonication. The fusion protein, with a molecular mass of ~50 kDa, was isolated using metal-affinity chromatography and cleaved by enteropeptidase. Next, the target actinoporin was purified on a soybean trypsin inhibitor-affinity column and further desalted (Figure 1a). The final yield of rHct-S3 after high-pressure homogenization was 1 mg/L of cell culture, in contrast to a 0.2 mg/L yield after ultrasonication. The molecular mass of the polypeptide was determined by MALDI-TOF/TOF mass spectrometry as 19,393 Da (Figure 1b), which is consistent with the predicted molecular mass (19,390 Da). The N-terminal amino acid sequence (15 aa) determined by automated Edman degradation matched well with the amino acid sequence deduced from cDNA earlier.
The Effect of rHct-S3 on Cell Viability
In order to determine the cytotoxic effect of rHct-S3, a panel of human cancer cell lines, HT-29 (colorectal carcinoma), MDA-MB-231 (triple-negative breast cancer), and SK-MEL-28 (malignant melanoma), as well as normal mouse epidermal JB6 Cl41 cells and human embryonic kidney HEK 293 cells, were treated with rHct-S3 at a concentration range of 0.01-10 µM for 24 h, and cell viability was estimated by MTS assay. rHct-S3 had comparable effects on the viability of the cell lines, with an IC50 of 8.6 µM for JB6 Cl41 cells (Figure 2a). The results are expressed as the percentage of inhibition, i.e. the reduction in absorbance produced by rHct-S3 treatment compared to the non-treated cells. Results are expressed as the mean ± standard deviation (SD).
The Effect of rHct-S3 on EGF-Induced Neoplastic Transformation of Normal Cells and Colony Formation of Cancer Cells
The effects of rHct-S3 on the neoplastic transformation of JB6 Cl41 cells induced by EGF, and on the colony formation and growth of cancer cells, were studied by soft agar assay, which is considered to be the most accurate type of in vitro test for detecting malignant transformation of cells [29]. The actinoporin was found to inhibit the EGF-induced neoplastic transformation of JB6 Cl41 cells by 10% ± 5.0, 23% ± 2.5, and 34% ± 0.2 at subtoxic concentrations of 1, 2, and 4 µM, respectively (Figure 3a,b). Moreover, rHct-S3 decreased the number of colonies of HT-29 cells by 25% ± 1.8, 33% ± 0.1, and 47% ± 0.9 at concentrations of 1, 2, and 4 µM, respectively, compared to non-treated cells (control) (Figure 3c). At the same doses, rHct-S3 inhibited the colony formation of MDA-MB-231 cells by 17% ± 2.4, 20% ± 2.5, and 37% ± 1.2, and of SK-MEL-28 cells by 18% ± 1.5, 24% ± 0.5, and 34% ± 3.6, respectively (Figure 3d,e). Because the activity of rHct-S3 was most pronounced in colorectal carcinoma HT-29 cells, further experiments were carried out with this cell line. It should be noted that the chemotherapeutic drug cisplatin, used as a positive control in this study, inhibited colony formation of HT-29, MDA-MB-231, and SK-MEL-28 cells by 46%, 75%, and 39%, respectively, at a non-cytotoxic dose of 3 µM (Figure 3). These results indicate that the actinoporin has a promising anticancer potential.
The Effect of rHct-S3 on Migration of Colorectal Carcinoma HT-29 Cells
We investigated the effect of rHct-S3 on the migration of colorectal carcinoma HT-29 cells, which have a high metastatic potential, using a scratch assay. It was demonstrated that rHct-S3 suppressed the migration of HT-29 cells by 33% ± 10.2, 50% ± 7.5, and 99% ± 6.4 at concentrations of 1, 2, and 4 µM, respectively, compared to the control group (Figure 4a,b). In order to reveal the impact of the inhibition of proliferation by rHct-S3 on migration, the antiproliferative activity of rHct-S3 against HT-29 cells was checked at 24, 48, 72, and 96 h of treatment (Supplementary Figure S1).
It was found that rHct-S3 at concentrations of 1, 2, and 4 µM slightly (by not more than 10%) decreased the proliferation rate of HT-29 cells after 24 h and 48 h of treatment, while it inhibited cell proliferation by 11% ± 3.0, 26% ± 1.2, and 31% ± 5.0, respectively, after 96 h of treatment. These results indicate that rHct-S3 possesses a moderate antiproliferative activity. To elucidate the potential mechanism of this anti-migratory activity, we evaluated the effect of rHct-S3 on the expression level of the matrix metalloproteinases (MMP)-2 and MMP-9, which play a pivotal role in cancer cell invasion and metastasis. Indeed, the actinoporin effectively inhibited the expression of MMP-2 and MMP-9 (Figure 4c) at a concentration of 2 µM. In addition, we estimated whether rHct-S3 affects the activation of caspase-3, a known executor of apoptosis. The upregulation of cleaved caspase-3 was detected in HT-29 cells treated with rHct-S3. Additionally, and in line with this, we detected a degradation of poly (ADP-ribose) polymerase (PARP) as well as Bcl-2 down-regulation and Bax up-regulation (Figure 4d). Thus, rHct-S3 decreases the migratory activity of colorectal carcinoma HT-29 cells by the inhibition of MMP-2 and MMP-9 and induces apoptosis via the activation of caspase-3.
Discussion
Actinoporins are the major components of sea anemone venom, which disrupt cell membranes by pore formation [30]. H. crispa venom contains numerous actinoporin isoforms, encoded by a multigene family [28]. Hct-S3 is one of the isoforms of H. crispa actinoporins belonging to the Hct-S group, with Ser at the N-terminus (Figure 5). Earlier, the recombinant analog of Hct-S3 was obtained [26,31], and its hemolytic activity was comparable with that of well-characterized actinoporins such as RTX-A from H. crispa [32], EqII from A. equina [33] and StnII from S. helianthus [34]. Comparative analysis of the amino acid sequences of known actinoporins and Hct-S3 revealed that Hct-S3 shares 87-89% identity with Gigantoxin-4 from S. gigantea, RTX-A, and StnI from S. helianthus, which possess anticancer activity (Figure 5). However, their anticancer mechanism has not been studied in detail. We attempted to elucidate the mechanism of action of actinoporins, in particular Hct-S3, in different human cancer cells. The recombinant analog of Hct-S3 was obtained using a previously developed scheme [26] with a change of the cell disruption approach. The lack of carbohydrates and disulfide bridges simplifies the production of recombinant actinoporins by heterologous expression in E. coli. However, there are some difficulties with the isolation of soluble actinoporins due to protein aggregation as inclusion bodies during the cells' ultrasonication. Ultrasonic homogenization is a high-energy process of cell disruption, which may lead to sample heating and result in the denaturation of proteins.
Therefore, to minimize protein denaturation we used high-pressure homogenization of the E. coli cells, which allowed us to increase the yield of the soluble form of rHct-S3 five-fold. Cytotoxic effects of rHct-S3 were studied in normal mouse epidermal cells, human embryonic kidney cells, and human colon carcinoma, breast cancer, and melanoma cells. rHct-S3 exhibited cytotoxic activity against all tested cell lines with comparable IC50 values (Figure 2), which were 100-1000-fold higher than those found for other actinoporins [17,18,35]. Previously, StnI and the hemolytic fraction of S. helianthus were shown to possess cytotoxic activities against colorectal cancer or breast cancer cells, respectively, while RTX-A demonstrated potent cytotoxic activity against both tested cancer cell lines [16].
Carcinogenesis is known to be a multistage process, which includes the initiation (transformation of normal cells into cancer cells), development (formation of colonies of cancer cells) and progression (growth of colonies of cancer cells) of cancer. Cancer prevention is gaining increasing attention because it may be a promising alternative to cancer treatment, sparing the complications caused by advanced disease. The involvement of multiple factors and developmental stages, together with our increased understanding of cancer at the epigenetic, genetic, molecular, and cellular levels, opens up enormous opportunities to interrupt and reverse the initiation and progression of the disease, and provides scientists with numerous targets that can be arrested by physiological and pharmacological mechanisms, with the goal of preventing end-stage invasive disease and impeding or delaying the development of cancer [36]. One promising strategy for combating carcinogenesis is to search for substances that can prevent the transformation of normal cells into cancer cells induced by various stimulating factors, e.g., epidermal growth factor (EGF), the phorbol ester TPA, ultraviolet radiation (UV), etc. The promotion-sensitive mouse epidermal JB6 Cl41 cells are known to respond irreversibly to tumor promoters such as EGF with the induction of anchorage-independent growth in soft agar [37]. Therefore, this well-established culture system was used to study the cancer-preventive activity of rHct-S3. Indeed, the actinoporin delayed the EGF-induced neoplastic transformation of JB6 Cl41 cells (Figure 3a,b) and suppressed the colony formation of all cancer cell lines (Figure 3c-e), with the inhibition level for HT-29 and SK-MEL-28 cells comparable to that of cisplatin. Similar activity was previously demonstrated for RTX-A [16]. This polypeptide prevented the malignant transformation of JB6 P+ Cl41 cells and suppressed the growth of HeLa cell colonies at nanomolar concentrations [16]. The most significant cancer-preventive activity of rHct-S3 was found in colon cancer cells. Therefore, we examined the effects of rHct-S3 on the migration of colon cancer cells, as well as on their proliferation, in order to incorporate the influence of cell proliferation in the interpretation of the results of the migration assays. In fact, tumor cell migration essentially contributes to invasion and metastatic spread, ultimately resulting in progression of the disease. More than 30% of patients with colorectal carcinoma have clinically detectable metastases at the time of primary diagnosis [38]. Since the most serious complication and the main cause of death of patients with colorectal carcinoma are distant metastases, the evaluation of the antimetastatic activity of potential therapeutic agents continues to be an important and urgent task. The mechanism of metastasis formation is complex and not fully understood. The migration, intravasation, and extravasation of cancer cells, and the formation of new vessels (neoangiogenesis) to consolidate a secondary tumor at a distant site, are the most important steps of the metastasis process [39]. Remarkably, rHct-S3 almost completely suppressed the migration of HT-29 cells at a concentration of 4 µM (Figure 4a,b). Moreover, the actinoporin possessed a moderate antiproliferative activity (Supplementary Figure S1), but its impact on the anti-migratory activity of rHct-S3 was not significant.
During metastasis, the degradation of the extracellular matrix (ECM) and components of the basement membrane by proteases facilitates the detachment of cancer cells, their crossing of tissue boundaries, and their invasion into adjacent tissue compartments [40]. In recent years, the importance of cancer-associated proteases such as the matrix metalloproteinases MMP-2 and MMP-9 in invasion and metastasis has been reported for a variety of solid malignant tumors [41]. Indeed, the actinoporin was found to effectively inhibit the expression of MMP-2 and MMP-9 (Figure 4c), which apparently resulted in the decrease in HT-29 cell migration. Recently, it was shown that caspase-3 is also able to influence the migration and invasion of colorectal cells [42]. In addition, caspase-3 is a key executioner of programmed cell death. In fact, the cleavage of caspase-3 upon rHct-S3 treatment, followed by PARP cleavage, mediates both the anti-migratory activity and the induction of apoptosis in HT-29 cells (Figure 4d). In line with the pro-apoptotic activity of rHct-S3, an up-regulation of pro-apoptotic Bax and a suppression of anti-apoptotic Bcl-2 were observed. In conclusion, the H. crispa actinoporin shows promising anticancer activity with a strong inhibitory effect on the migratory potency of cancer cells. We revealed for the first time that the actinoporin was able to inhibit cancer colony formation and cell migration via suppression of MMP-2 and MMP-9 expression, and to induce cell apoptosis via activation of caspase-3, cleavage of PARP, activation of Bax, and suppression of Bcl-2 expression. The results indicate a high potential of the actinoporin to prevent cancer disease progression. Deeper investigations of the underlying mechanisms of its effect on the apoptotic PI3K/AKT/mTOR and cell adhesion signaling pathways are still to be performed.
Expression and Isolation of Recombinant Hct-S3
The recombinant plasmid obtained earlier was transformed into E. coli strain Rosetta (DE3) (Novagen, Merck KGaA, Darmstadt, Germany). Transformed cells were cultured at 37 °C in 1 L of Luria-Bertani medium containing 50 µg/mL kanamycin (Gibco, Thermo Fisher Scientific, Gaithersburg, MD, USA) until an optical density (OD600) of ~0.5 was reached. After induction with IPTG at a concentration of 0.2 mM, the cells were incubated at 19 °C for 18 h at 180 rpm and centrifuged for 6 min at 6000 rpm at 4 °C, and the supernatant was removed. The presence of rHct-S3 was determined in 12% polyacrylamide gel by Laemmli's SDS-PAGE method [43]. The precipitate was resuspended in the start buffer for affinity chromatography (400 mM NaCl, 20 mM Tris-HCl buffer, pH 8.0) and disrupted by a French-press homogenizer (Thermo Fisher Scientific, Waltham, MA, USA) using the mini-cell (3.7 mL). Lysed cells were centrifuged for 10 min at 10,000 rpm to remove all insoluble particles. The supernatant was applied to Ni-NTA agarose (Qiagen, Venlo, Netherlands), and the fusion protein was purified with 5 volumes of wash buffer (400 mM NaCl, 50 mM imidazole, 20 mM Tris-HCl buffer, pH 8.0) and 5 volumes of start buffer. The fusion protein was cleaved by enteropeptidase (1 unit/mg protein) at room temperature for 18 h at 80 rpm. The recombinant actinoporin was separated from the enteropeptidase on soybean trypsin inhibitor-agarose (Sigma-Aldrich, St. Louis, MO, USA) and desalted on a centrifugal filter tube (Millipore, Lenexa, KS, USA) with a molecular weight cutoff of 3000 Da. The molecular mass of the purified rHct-S3 was analyzed on an Ultra Flex III MALDI-TOF/TOF mass spectrometer (Bruker, Bremen, Germany).
The amino acid sequence of rHct-S3 was determined on an automated protein sequencer, Procise 492 cLC (Applied Biosystems, Foster City, CA, USA). The purified rHct-S3 was dissolved in PBS, filtered through 0.22 µm "Millipore" membranes (Billerica, MA, USA) and used for the bioactivity experiments.
MTS Assay
To determine the cytotoxic activity of rHct-S3, cells (1.0 × 10⁴) were seeded in 96-well plates ("Jet Biofil", Guangzhou, China) and cultured in 200 µL of complete culture medium for 24 h at 37 °C in a 5% CO2 incubator. The cell monolayer was washed with PBS and treated either with PBS (control) or with various concentrations of rHct-S3 (0.01-10 µM) in fresh appropriate culture medium for 24 h. Subsequently, the cells were incubated with 15 µL of MTS reagent ("Promega", Madison, WI, USA) for 3 h, and the absorbance of each well was measured at 490/630 nm using a Power Wave XS microplate reader ("BioTek", Winooski, VT, USA). To determine the antiproliferative activity of rHct-S3, cells (0.7 × 10⁴) were seeded in 96-well plates and cultured in 200 µL of complete culture medium for 24 h at 37 °C in a 5% CO2 incubator. The cell monolayer was washed with PBS and treated either with PBS (control) or with rHct-S3 (1, 2, and 4 µM) in fresh appropriate culture medium for 24, 48, 72, or 96 h. Subsequently, the cells were incubated with 15 µL of MTS reagent for 3 h, and the absorbance of each well was measured at 490/630 nm as above.
Scratch-Wound Assay
The cell migration assay was performed as previously described [44]. Briefly, HT-29 cells (3 × 10⁵ cells/mL) were seeded into 6-well plates, and 24 h later the culture medium was removed and a straight scratch was created using a 200 µL sterile pipette tip. Cells were washed twice with PBS to remove cellular debris, the medium was replaced with appropriate complete culture medium containing rHct-S3 (1, 2, and 4 µM), and the cells were incubated for 96 h. All experiments were repeated at least three times in each group. For the image analysis, cell migration into the wound area was photographed at 0 and 96 h using a Motic AE 20 microscope and analyzed with ImageJ software. The cell migration distance was estimated by measuring the width of the wound and is expressed as a percentage of each control for the mean of the wound closure area.
Statistical Analysis
All assays were performed in at least three independent experiments. Results are expressed as the mean ± standard deviation (SD). Student's t test was used to evaluate the data with the following significance levels: * p < 0.05, ** p < 0.01, *** p < 0.001.
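As a rough illustration of the statistical treatment just described, the following Python sketch computes percent inhibition relative to untreated controls, reports the mean ± SD over independent experiments, and applies a two-sample Student's t test with the significance levels used in the paper. The absorbance values are hypothetical placeholders, not data from the study.

import numpy as np
from scipy import stats

control = np.array([1.02, 0.98, 1.05])   # A490-A630, untreated wells (hypothetical)
treated = np.array([0.55, 0.60, 0.52])   # A490-A630, rHct-S3-treated wells (hypothetical)

# Percent inhibition relative to the mean control absorbance
inhibition = 100.0 * (1.0 - treated / control.mean())
print(f"inhibition = {inhibition.mean():.1f}% +/- {inhibition.std(ddof=1):.1f}% (SD)")

# Two-sample Student's t test, with the paper's significance thresholds
t_stat, p_value = stats.ttest_ind(treated, control)
stars = "".join("*" for cutoff in (0.05, 0.01, 0.001) if p_value < cutoff)
print(f"t = {t_stat:.2f}, p = {p_value:.4f} {stars}")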
5,054
2020-12-01T00:00:00.000
[ "Biology" ]
A Bayesian approach to identify Bitcoin users
Bitcoin is a digital currency and electronic payment system operating over a peer-to-peer network on the Internet. One of its most important properties is the high level of anonymity it provides for its users. The users are identified by their Bitcoin addresses, which are random strings in the public records of transactions, the blockchain. When a user initiates a Bitcoin transaction, his Bitcoin client program relays messages to other clients through the Bitcoin network. Monitoring the propagation of these messages and analyzing them carefully reveal hidden relations. In this paper, we develop a mathematical model using a probabilistic approach to link Bitcoin addresses and transactions to the originator IP address. To utilize our model, we carried out experiments by installing more than a hundred modified Bitcoin clients distributed in the network to observe as many messages as possible. During a two-month observation period we were able to identify several thousand Bitcoin clients and bind their transactions to geographical locations.
Introduction
Bitcoin is the first widely used digital currency, developed by Satoshi Nakamoto after the beginning of the financial crisis, in 2009 [1]. A distinctive feature of Bitcoin is that there is no central authority overseeing transactions; users are connected via a peer-to-peer network where they announce any transaction they wish to make. Transactions can then be validated by anyone using the publicly available list of transactions, the blockchain, which is in turn generated in a proof-of-work system. Cheating (e.g. including invalid transactions in the blockchain) would thus require one entity to control more than 50% of the computing power that users dedicate to generating the blockchain. In accordance with the decentralized nature of the system, the specifications of the network protocol are publicly available, while several open-source client programs implementing the protocol exist [2]. One of the key characteristics of Bitcoin is the high amount of anonymity it provides for its users [3]. Although one can learn the details of the transactions via the blockchain, it is still unknown who the users initiating those transactions are. This is possible since, as there is no authority overseeing the operation of the system, users do not need to provide any form of identification to join; anyone with an Internet connection can download a client program, which then allows them to generate any number of Bitcoin addresses that they can use in transactions to send or receive Bitcoins. As a result, the identity of Bitcoin users is only revealed if they publish their Bitcoin address or if this information is intercepted in some way outside the Bitcoin system. While anonymity is not among the main design goals of the Bitcoin system [3], Bitcoin is widely considered a highly anonymous way of performing financial transactions and is often utilized for illegal uncontrolled payments [4], alongside legal uses where the involved parties do not wish to disclose their identities to controlling entities in the traditional financial system, e.g. banks or governments. In this paper, we present a probabilistic model based on the information propagating over the Bitcoin network, which gives the possibility of identifying the users initiating the transactions. In this case, identification means binding the transactions to the IP addresses where they were created. The basic idea consists of two main steps.
First, the probability is determined for each transaction that a specific client (identified by its IP address) created it. Assuming that the creator of the transaction controls the Bitcoin addresses from which money is sent in it, this step then results in possible IP address-Bitcoin address pairings. Next, the most likely Bitcoin address-client pairings are identified by combining the probabilities in the list of pairings compiled in the previous step. This is further elaborated by grouping Bitcoin addresses that belong to the same user with high probability based on the transaction network. Finally, the geographical localization of the IP addresses opens the door for a large-scale analysis of the distribution and flow of Bitcoin. The rest of the paper is structured as follows. Section 3 discusses the relevant characteristics of Bitcoin and provides the necessary background for the further sections. In section 4, the mathematical model used for the deanonymization is explained. The data collection is described in section 5. Section 6 presents the results of the application of the model. Finally, the method described in this study is compared to related work on the topic in section 2.
Related work
In accordance with the innovative ideas behind it and the high amount of interest it has generated, there has been a significant amount of work focusing on Bitcoin and cryptocurrencies. Works focusing on privacy and anonymity show that the statistical processing of a large amount of seemingly insignificant information can take an attacker closer to revealing the identity of people using Bitcoin. Androulaki et al. [5] evaluated the privacy of Bitcoin by analyzing the system using a simulator. After grouping Bitcoin addresses, they used behavior-based clustering techniques (K-Means and Hierarchical Agglomerative Clustering algorithms) to bind the Bitcoin addresses to real users. Reid and Harrigan [6] used mainly offline data processing of the blockchain to analyze the transaction graph. They identified its clusters and components, and analyzed the degree distribution of the user network. They also showed that the analysis of publicly available data from social websites and forums can reveal the Bitcoin addresses of some users, similarly to previous work in different contexts [7]. Biryukov, Khovratovich and Pustogarov provided a method which connects users to IP addresses [8]. They connected to all publicly available Bitcoin nodes (servers) and listened to the messages they were relaying. They used Bitcoin's peer discovery mechanism to link transactions to their originators even if these do not accept incoming connections: the servers that broadcast a newly connected client's IP address were assumed to be the same set of servers which first relayed its transactions. The difficulty of this method is that a lot of connections have to be established to reach good results as the number of servers increases. On the other hand, it promises results for Bitcoin clients which monitoring nodes cannot directly connect to (e.g. because they are protected by firewalls and connect to a limited number of peers). In contrast, our methodology requires direct connections to the originators, and we strive to achieve this by running a lot of Bitcoin clients accepting a large number of incoming connections. While they use a fixed number of message relays to infer the local network of the originator, we use a short initial time span for message broadcasts to infer the actual originator.
A further main difference in our methodology is combining information from many transactions and linking addresses based on the blockchain, which provides more transactions per Bitcoin user in that step. Our probabilistic approach could be combined with the methodology in [8] in order to identify the "hidden" Bitcoin nodes with higher probability. This also allows linking Bitcoin address groups belonging to the same user based on their being originated from the same client, even if these addresses could not be linked based only on the blockchain. Koshy et al. also monitored the messages about the transactions, and they classified the transactions into distinct relay patterns [9]. After applying heuristics to determine the possible owner IP addresses of a transaction, they computed simple aggregate statistics to filter out the correct Bitcoin address-IP address pairings for both input and output addresses. Venkatakrishnan et al. proposed a new message relay mechanism called dandelion, which could prevent nodes from being deanonymized with high probability [10]. They proposed that the message propagation should have two phases: first, the message is sent to exactly one randomly chosen connected client for a random number of hops by every client, and after the first phase the message can be further broadcast with a Poisson process from the nodes that received the transaction. The authors also highlighted that the requirements of a high level of anonymity and low latency are properties that can only be improved at each other's expense. Decker et al. [11] and Donet Donet et al. [12] were among the first to empirically study the peer-to-peer network that enables Bitcoin to operate, characterizing some of the important properties of information propagation. Neudecker et al. [13] present a method for inferring the topology of the peer-to-peer network based on the observation of the message propagation process. Their methodology is similar to ours, as they connect to a large number of Bitcoin nodes and observe the messages received announcing new transactions. The main difference is in the focus of the study: while our goal is to identify the first node announcing a transaction without knowledge of the underlying peer-to-peer network topology, the authors in Ref. [13] aim to reconstruct the network topology but do not consider linking Bitcoin addresses to nodes in the network. In further recent work, Goldfeder et al. [14] show that online merchants accepting cryptocurrencies potentially leak substantial information that would allow third parties to identify the transactions in the blockchain, thus linking information collected via tracking and cookies to Bitcoin addresses; Miller et al. [15] focus on Monero, a cryptocurrency with stronger anonymity features, and show that under certain circumstances it might still be possible to defeat Monero's mixing mechanism and thus its enhanced anonymity as well. Finally, we note that a good review of recent developments, including research focusing on security and privacy in Bitcoin, can be found in the work of Conti et al. [16]. In their recent study, Wang and Pustogarov [17] also investigate transaction information propagation in the Bitcoin network. They attempt to characterize clients that are operating behind TOR, NAT or firewalls as well.
From their seven days of data they report that 50 unreachable clients are involved in 43% of transaction propagation in the network, and that many unreachable clients are running in public cloud services, potentially to crawl the Bitcoin network. Although Bitcoin lacks a material basis and has no territorial currency zone, Pel in his thesis [18] shows its geographical manifestation. Mining patterns, user procurement of the cryptocurrency and Bitcoin-related startups are examined in [18]. These peripheral processes can naturally be linked to geographic space, and strong correlations are found between the Bitcoin volume and financial activity, the density of computer centers and the location of mining pools. In a recent thesis, Brown [19] studies the geographical distribution of Bitcoin and Ethereum mining pools and finds connections to electricity pricing and the TOR exit node distribution. Basically, the following common methods are used to reveal the identities of Bitcoin users: 1. analysis of transactions with multiple inputs and grouping of the input Bitcoin addresses of the same transactions; 2. analysis of the Bitcoin flow in the transaction graph using clustering techniques; 3. analysis of propagating network-layer information to bind its content to the users; 4. and finally, using publicly available information (e.g. in forums) to connect Bitcoin addresses to identities. The methodology presented in our work combines approaches of types 1 and 3, mainly based on the statistical processing of network propagation properties.
The main characteristics of Bitcoin
In order to use Bitcoin one has to connect to the Bitcoin network using an open-source client program [2]. In this work, we concentrate on the Bitcoin Core client [2], whose source code we inspected and modified for the purpose of data collection. By default, this client establishes eight connections to other clients. If there is a link between two clients, they are said to be connected. Clients exchange information of different types, e.g. the transactions they know about, their state, cryptographic signatures and others, through the network. This is necessary for the validation of the transactions, as it is done by the entire network. In Bitcoin transactions, Bitcoin addresses play a similar role to bank account numbers in regular currency transactions. However, there are two major differences: • each user may have as many Bitcoin addresses as they would like, • and multiple source and destination Bitcoin addresses can be involved in a single transaction. In the case of the Bitcoin Core client program that was in operation at the time of the measurement, when a user initiates a transaction, the client program (the originator) relays a message to a randomly chosen connected client in every 100 ms time interval. This method is referred to as trickling, and its goal is to hide the source of the transaction. The clients receiving this message (which are not the originators of the transaction) use a slightly more complex algorithm to further send the information. Besides trickling, they also relay the message to the other clients with a probability of 1/4 (in every 100 ms). We expect that other types of clients apply the same mechanisms to protect the privacy of the users. We note that as of today, the previously described mechanism for relaying transactions has been changed in the case of the Bitcoin Core client. Currently, every client maintains a queue of the messages to be relayed for each connected client and relays them according to a Poisson process.
The parameter of the process is 5 sec for incoming connections and 2.5 sec for outgoing connections. In this work, we consider the previously described method, which was in use during the time of our data collection; we believe that our model could be used for the latter case as well with minor modifications. In accordance with the previously described methodology, the network relies on clients relaying transactions to have them spread throughout the entire network. As a consequence, an arbitrarily chosen client is not necessarily directly informed about the transaction by the originator (see Fig 1). As no state, bank, institute or organization controls or ensures the validity of Bitcoin transactions, cryptographic methods are used by the whole Bitcoin community for this purpose. The security of Bitcoin is based on the blockchain. In this study, the source Bitcoin addresses, the destination Bitcoin addresses, the timestamps and the transferred volume of Bitcoin are extracted from the blockchain for each transaction. If the owners of the Bitcoin addresses were known, the blockchain would reveal all of the transactions of each Bitcoin user. The open nature of the system mitigates this concern, as anyone can generate any number of Bitcoin addresses without having to reveal their identity. Nevertheless, if a Bitcoin address can be linked to someone (either because they share it in order to receive Bitcoins or by any other method), the transaction history of that Bitcoin address can be trivially retrieved from the blockchain. Thus, keeping the association between Bitcoin addresses and real identities secret is crucial for users who wish to maintain their privacy.
Fig 1. A new transaction is initiated by the client "S". At first it informs the clients denoted by "I" (they are informed directly by the originator, so only the trickling method is used for the relay). Then, these clients relay the transaction further (possibly among other type I clients) to the ones denoted by "II". https://doi.org/10.1371/journal.pone.0207000.g001
A Bayesian method for the identification of Bitcoin users
In this section we present the methodology used to assign probabilities to the distinct IP address-user pairings, which consists of three main steps. An overview of the process is illustrated in Fig 2. First, the propagating messages are observed and recorded by several monitoring clients in order to cover as large a part of the network as possible. For each transaction, the monitoring clients record the list of clients who relayed the transaction in the first time segment (see the definition in the next subsection). These are the possible originators of the transaction. After some theoretical considerations, we assign to each client a probability of being the originator, separately for each transaction that we recorded. Next, the blockchain is used to group the Bitcoin addresses owned by the same user. Additionally, the blockchain also enables the calculation of the balances of the users for further analysis. Last, having possibly several transactions of the same Bitcoin address, together with the grouping of Bitcoin addresses by user, allows us to combine measurements from multiple transactions to identify users with higher confidence. By combining the probabilities from the first step, the users (and their balances) are paired with the clients that are most likely the originators of their transactions.
The clients can be geographically localized through their IP addresses, which allows the determination of the geographical distribution and flow of Bitcoins.
Step 1: Individual probabilities
Let us consider a single transaction observed by one monitoring client. A monitoring client connected to the originator does not necessarily receive the message from the originator first, because in some cases it can be relayed faster through a mediator client (Fig 3). One iteration of sending messages happens in every 100 ms time interval. Let us first calculate the probability that the originator relays the message to the monitoring client in one specific iteration. If the originator has c_orig clients connected to it (among which one is the monitoring client), then in every iteration there is a probability of 1/c_orig that it relays to the monitoring client. In the case of the mediator client, it relays the transaction to the monitoring client with a probability of 1/c_med because of trickling, and it relays with a probability of 1/4 if the other mechanism is used. The probability that a specific client relays the transaction in the k-th iteration follows a geometric distribution. Let us consider the route in Fig 3 where the originator sends the message to the mediator, and the mediator then relays it further to the monitoring client. To calculate the distribution of the number of iterations for the route through the mediator client, the sum of the two random variables has to be considered. This can be derived from the discrete convolution of the above two distributions. As every ordinary client initiates 8 outgoing connections when connecting to the network, the number of connections is estimated to be 16 (taking into account the incoming connections as well). If the two routes in Fig 3 are considered independent of each other, the probability of the direct route being shorter (i.e. requiring fewer iterations) is 0.5785. Here we have not taken into account the network delay, or that multiple indirect routes, possibly consisting of more steps, can exist from the originator to the monitoring client. With this model, however, we can approximate the probabilities. The goal is to determine a time frame that the monitoring client has to wait after first receiving the message until it has surely received the message directly from the originator, if they are directly connected. If this waiting time is defined to be 2 sec, the above model gives a probability of 0.8841 for the direct route taking fewer iterations. Since the time of our data collection, the trickling mechanism has been changed in the case of the Bitcoin Core client, so that relaying can be described by Poisson processes. In this case a similar calculation could be utilized, but the waiting time would need to be adjusted to a value which maintains a high probability for the direct route. However, this does not change the further steps of our model except for the derivation of the above probabilities. To successfully relay a transaction to another client, three messages have to be exchanged. First, the sender informs the receiver about the transactions it knows about ("INV" message). Then the receiver asks for the new, unknown transactions in the answer ("GETDATA" message). Finally, the actual information is sent to the receiver ("TX" message). As three messages need to be exchanged sequentially, the delay of the network plays an important role in the message propagation.
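Before turning to the effect of network delay, the iteration-count model above can be checked numerically. The following Python sketch is a minimal reconstruction under the stated assumptions (16 connections per client, independent routes, network delay ignored); the combined mediator relay probability of 1/16 + 1/4 − (1/16)(1/4) per iteration is an assumption consistent with the trickling and 1/4-relay mechanisms described earlier. With these choices, it reproduces both quoted values.

import numpy as np

c = 16                                       # assumed connections per client (8 out + 8 in)
p_dir = 1 / c                                # per-iteration trickle probability to a given peer
p_med = 1 / c + 1 / 4 - (1 / c) * (1 / 4)    # mediator forwards by trickling or by the 1/4 relay

K = 2000                                     # truncation; tail mass beyond K iterations is negligible
k = np.arange(1, K + 1)

def geom_pmf(p):
    """P(first success in iteration k) for a geometric distribution."""
    return (1 - p) ** (k - 1) * p

pmf_x = geom_pmf(p_dir)                                    # X: direct route, values 1..K
pmf_y = np.convolve(geom_pmf(p_dir), geom_pmf(p_med))[:K]  # Y = X' + Z: indirect route, values 2..K+1
cdf_x = np.cumsum(pmf_x)                                   # cdf_x[i] = P(X <= i+1)

# P(X < Y): pmf_y[n] is P(Y = n+2), so the matching term is P(X <= n+1) = cdf_x[n].
print(f"P(direct route shorter)     = {np.sum(pmf_y * cdf_x):.4f}")       # -> 0.5785

# Waiting 2 s (= 20 iterations of 100 ms) before deciding: P(X < Y + 20).
grace = 20
idx = np.minimum(np.arange(K) + grace, K - 1)
print(f"P(direct within 2 s window) = {np.sum(pmf_y * cdf_x[idx]):.4f}")  # -> 0.8841

That the sketch recovers 0.5785 and 0.8841 suggests this is the intended reading of the two-route model; changing c or the mediator relay rule shifts both values accordingly.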
The more mediator clients are involved in the transmission of the transaction, the longer it takes for the message to get from the originator to the monitoring client. If we take into account the network delay, and that the above described "worst case scenario" (i.e. that we are connected to the originator, and an indirect route consisting of one mediator exists) is unlikely, we can neglect the probability that a message is received from an indirect route earlier than two seconds before receiving it from the originator. This assumption is experimentally verified in [8]. We call this time interval the first time segment of the transaction and denote it by t_1 = 2 sec. If the monitoring client is not connected directly to the originator, it will only receive the transaction via possibly multiple indirect routes. Nevertheless, it will be true with high probability that connected clients that do not belong to this first time segment are not the originators of the transaction. We then proceed with this assumption to estimate the probabilities of a client being the originator of the transaction based on each received transaction. As of today, this mechanism has changed, so that the relay can be described by Poisson processes. In this case a similar calculation can be utilized, except that the probability of not belonging to the first time segment can not be neglected. From the perspective of a monitoring client, the other Bitcoin clients can be classified into sets based on each transaction according to Fig 4. Some of the Bitcoin clients relay the message to it in the first time segment. These constitute a subset of the Bitcoin clients to which the monitoring client is connected at the time of the transaction. Only active Bitcoin clients are connected to the network, but not all of the clients are operating at the examined moment. Before the transaction, no information is known, thus the best estimate we can make is that each Bitcoin client has equal probability of being the originator of the transaction, resulting in a uniform probability distribution among the active clients (left side of Fig 4). After the transaction, each Bitcoin client in the first time segment can be either the real originator of the transaction or a client relaying it (via several hops). Furthermore, the real originator can also be among the rest of the network, not connected to our monitoring client. On the other hand, based on the previous arguments, we presume that clients not relaying the transaction in the first time segment are certainly not the originators of the transaction. Thus, the probability assigned to the first time segment clients increases, while the connected clients not belonging to the first time segment will have zero probability (right side of Fig 4). Still nothing is known about the clients not connected to the monitoring client, therefore their probabilities will not change. Also, clients belonging to the same subsets can not be distinguished. Let us calculate the probabilities of being the originator for clients in each set. The Roman font type notations of Fig 4 are used for the sets. The number of elements in a set is denoted by |·|. C denotes that the monitoring client is connected to the originator of the transaction, O denotes that the originator relays the message in the first time segment to the monitoring client, and F means that a randomly chosen client from the first time segment is actually the originator of the transaction.
Using these notations, we have that P(C) = |C|/|A|, as inactive clients can not be the originator of the transaction and the prior distribution is uniform over the active clients. If the monitoring client is connected to the originator, the originator is going to inform the monitoring client in the first time segment. At this time all of the first time segment clients have the same probability of being the originator: P(F|C) = 1/|F|. Let us apply the law of total probability to P(F):
P(F) = P(F|C) P(C) + P(F|¬C) P(¬C) = (1/|F|) · (|C|/|A|),
where we exploited that a client can not send any messages in the first time segment if it is not at all connected to the monitoring client: P(F|¬C) = 0. The above formula gives the probability assigned to the first time segment clients. The connected clients not belonging to the first time segment have zero probability. The rest of the active clients have the same probability of 1/|A|. We note that these probabilities still sum up to 1: the probabilities among the connected clients were "redistributed" according to whether they belong to the first time segment. So far we only considered one monitoring client. If there are more monitoring clients, the above mentioned sets are defined separately for each of them, and then the union of the corresponding sets is determined, i.e. F(tx_i) = ∪_{j=1}^{N} F_j(tx_i) and C(tx_i) = ∪_{j=1}^{N} C_j(tx_i) for N monitoring clients, where the subscripts denote the corresponding sets as observed for transaction tx_i by the j-th monitoring client. Using this method, the monitoring Bitcoin clients do not need to be synchronized in time. If time synchronization among the monitoring clients were achieved, we could further limit the set F of first time segment clients to those that broadcast the transaction within t_1 after any of our monitoring clients first received the transaction. In our experiments, achieving reliable time synchronization was not possible, so the union of sets was used as described. We note that the set of active clients at a given time (A) is not straightforward to estimate even with a large number of monitoring clients. To do that, we would need to perform an active network discovery over the peer-to-peer network of Bitcoin clients. Instead of implementing this functionality ourselves, we relied on the Bitnodes.io database [20], which provides the estimated number of active Bitcoin clients as a function of time (i.e. |A|). The actual set is not required for the calculations; only the size of the set at the time of the transactions is considered.
Step 2: Grouping the transactions belonging to the same user
The next task is to group the Bitcoin addresses according to the users who own them. After this, every transaction can be assigned to a user by looking at the source Bitcoin addresses of the transaction. To group addresses, we exploit the fact that Bitcoin addresses appearing on the input side of the same transaction typically belong to the same user. This assumption is widely employed in the literature as well [5,6,9,21], and it can be used for grouping individual Bitcoin addresses. The process is demonstrated in Fig 5. The left side of the figure shows the transactions and the input Bitcoin addresses from which the Bitcoins are sent. These Bitcoin addresses belong to the same user. When a Bitcoin address appears in different transactions (marked red and bold), all of the Bitcoin addresses can be merged and assigned to the same user. Although Bitcoin users are encouraged to generate new Bitcoin addresses after every transaction they make, so that the above grouping is less efficient [22], most users do not follow this guideline [23,24].
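This grouping heuristic amounts to computing connected components over co-spent input addresses, which is conveniently done with a union-find (disjoint-set) structure. The Python sketch below is a minimal illustration with hypothetical addresses and transactions, not the authors' implementation.

from collections import defaultdict

parent = {}   # parent[addr] points towards the representative of addr's group

def find(addr):
    """Return the group representative of addr, registering it if new."""
    parent.setdefault(addr, addr)
    while parent[addr] != addr:            # path halving keeps trees shallow
        parent[addr] = parent[parent[addr]]
        addr = parent[addr]
    return addr

def union(a, b):
    """Merge the groups containing addresses a and b."""
    parent[find(a)] = find(b)

# Hypothetical transactions, each given as its list of input addresses.
transactions = [
    ["addr1", "addr2"],    # addr1 and addr2 co-spend -> assumed same user
    ["addr2", "addr3"],    # addr2 links addr3 into that group
    ["addr4"],             # a single-input transaction of another user
]
for inputs in transactions:
    for addr in inputs:
        find(addr)                          # register singletons too
    for other in inputs[1:]:
        union(inputs[0], other)

groups = defaultdict(set)
for addr in parent:
    groups[find(addr)].add(addr)
print(list(groups.values()))  # [{'addr1', 'addr2', 'addr3'}, {'addr4'}] (order may vary)

Union-find processes the whole blockchain in near-linear time, which matters at the scale of millions of transactions reported later in the paper.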
After the grouping, the transactions belong to the user that owns their input Bitcoin address(es).
Step 3: Combining probabilities - naive Bayes classification
From the message propagation it can be determined how likely the clients are to be the originators of the transactions. So far we considered the transactions independently from each other. According to our assumptions, the transactions belonging to a single user were created by a few originator clients. This means that these transactions provide probabilities for the same set of originator clients. The originator clients can be identified more efficiently by combining the probabilities belonging to these transactions, thus obtaining a more decisive result. This can be calculated by the naive Bayes classifier method [25]. Table 1 shows the transactions (denoted by tx) created by a single user. The transactions assign probabilities to the clients (IP addresses), which indicate the likelihood that the client is the originator of the transaction. If the ratio of connected clients is small, the individual probabilities in the table are also low. The probabilities of an IP address related to the different transactions can be combined by naive Bayes classification, resulting in a row of combined probabilities. This shows how likely the IP addresses belong to the examined user. The IP addresses are divided into two classes, the "originator" and the "non-originator" class. For each transaction, there can be at most one IP address in the originator class. On the other hand, as a user can use multiple IP addresses to create Bitcoin transactions, after combining multiple transactions more than one IP address can be in the originator class in the final result.
Table 1. The transactions of a single user (tx) assign probabilities to the clients (IP addresses), which show the likelihood that the client is the originator of the transaction. P(IP_i | tx_j) denotes the probability that address IP_i created transaction tx_j.
It is assumed that the Bitcoin users can be identified by a limited number of IP addresses they use when connected to the Bitcoin network. This implies that the users do not use TOR ("The Onion Router"), proxy servers or other similar systems hiding their IP addresses. If this does not hold, i.e. the users use TOR, the probabilities would be distributed among several IP addresses, thus resulting in small final probabilities. We note that the invalidity of this assumption for some users does not result in false IP address-user pairings: only those users will be identified for whom the assumption holds. Furthermore, previous work showed that the usage of the TOR network can be prevented by an active malicious attacker by connecting to the TOR network as well and sending malformed Bitcoin messages via the TOR exit nodes [8,26]. This kind of attack would result in users being unable to connect to the Bitcoin network via TOR. In the current work, however, we limit our analysis to regular users, i.e. those who connect to the Bitcoin network using only a few IP addresses. By the application of the naive Bayes classifier (see Appendix 7 for the detailed derivation), the combined probability of an IP address (IP_i) belonging to the originator class C_o takes the standard two-class form
P(C_o | tx) = [Π_{j=1}^{m} P(IP_i | tx_j) / π^(m−1)] / [Π_{j=1}^{m} P(IP_i | tx_j) / π^(m−1) + Π_{j=1}^{m} (1 − P(IP_i | tx_j)) / (1 − π)^(m−1)], with prior π = 1/⟨|A|⟩,
where tx denotes the vector of all considered transactions, ⟨|A|⟩ is the average of the total number of active clients over the transactions, and m is the number of transactions. The number of active clients |A| varies through the transactions, as they occur at different times; thus, the average ⟨|A|⟩ of the different |A| values is used, as suggested in [27].
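A minimal Python sketch of this combination rule, assuming the standard two-class naive Bayes form reconstructed above with the uniform prior π = 1/⟨|A|⟩, is given below; log-space arithmetic keeps the products stable when many transactions are combined, and the input probabilities are hypothetical.

import math

def combine(p_list, A_mean):
    """Naive Bayes posterior that a given IP is in the originator class,
    from per-transaction probabilities p_list and the average number of
    active clients A_mean (uniform prior pi = 1/A_mean)."""
    pi = 1.0 / A_mean
    m = len(p_list)
    # log of the two unnormalized class scores
    log_orig = sum(math.log(p) for p in p_list) - (m - 1) * math.log(pi)
    log_not = sum(math.log1p(-p) for p in p_list) - (m - 1) * math.log1p(-pi)
    # logistic of the log-odds difference
    return 1.0 / (1.0 + math.exp(log_not - log_orig))

# Hypothetical example: three transactions of one user each give this IP a
# small individual probability, yet jointly they lie far above the 1/|A| prior.
print(combine([0.010, 0.012, 0.009], A_mean=5000.0))   # ~0.97

Even though each individual probability is only around 1%, it exceeds the 1/5000 prior by a factor of about fifty, so three such observations already push the combined probability above the 0.5 acceptance threshold used in the results.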
We note that the naive Bayes classification can only be applied if the transactions provide conditionally independent probabilities. Otherwise, the dependencies between the transactions should be determined [28].
Data collection
During the data collection campaign, we used our modified Bitcoin clients to connect to the network and monitor information about transactions relayed by connected clients. As the program code is open-source, it was straightforward to implement a monitoring client. Our monitoring clients logged the incoming "INV" messages along with the IP address of the sender client and the time of reception. These messages contain the 256-bit hash of each relayed transaction. Using this hash, the Bitcoin addresses, the amount of Bitcoin sent and other information of interest can then be looked up in the blockchain. In order to monitor as large a part of the Bitcoin network as possible, the modified Bitcoin clients were installed simultaneously on 140 computers located in different parts of the world, and all of these were recording the observed traffic during the campaign. Bitcoin clients behind firewalls usually do not allow incoming connections, i.e. our monitoring clients can not establish connections to them. By using a large number of monitoring clients, it is more likely that Bitcoin clients behind firewalls initiate connections to some of our monitoring clients when they enter the network. We installed the monitoring clients on computers that are part of PlanetLab, a system maintained for network communication research [29]. The data collection campaign took slightly more than two months, between 10/14/2013 and 12/20/2013. During this period 300 million records were obtained, in which 4155387 transactions and 124498 IP addresses were identified. The collected data was imported into an SQL database server. To calculate the probabilities described above, the total number of active clients needs to be determined. From the Bitnodes.io database [20] one can look up the number of active IP addresses of Bitcoin clients as a function of time.
Ethics statement
All data used in the analysis is made publicly available by the Bitcoin users, as required by the Bitcoin protocol. Collecting data at the level of network traffic possibly allows linking Bitcoin addresses to the IP addresses of Bitcoin users. No other personally identifiable information besides the IP address was collected about users, and no attempt was made to link IP addresses to actual people beyond establishing coarse-grained geographic location. In the shared data, IP addresses were replaced with random identifiers to prevent connecting the transactions with individuals based on other IP address related information.
Results
When calculating the combined probability of each IP address belonging to a specific user, the question arises of when a pairing should be accepted. As more than one IP address can be used by each user and one IP address can be used by several users, no restriction of this kind is made. A pairing is accepted if its probability is higher than 0.5. This means that the IP address of interest has at least 0.5 probability of being used by the user. Fig 6 shows the distribution of the probabilities of the accepted pairings.
It can be seen that the vast majority of the probabilities are above 0.9, and the average probability of the pairings is 95.52%, so we expect the false positive rate of the results to be low. Two peaks can be observed in the figure, one with a maximum at 0.952 and another close to 1. The first peak is due to ordinary clients that initiate a relatively small number of transactions. We speculate that the other peak corresponds to servers offering wallet services, i.e. servers that can be used by several people and therefore initiate many transactions (see below in more detail). The more transactions that can be taken into account, the higher the probability that can be assigned to a pairing. As a result, 22,363 users could be identified, and altogether 1,797 IP addresses were assigned to them. The imbalance is caused by three outstanding IP addresses to which 20,680 users are assigned. These IP addresses probably belong to Bitcoin wallet services, which can be used to create transactions on a website without using a private computer. Note that the incomplete grouping of Bitcoin addresses can also result in an IP address being associated with several groups of Bitcoin addresses; these groups actually belong to the same user, but they could not be connected by the grouping algorithm. For the remaining data, 1.14 IP addresses belong to one user on average, which reflects the fact that a user can use multiple IP addresses when connecting to the Bitcoin network. The maximum number of IP addresses identified as belonging to a single user is 8.

Calculating the balances

Examining the blockchain data alone makes it possible to investigate the time evolution of user balances before, during, and after the data collection campaign. Fig 7 shows the total balance of all identified users versus time. The time interval in which the data collection took place is marked by a shaded area. Before data collection, the amount of Bitcoin owned by the identified users increases. This is because some of the identified Bitcoin addresses were created before the beginning of the measurement campaign. After the measurement, some of the identified Bitcoin addresses were no longer used, and new, unidentified Bitcoin addresses took their place. The steep drop during the measurement is probably due to the significant increase in the exchange rate in this time interval, which likely prompted users to sell their Bitcoins for traditional currencies. We found a significant linear correlation coefficient of −0.91 between the total amount of Bitcoin owned by the identified users and the exchange rate during the measurement period. The total number of Bitcoins in use increases constantly over time. At the time of the measurement, approximately 13,500,000 Bitcoins were in circulation. The amount of Bitcoin owned by the identified users reached a maximum of 432,666 on 10/25/2013, which corresponds to approximately 3.2% of the total amount of Bitcoins. We believe that this ratio would constitute a statistically representative sample if the data had been collected by uniform random sampling. However, systematic differences could have affected the data collection, as users in different parts of the world, with different intentions and technical backgrounds, may have operated differently in the network. Users could be protected by firewalls, thus blocking incoming connections, and they could also obscure their operation by using a VPN, a proxy service, or TOR.
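The correlation quoted above is a simple Pearson coefficient between two daily series. The sketch below shows the computation on made-up values for the identified users' total balance and the exchange rate; it is purely illustrative and is not the paper's data or code.

import numpy as np

# Illustrative (synthetic) daily series over the measurement window:
balance = np.array([430_000, 425_000, 410_000, 380_000, 350_000, 320_000], dtype=float)  # BTC held by identified users
exchange_rate = np.array([180.0, 210.0, 300.0, 450.0, 700.0, 900.0])                     # USD per BTC

r = np.corrcoef(balance, exchange_rate)[0, 1]
print(f"linear correlation coefficient: {r:.2f}")  # strongly negative, cf. the reported -0.91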
Geographical distribution of Bitcoin and the cash flow

The location of IP addresses can be determined from publicly available databases such as MaxMind [30], which contain approximate locations of IP addresses. If Bitcoin users employ additional tools to hide their IP addresses, or if the IP addresses are located somewhere other than where they are registered, the database gives false location results. However, these inaccuracies are not relevant in the vast majority of cases. Fig 8 shows the distribution of the identified Bitcoin clients; the coloring represents the logarithm of the density. The identified Bitcoin clients are mostly located in the more developed regions of the world. Note that in some countries, such as Russia or China, the Internet is regulated, so some interference with the connected clients (and their messages) can occur. By localizing the IP addresses, the geographical distribution of Bitcoin can also be determined (Fig 9). This figure only shows the distribution of the Bitcoins owned by the identified clients; the coloring is logarithmic. The snapshot corresponds to the end of the data collection period, 12/20/2013. The analysis detailed in Section 4 results in a data set of transactions and identified originators. It is worth examining whether some originator addresses can be mapped to receiver Bitcoin addresses as well. There are 68,973 transactions in which both sides could be found, and altogether 196,971 Bitcoins were transferred in these identified transactions. In these transactions, 7,372 users appear as senders and 6,170 as receivers. The transactions are visualized on a world map (Fig 10); the thickness, opacity, and saturation of the arrows express the amount of Bitcoin transferred in the related transaction. Let us now look at the flow of Bitcoin between the different countries, which is illustrated in Fig 11. As the vast majority of the identified Bitcoin transactions belong to a few countries, only the ten most significant ones are shown in the figure; 87.5% of the Bitcoins in our data set were transferred between these countries. The different countries are indicated by arcs on the perimeter of the figure, and the color of each link is identical to the color of the country from which the Bitcoins were sent. A large amount of Bitcoin (24,250 Bitcoins, more than 12.3% of the total amount) is transacted internally within the United States. There are several interesting connections: the second largest flow is between Germany and Argentina (25,508 Bitcoins, 13.0% of the total amount), and there is a significant Bitcoin flow between China and the Netherlands as well.

Conclusions

In this paper we examined the problem of user identification in the Bitcoin network. While Bitcoin provides a significant level of anonymity, as Bitcoin addresses can be generated freely and without providing any form of personal identification, the requirement to announce new transactions on the peer-to-peer network opens up the possibility of linking Bitcoin addresses to the IP addresses of clients. Our main goal was to evaluate the feasibility of this procedure. We installed a modified Bitcoin client program on over a hundred computers, which recorded the propagating messages on the network that announced new transactions. Based on the information propagation properties of these messages, we developed a mathematical model using the naive Bayes classifier method to assign Bitcoin addresses to the clients that most probably control them.
As a result, Bitcoin address-IP address mappings were identified. Through the IP addresses of the clients, we could determine their geographical location, which enabled the spatial analysis of the distribution and flow of Bitcoin. The method is cheap in terms of resources, and the algorithms used are relatively easy to implement and can be combined with other Bitcoin-transaction-related information. All monitoring clients behaved as regular Bitcoin clients during the measurement. Although they did not generate any transactions, the source code can be modified to do so if better concealment is required. Furthermore, the monitoring clients do not need to be connected to other Bitcoin users in any detectable way (i.e. communication among them is trivially achieved outside the Bitcoin protocol), making it virtually impossible to reveal their monitoring activity. This raises the question of whether the Bitcoin network might already be monitored by a similar methodology. It follows that Bitcoin users should take further steps to adequately disguise their real IP addresses and preserve their anonymity.

Appendix: Derivation of the naive Bayes classifier method

The model classifies the clients into the originator and non-originator classes (C_o and C_n, respectively) based on their IP addresses and by considering m transactions. Transactions are denoted by tx = {tx_i}, i ∈ [1, m]. Consider a single IP address, and let us examine the probabilities that the different transactions assign to it. Using Bayes' theorem, the probability of belonging to the originator class is

P(IP_i ∈ C_o | tx) = P(IP_i ∈ C_o) P(tx | IP_i ∈ C_o) / P(tx),

where P(IP_i ∈ C_o) is the frequency of the C_o class (the a priori probability). By assuming that the probabilities P(tx_j | IP_i ∈ C_o) are conditionally independent, the expression simplifies to a product over the transactions. Bayes' theorem can then be applied again to each factor in the product. The first factor, P(tx), is constant (it depends only on the data), and it can be eliminated by normalizing the probabilities. Since P(IP_i ∈ C_o | tx) + P(IP_i ∈ C_n | tx) = 1 holds, we obtain

P(IP_i ∈ C_o | tx) = 1 / ( 1 + [ ∏_{j=1}^{m} P(IP_i ∈ C_n | tx_j) / P(IP_i ∈ C_n)^{m−1} ] / [ ∏_{j=1}^{m} P(IP_i ∈ C_o | tx_j) / P(IP_i ∈ C_o)^{m−1} ] ).

The expression can be simplified further. P(IP_i ∈ C_o) is the initial frequency of occurrence of the clients in the C_o class, which is 1/|A|. Although a Bitcoin client can use multiple IP addresses in the network, it is assumed that the value 1/|A| is a good approximation of this initial frequency in the vast majority of cases. The total number of active clients varies with time over the span of all considered transactions; thus, the average |Ā| of the different |A| values is used, as suggested in [27], giving

P(IP_i ∈ C_o | tx) = 1 / ( 1 + [ (1/|Ā|) / (1 − 1/|Ā|) ]^{m−1} ∏_{j=1}^{m} P(IP_i ∈ C_n | tx_j) / P(IP_i ∈ C_o | tx_j) ).

This formula brings with it a technical problem: huge numbers are multiplied together in the product, which become significantly distorted by rounding in ordinary number representations and may result in overflow. To mitigate this problem, the second term of the denominator is written in exponential form, resulting in the following practical formula:

P(IP_i ∈ C_o | tx) = 1 / ( 1 + exp[ (m−1) ln( (1/|Ā|) / (1 − 1/|Ā|) ) + Σ_{j=1}^{m} ( ln P(IP_i ∈ C_n | tx_j) − ln P(IP_i ∈ C_o | tx_j) ) ] ).

This formula enables us to combine the probabilities assigned to the IP addresses by the transactions.
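A minimal sketch of this combination rule in code is given below (not the authors' implementation). It evaluates the practical, exponential form of the formula, accumulating the exponent in log space so that neither the product of small per-transaction probabilities nor the large prior ratio causes numerical trouble; the toy probability table and the value standing in for |Ā| are invented for illustration.

import math

def combine_probabilities(per_tx_probs, avg_active_clients):
    """per_tx_probs[ip] is the list of P(IP is originator | tx_j) over one user's transactions;
    avg_active_clients plays the role of |A| averaged over the transactions."""
    prior_o = 1.0 / avg_active_clients                     # a priori P(IP in C_o)
    log_prior_ratio = math.log(prior_o / (1.0 - prior_o))  # ln[ (1/|A|) / (1 - 1/|A|) ]
    combined = {}
    for ip, probs in per_tx_probs.items():
        m = len(probs)
        # exponent of the second term of the denominator, accumulated in log space
        log_term = (m - 1) * log_prior_ratio
        log_term += sum(math.log((1.0 - p) / p) for p in probs)
        combined[ip] = 0.0 if log_term > 700 else 1.0 / (1.0 + math.exp(log_term))
    return combined

# Toy table in the spirit of Table 1: three transactions of one user, two candidate IP addresses.
table = {"10.0.0.1": [0.02, 0.03, 0.025], "10.0.0.2": [0.001, 0.001, 0.002]}
print(combine_probabilities(table, avg_active_clients=1000))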
10,151.4
2016-12-20T00:00:00.000
[ "Computer Science", "Mathematics" ]
AdS (super)projectors in three dimensions and partial masslessness We derive the transverse projection operators for fields with arbitrary integer and half-integer spin on three-dimensional anti-de Sitter space, AdS$_3$. The projectors are constructed in terms of the quadratic Casimir operators of the isometry group $\mathsf{SO}(2,2)$ of AdS$_3$. Their poles are demonstrated to correspond to (partially) massless fields. As an application, we make use of the projectors to recast the conformal and topologically massive higher-spin actions in AdS$_3$ into a manifestly gauge-invariant and factorised form. We also propose operators which isolate the component of a field that is transverse and carries a definite helicity. Such fields correspond to an irreducible representation of $\mathsf{SO}(2,2)$. Our results are then extended to the case of $\mathcal{N}=1$ AdS$_3$ supersymmetry. Introduction The spin projection operators, or transverse and traceless (TT) spin-s projectors, were first derived in four-dimensional (4d) Minkowski space M 4 by Behrends and Fronsdal [1,2]. Given a symmetric tensor field on M 4 that obeys the Klein-Gordon equation, it decomposes into a sum of constrained fields describing irreducible representations of the Poincaré group with varying spin. The Behrends-Fronsdal projectors allow one to extract the component of this decomposition corresponding to the representation with the highest spin. Many applications for the TT projectors have been found within the landscape of high energy physics. For example, they played a crucial role in the original formulation of conformal higher-spin gauge actions [3]. Since the work of [1,2], the spin projection operators have been generalised to diverse dimensions and symmetry groups. In the case of M d , the TT projectors were first derived by Segal [4] (see also [5][6][7][8]) in the bosonic case and later by Isaev and Podoinitsyn [8] for half-integer spins. In four dimensions, the projection operators in N = 1 Minkowski superspace, M 4|4 , were introduced by Salam and Strathdee [9] in the case of a scalar superfield, and by Sokatchev [10] for superfields of arbitrary rank. The superpojectors derived in [10] were formulated in terms of Casimir operators. A few years later Rittenberg and Sokatchev [11] made use of a similar method to construct the superprojectors in N -extended Minkowski superspace M 4|4N (see also [12]). An alternative powerful construction of the superprojectors in M 4|4N was given in [13,14]. 1 Recently, the superprojectors in three-dimensional N -extended Minkowski superspace, M 3|2N , were derived in Ref. [17], which built upon the earlier work of [18]. It is of interest to construct spin projection operators for fields on (anti-)de Sitter space, (A)dS. In particular, in order to describe irreducible representations of the AdS d isometry algebra, so(d − 1, 2), fields on AdS d must satisfy certain differential constraints involving the Lorentz-covariant derivative D a for AdS d . Since both dS and AdS spaces have nonvanishing curvature, the construction of the TT projectors proves to be technically challenging. However, recent progress has been made in [19,20], where the (super)projectors in AdS 4 were derived. The next logical step is to derive the TT (super)projectors in AdS d . In this work we consider the case d = 3, which serves as a starting point for this program. This paper is organised as follows. 
In section 2.1, we begin by reviewing on-shell fields in AdS 3 and the corresponding irreducible representations of so(2, 2) which they furnish. In section 2.2, we derive the spin projection operators for fields of arbitrary rank. More specifically, let us denote by V (n) the space of totally symmetric rank-n spinor fields φ α(n) := φ α 1 ...αn = φ (α 1 ...αn) on AdS 3 . For any integer n ≥ 2, we derive the rank-n spin projection operator, Π ⊥ [n] , which is defined by its action on V (n) according to the rule: For fixed n, this operator is defined by the following properties: is a projector in the sense that it squares to itself, Any operator satisfying all three of these properties may be considered to be an AdS 3 analogue of the Behrends-Fronsdal projector. 2 However, the field φ ⊥ α(n) will correspond to a reducible representation of so (2,2). In order to isolate the component describing an irreducible representation, it is necessary to bisect the projectors according to Π ⊥ [n] = P (+) [n] + P (−) [n] . The operator P (±) [n] is a helicity projector since it satisfies the properties 3 (1.2a) and (1.2b) and selects the component of φ α(n) carrying the definite value ± n 2 of helicity. They are constructed in section 2.3. In section 2.4 we make use of the orthogonal compliment of Π ⊥ [n] to decompose an unconstrained field φ α(n) into a sum of transverse fields φ ⊥ α(n−2j) where 0 ≤ j ≤ ⌊n/2⌋. We then provide an operator S ⊥ α(n−2j) which extracts the field φ ⊥ α(n−2j) from this decomposition. 2 We refer to any operator satisfying properties (1.2a), (1.2b) and (1.2c) as a spin projection operator. In section 2.2 we show that, under an additional assumption, such an operator is unique. In general, operators satisfying properties (1.2a) and (1.2b) will be called transverse projectors. The latter are sometimes referred to as TT projectors, which is a slight abuse of terminology, since in vector notation the field φ α(n) is already traceless. 3 Whilst P (±) [n] satisfies the properties (1.2a) and (1.2b), it does not satisfy (1.2c). Making use of these projection operators, we derive a number of interesting and nontrivial results. In particular, in section 2 we show that all information about (partially) massless fields is encoded in the poles of the transverse projectors. The novelty of our approach is that all projectors are derived in terms of the quadratic Casimir operators of so (2,2). This allows us to recast the AdS 3 higher-spin Cotton tensors and their corresponding conformal actions into a manifestly gauge-invariant and factorised form. Similar results are provided for new topologically massive (NTM) spin-s gauge models, which are of order 2s in derivatives, where s is a positive (half-)integer. In the case when s is an integer, it is possible to construct NTM models of order 2s − 1. In M 3 such models were recently proposed in [21], here we extend them to AdS 3 . The above results are detailed in section 2.5. Finally, in section 2.6 we study the flat limit of these results, and obtain new realisations for the spin projection operators, the helicity projectors and the conformal higher-spin actions in M 3 . In section 3, we extend some of these results to the case of N = 1 AdS 3 supersymmetry. Alongside concluding comments, new realisations of the Behrends-Fronsdal projectors in M 4 , expressed in terms of the Casimir operators of the 4d Poincaré algebra, are given in section 4. The main body is accompanied by two technical appendices. 
Appendix A summarises our conventions and notation. We review the generating function formalism in Appendix B, which is a convenient framework used in deriving the non-supersymmetric results of section 2. Our findings in this paper can be viewed as generalisations of the earlier results in AdS 4 [19,20] and AdS 3 [22], which in turn were based on the structure of (super)projectors in Minkowski (super)space [17,18]. Throughout this work we make use of the convention (1.3) Transverse projectors in AdS 3 The geometry of AdS 3 is described by the Lorentz covariant derivative, which satisfies the commutation relation Here e a m is the inverse vielbein, ω a bc is the Lorentz connection and the parameter S is related to the scalar curvature R via R = −24S 2 . The Lorentz generators with vector (M ab = −M ba ) and spinor (M αβ = M βα ) indices are defined in appendix A. In our subsequent analysis, we will make use of the quadratic Casimir operators of the AdS 3 isometry algebra so(2, 2) = sl(2, R) ⊕ sl(2, R), for which we choose (see, e.g., [23]) Here ✷ := D a D a = − 1 2 D αβ D αβ is the d'Alembert operator in AdS 3 . The operators F and Q are related to each other as follows for an arbitrary symmetric rank-n spinor field φ α(n) . The structure D α(2) D β(2) φ β(2)α(n−2) in (2.4) is not defined for the cases n = 0 and n = 1. However, it is multiplied by n(n − 1) which vanishes for these cases. On-shell fields In any irreducible representation of the AdS 3 isometry group SO(2, 2), the Casimir operators F and Q must be multiples of the identity operator. Therefore, in accordance with (2.4), one is led to consider on-shell fields of the type for some real mass parameter µ. Unitary representations of the Lie algebra so(2, 2) may be realised in terms of the onshell fields (2.5) for certain values of µ. As is well known (see, e.g., [24,25] and references therein), the irreducible unitary representations of so(2, 2) are denoted D(E 0 , s), where E 0 is the minimal energy (in units of S), s the helicity and |s| is the spin. In this paper we are interested in only those representations carrying integer or half-integer spin with |s| ≥ 1 and, consequently, the allowed values of s are s = ±1, ± 3 2 , ±2, . . . . In order for the representation D(E 0 , s) to be unitary, the inequality E 0 ≥ |s|, known as the unitarity bound, must be satisfied. The representation D(E 0 , s) ≡ D(E 0 , σ|s|), with σ := ±1, may be realised on the space of symmetric rank-n spinor fields φ α(n) satisfying the following differential constraints: Here the integer n ≥ 2 is related to s via n = 2|s|. The real parameter ρ ≥ 0, which carries mass dimension one, is called the pseudo-mass and is related to E 0 through In terms of ρ and n, the unitarity bound reads ρ ≥ n(n − 2)S. With this in mind, we will label the representations using ρ in place of E 0 , and use the notation D(ρ, σ n 2 ). The equations (2.6) were introduced in [25]. In the flat-space limit, these equations reduce to those proposed in [26,27]. The first-order equation (2.6b) is equivalent to (2.5b) with µ = σρ. 
Any field φ α(n) satisfying both constraints (2.6a) and (2.6b), is an eigenvector of the Casimir operator Q, In place of (2.6a) and (2.6b), one may instead consider tensor fields φ α(n) constrained by the equations (2.6a) and (2.8), In this case, the equation (2.4) becomes It follows that such a φ α(n) furnishes the reducible representation It may be shown that when the pseudo-mass takes on any of the special values ρ ≡ ρ (t,n) = n(n − 2t)S , then the representation D(ρ, σ n 2 ), with either sign for σ, shortens. At the field-theoretic level, this is manifested by the appearance of a depth-t gauge symmetry under which the system of equations (2.6), with ρ given by (2.12) and σ arbitrary, is invariant. 4 A field which satisfies the constraints (2.9a) and (2.8), and has pseudo-mass (2.12), will be said to be partially-massless with depth t and denoted by φ where the parameters τ (t,n) are known as the partially massless values. For t > 1, the pseudo-mass ρ (t,n) , eq. (2.12), violates the unitarity bound and hence the partially massless representations are non-unitary. Spin projection operators Given a tensor field φ α(n) on AdS 3 , the spin projection operator Π ⊥ [n] with the defining properties (1.2), selects the component φ ⊥ α(n) of φ α(n) which is transverse. If, in addition, φ α(n) satisfies the second order mass-shell equation (2.8), then Π ⊥ [n] φ α(n) furnishes the reducible representation D(ρ, − n 2 ) ⊕ D(ρ, n 2 ) of so (2,2). In this section we derive the spin projection operators Π ⊥ [n] . For this purpose it is convenient to make use of the generating function formalism, which is described in appendix B. In this framework, the properties (1.2a) and (1.2b) take the following form: It is necessary to separately analyse the cases with n even and n odd. Bosonic case We will begin by studying the bosonic case, n = 2s, for integer s ≥ 1. Let us introduce the differential operator T [2s] of order 2s in derivatives 6 Here τ (t,n) denotes the partially massless values (2.14). We refer the reader to appendix B for an explanation of the other notation. Given an arbitrary field φ (2s) ∈ V (2s) , using (B.3b) one may show that this operator maps it to a transverse field Partially massless fields have been studied in diverse dimensions for over 35 years, see e.g. [28][29][30][31][32] for some of the earlier works. 6 When the upper bound in a product is less than the lower bound, we define the result to be unity. However, it is not a projector on V (2s) since it does not square to itself, To prove this identity, we observe that only the j = s term of the sum in (2.16) survives when T [2s] acts on a transverse field such as T [2s] φ (2s) . To obtain a projector, we define the following dimensionless operator On V (2s) it inherits its transversality from T [2s] , and is idempotent by virtue of (2.18). In a fashion similar to the proof of (2.18), it may also be shown that Π ⊥ [2s] acts as the identity on the space of rank-(2s) transverse fields. Thus, Π ⊥ [2s] satisfies the properties (1.2) and is hence the spin projection operator on V (2s) . Making the indices explicit, the latter reads It is possible to construct a spin projection operator solely in terms of the two quadratic Casimir operators (2.3). To this end, we introduce the operator Let us show that (2.21) satisfies the three defining properties (1.2) on V (2s) . 
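For reference, the three defining properties (1.2) used in this proof can be summarised as follows. This is a hedged reconstruction from the surrounding discussion (idempotence, transversality, and the identity action on transverse fields); the index conventions are assumed rather than copied from the original display.

% Hedged reconstruction of properties (1.2a)-(1.2c); index conventions assumed.
\begin{align}
\Pi^{\perp}_{[n]}\,\Pi^{\perp}_{[n]}\,\phi_{\alpha(n)} &= \Pi^{\perp}_{[n]}\,\phi_{\alpha(n)}\,, \tag{1.2a}\\
D^{\beta\gamma}\big(\Pi^{\perp}_{[n]}\phi\big)_{\beta\gamma\alpha(n-2)} &= 0\,, \tag{1.2b}\\
\Pi^{\perp}_{[n]}\,\psi_{\alpha(n)} &= \psi_{\alpha(n)}
\quad \text{for every transverse } \psi_{\alpha(n)}\,,\ D^{\beta\gamma}\psi_{\beta\gamma\alpha(n-2)} = 0\,. \tag{1.2c}
\end{align}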
Given an arbitrary transverse field ψ α(2s) , D (−2) ψ (2s) = 0, using (2.4) one may show that It follows that Π ⊥ [2s] acts as the identity on the space of transverse fields, Next, the image of any unconstrained field φ (2s) under Π ⊥ [2s] is transverse, which follows elegantly from (B.3c) Finally, using (2.23) and (2.24) one can show that Π ⊥ [2s] squares to itself Next, we perform the same operation but in the opposite order, , it follows that on V (2s) the two are equal to one another, So far our analysis of the spin projection operators Π ⊥ [2s] and Π ⊥ [2s] has been restricted to the linear space V (2s) . However, for fixed s, the operator Π ⊥ [2s] given by eq. (2.21) is also defined to act on the linear spaces V (2s ′ ) with s ′ < s. In fact, making use of (2.4) and (B.3c), it is possible to show that the following holds true This important identity states that Π ⊥ [2s] annihilates any lower-rank field φ α(2s ′ ) ∈ V (2s ′ ) . It should be mentioned that Π ⊥ [2s] does not annihilate lower-rank fermionic fields φ α(2s ′ +1) . When acting on V (2s ′ ) , the two operators Π ⊥ [2s] and Π ⊥ [2s] are no longer equal to each other, and in particular Π ⊥ [2s] φ (2s ′ ) = 0. It is for this reason that we will continue to use different notation for the two operators. Fermionic case We now turn our attention to the fermionic case, n = 2s + 1, for integers s ≥ 1. Let us introduce the differential operator T [2s+1] of order 2s in derivatives Here τ (t,n) are the partially massless values (2.14). The operator However, this operator does not square to itself on V (2s+1) As a result, one can immediately define the dimensionless operator which is a transverse projector by construction. Following a derivation similar to that of (2.32), it can be shown that the operator Π ⊥ [2s+1] acts like the identity on the space of transverse fields. Hence, the operator Π ⊥ [2s+1] satisfies properties (1.2), and is thus a spin projection operator on V (2s+1) . Converting (2.33) to spinor notation yields As in the bosonic case, one can construct a fermionic projector purely in terms of the quadratic Casimir operators (2.3). Let us introduce the operator We wish to show that (2.35) indeed satisfies the properties (1.2) on V (2s+1) . Given an arbitrary transverse field ψ (2s+1) , using (2.4) one can derive the identity It follows that Π ⊥ [2s+1] acts like the identity on the space of transverse fields By making use of (B.3c), one can show that the operator Π ⊥ [2s+1] maps φ (2s+1) to a transverse field , and can thus be classified as a spin projector on AdS 3 . In a similar fashion to the bosonic case, it may be shown that Π Stepping away from V (2s+1) , one can show that for fixed s, the projector Π ⊥ [2s+1] annihilates any lower-rank field φ (2s ′ +1) ∈ V (2s ′ +1) The two operators Π ⊥ [2s+1] and Π ⊥ [2s+1] are not equivalent on V (2s ′ +1) . We remark that Π ⊥ [2s+1] does not annihilate lower-rank bosonic fields φ α(2s ′ +2) . It follows from (2.35) that the poles of Π ⊥ [2s+1] correspond to the partially massless values τ (j,2s+1) defined by (2.14). An important property of the projectors (2.21) and (2.35) is that they are symmetric operators, that is for arbitrary well-behaved fields ψ α(n) and φ α(n) . Helicity projectors As previously mentioned, given a rank-n field φ α(n) satisfying the mass-shell equation (2.8), its projection Π ⊥ [n] φ α(n) furnishes the reducible representation D(ρ, − n 2 ) ⊕ D(ρ, n 2 ). In particular, representations with both signs of helicity ± n 2 appear in this decomposition. 
In order to isolate the component of φ α(n) describing an irreducible representation of so(2, 2), it is necessary to split the spin projection operators Π ⊥ [n] according to Each of the helicity projectors P (±) [n] should satisfy (1.2a) and (1.2b). In addition, they should project out the component of φ α(n) carrying a single value of helicity. The last two requirements are equivalent to the equations . It is not difficult to show that the following operators satisfy these requirements are the spin projectors written in terms of Casimir operators, and are given by (2.21) and (2.35) in the bosonic and fermionic cases respectively. Of course, on V (n) , one could instead choose to represent the latter in their alternate form (2.19) and (2.33). Using the defining features of Π ⊥ [n] , it can be shown that the operators P (+) [n] and P (−) [n] are orthogonal projectors when restricted to V (n) : It is also clear that (2.45) projects onto the transverse subspace of V (n) -it inherits this property from Π [n] . Moreover, the off-shell field φ α(n) is on the mass-shell, eq. (2.8), then (2.47) reduces to (2.44b). , and form an orthogonal set of projectors Longitudinal projectors and lower-spin extractors Moreover, it can be shown that Π [n] projects a field φ α(n) onto its longitudinal component. A rank-n field ψ α(n) is said to be longitudinal if there exists a rank-(n − 2) field ψ α(n−2) such that ψ α(n) may be expressed as ψ α(n) = D α(2) ψ α(n−2) . Such fields are also sometimes referred to as being pure gauge. Therefore, we find that Using the fact that Π ⊥ [n] and Π [n] resolve the identity, one can decompose an arbitrary field φ α(n) as follows Here φ ⊥ α(n) is transverse and φ α(n−2) is unconstrained. Repeating this process iteratively, we obtain the following decomposition Here each of the fields φ ⊥ α(n−2j) are transverse, except of course φ ⊥ and φ ⊥ α . We note that, using (2.43), one may take the decomposition (2.52) a step further and bisect each term into irreducible components which are transverse and have positive or negative helicity, Making use of the projectors (2.21) and (2.35) and their corresponding properties, one can construct operators which extract the component φ ⊥ α(n−2j) from the decomposition (2.52), where 1 ≤ j ≤ ⌊n/2⌋. In particular, we find that the spin 1 2 (n − 2j) component may be extracted via where we have defined Therefore it is appropriate to call S ⊥ [n−2j] the transverse spin 1 2 (n − 2j) extractor. It is not a projector, since it is dimensionful and reduces the rank of the field on which it acts. Let ψ α(n) be some longitudinal field, ψ α(n) = D α(2) ζ α(n−2) . We do not assume it to be in the image of Π [n] . However, since Π ⊥ [n] commutes with D α (2) and annihilates all lower-rank fields, eq. (2.29), it follows that it also annihilates any rank-n longitudinal field 7 As a consequence, given two integers m, n satisfying 2 ≤ m ≤ n, it immediately follows that Π [n] acts as the identity operator on the space of rank-m longitudinal fields ψ α(m) , with s a non-negative integer. These properties will be useful in section 2.5. Linearised higher-spin Cotton tensors Further applications of spin projection operators can be found in modern conformal higher-spin theories. In particular, we will show that the spin projectors can be used to obtain new realisations of the linearised higher-spin Cotton tensors, which were recently derived in [22]. 
For integer n ≥ 2, the higher-spin bosonic and fermionic Cotton tensors The Cotton tensors are primary descendents of the conformal gauge field h α(n) , which is a real field defined modulo gauge transformations of the form for some real unconstrained gauge parameter ζ α(n−2) . The Cotton tensors (2.61) are characterised by the properties: Making use of the bosonic (2.19) and fermionic (2.33) spin projectors Π ⊥ [n] , we see that the higher-spin Cotton tensors (2.61) can be recast into the simple form: The identity F D s (−2) φ α(2s) = 0 proves useful in deriving (2.64a). In the flat space limit, S → 0, (2.64) coincides with the closed form expressions of C α(n) (h) given in [39,40]. 8 Moreover, we can make use of the equivalent family of projectors Π ⊥ [n] to recast C α(n) (h) purely in terms of the quadratic Casimir operators (2.3). Explicitly, they read (2.65b) There are many advantages to expressing the Cotton tensors in terms of spin projection operators. Firstly, in both (2.64) and (2.65), the properties of (i) transversality (2.63a) and (ii) gauge invariance (2.63b) are manifest, as a consequence of the projector properties (1.2b) and (2.57) respectively. Using this gauge freedom, one may impose the transverse gauge condition on h α(n) , On account of (1.2c), in this gauge the Cotton tensors become manifestly factorised into products of second order differential operators involving all partial masses, This property was observed in [22] without the use of projectors. An interesting feature of the new realisation (2.65), which was not observed in [22], is that the Cotton tensors are manifestly factorised in terms of second-order differential operators without having to enter the transverse gauge. By virtue of the above observations, it follows that the conformal higher-spin action [43,44] S (n) is manifestly gauge invariant and factorised when C α(n) (h) is expressed as in (2.65). Analogous factorised expressions can be given for the so-called new topologically massive (NTM) models. For bosonic fields they were first introduced in [45] in Minkowski space. Extensions of these models to fields with half-integer spin were proposed in [43], where their generalisations to an AdS background were also given. These models are formulated solely in terms of the gauge prepotentials h α(n) and the associated Cotton tensors C α(n) (h). Given an integer n ≥ 2, the gauge-invariant NTM action for the field h α(n) given in [43] is By analysing (2.70), it can be shown that on-shell, the action (2.69) describes a propagating mode with pseudo-mass ρ, spin n/2 and helicity σn/2 given ρ = ρ (t,2s) . For the case ρ = ρ (t,2s) , the model describes only pure gauge degrees of freedom. Recently, a new variant of the NTM model for bosonic fields in M 3 was proposed in [21]. This model also does not require auxilliary fields, but is of order 2s − 1 in derivatives, whereas those given in [45] are of order 2s. Given an integer s ≥ 1, the actions of [21] may be readily extended to AdS 3 as follows Results in Minkowski space In this section we study the flat-space limit of various results derived in section 2. Of particular interest are the transverse projectors which are constructed in terms of the Casimir operators of so(2, 2). In this limit we obtain novel realisations for the transverse projectors on M 3 which did not appear in [8,18]. 
They are expressed in terms of the quadratic Casimir operators of the three dimensional Poincaré algebra iso(2, 1), Here ∂ αβ are the partial derivatives of M 3 and W is the Pauli-Lubanski pseudo-scalar. We recall that an irreducible representation of iso(2, 1) with mass ρ and helicity σn/2 may be realised on the space of totally symmetric rank-n spinor fields φ α(n) satisfying the differential equations where σ = ±1. These equations are equivalent to those given in [26,27]. We are concerned only with representations carrying (half-)integer spin. • The transverse spin 1 2 (n − 2j) extractors (2.55), where 1 ≤ j ≤ ⌊n/2⌋, are given by • The new realisations for the higher-spin Cotton tensors (2.65) become (2.81b) It may be shown that each of these expressions are equivalent to the corresponding ones given in [18], except for the lower-spin extractors, which were not discussed in [18]. Transverse superprojectors in AdS 3|2 In this section, we derive the superprojectors in N = 1 AdS superspace, AdS 3|2 , and explore several of their applications. We remind the reader that AdS 3|2 is the maximally supersymmetric solution of three-dimensional N = 1 AdS supergravity [14]. We begin by reviewing the geometric structure of AdS 3|2 , as presented in [46], which is described in terms of its covariant derivatives 9 Here E A M is the inverse supervielbein and Ω A bc the Lorentz connection. The covariant derivatives obey the following (anti-)commutation relations 10 where S = 0 is a real constant parameter which determines the curvature of AdS 3|2 . We list several identities which prove indispensable for calculations: where we have denoted D 2 = D α D α . These relations can be derived from the algebra of covariant derivatives (3.2). Crucial to our analysis are two independent Casimir operators of the N = 1 AdS 3 isometry supergroup OSp(1|2; R) × SL(2, R). They are [22,43] Making use of the identity allows us to express Q in terms of the d'Alembert operator ✷ = D a D a . The operators Q and F are related to each other as follows F 2 Φ α(n) = (2n + 1) 2 Q + (2n + 1)(2n 2 + 2n − 1)iSD 2 + 4n 2 (n + 2) 2 S 2 Φ α(n) 9 In the hope that no confusion arises, we use the same notation for the vector covariant derivative in AdS 3 and in AdS 3|2 . 10 In vector notation, the commutation relations (3.2b) take the form [D a , D β ] = S(γ a ) β γ D γ and for an arbitrary symmetric rank-n spinor superfield Φ α(n) . On-shell superfields We begin by reviewing aspects of on-shell superfields in AdS 3|2 , as presented in [22]. Given an integer n ≥ 1, the real symmetric superfield Φ α(n) is said to be on-shell if it satisfies the two constraints where σ := ±1 and M ≥ 0 is a real parameter of unit mass dimension. Such a field furnishes an irreducible representation of the N = 1 AdS 3 superalgebra osp(1|2; R) ⊕ sl(2, R), which we denote as S(M, σ n 2 ). It can be shown that the representation S(M, σ n 2 ) decomposes into two irreducible representations of so(2, 2), Here, the pseudo-masses are given by ρ A = n 2n + 1 σM − (n + 2)S , ρ B = n + 1 2n + 1 σM + (n − 1)S , (3.9) and the corresponding signs of the superhelicities are The representation S(M, σ n 2 ) is unitary if the parameter M obeys the unitarity bound M ≥ 2(n − 1)(n + 1)S. This bound ensures that both representations appearing in the decomposition (3.8) are unitary. A superfield satisfying the first condition (3.7a) is said to be transverse. 
Any transverse superfield may be shown to satisfy the following relation If a transverse superfield also satisfies (3.7b), we say that it carries pseudo-mass M, superspin n/2 and superhelicity 1 2 (n + 1 2 )σ. From (3.11) it follows that an on-shell superfield (3.7) satisfies σM + 2n(n + 2)S Φ α(n) , (3.12) and hence the second-order mass-shell equation λ 2 := 1 (2n + 1) 2 σM + 2n(n + 2)S σM + 2(n − 1)(n + 1)S . (3.13b) The equations (3.7a) and (3.12) were introduced in [47]. On the other hand, one may instead consider a superfield Φ α(n) satisfying (3.7a) and (3.13a). In this case, using the identity (3.6), one can show that (3.13a) becomes where we have defined σ (±) = sgn(M (±) ) and It follows that such a field furnishes the reducible representation (3. 16) In AdS 3|2 there exist two distinct types of on-shell partially massless superfields [22], which are distinguished by the sign σ of their superhelicity. More specifically, they are described by an on-shell superfield (3.7) whose pseudo-mass and parameter σ assume the special combinations The integer t is called the (super)depth and the corresponding supermultiplets are denoted by Φ (t,+) α(n) and Φ (t,−) α(n) respectively. Their second order equations (3.13) take the form where we have introduced the partially massless values The gauge symmetry associated with positive and negative superhelicity partially massless superfields of depth-t is In particular, the system of equations (3.7) and (3.17) is invariant under these transformations for an on-shell real gauge parameter. Superspin projection operators We wish to find supersymmetric generalisations of the spin projection operators in AdS 3 which were computed in section 2. More precisely, let us denote by V (n) the space of totally symmetric rank-n superfields Φ α(n) on AdS 3|2 . For any integer n ≥ 1, we define the rank-n superspin projection operator 11 Π ⊥ [n] to act on V (n) by the rule which satisfies the following properties: 3. Every transverse superfield Ψ α(n) belongs to the image of Π ⊥ [n] , In other words, the superprojector Π ⊥ [n] maps Φ α(n) to a supermultiplet with the properties of a conserved supercurrent. To obtain a superprojector, we introduce the operator ∆ α β [43] ∆ α β := − and its corresponding extensions [22] ∆ α Note that for the case j = 1, (3.24) coincides with (3.23). It can be shown that the operator (3.24) has the following properties for arbitrary integers j and k. Let us define the operator T [n] , which acts on V (n) by the rule This operator maps Φ α(n) to a transverse superfield To see this, one needs to open the symmetrisation in (3.26) By making use of (3.25b), it can be shown that the remaining (n! − 1) terms can be expressed in the same form as the first. Then transversality follows immediately as a consequence of property (3.23). However, T [n] does not square to itself on V (n) where M (±) (t,n) denotes the pseudo-masses associated with a partially massless superfield (3.17). We can immediately introduce the dimensionless operator which is idempotent and transverse by construction. In addition, it can be shown that the operator Π ⊥ [n] acts as the identity on the space of transverse superfields (3.22c). Hence, Π ⊥ [n] satisfies properties (3.22) and can be identified as a rank-n superprojector on AdS 3|2 . An alternative form of the superprojector Π ⊥ [n] can be derived, which instead makes contact with the Casimir operator Q. 
Let us introduce the dimensionless operator (3.32) In the flat superspace limit, Π ⊥ [n] coincides with the superprojector derived in [17]. Making use of the properties of Π ⊥ [n] and the identity where Ψ α(n) is an arbitrary transverse superfield, it can be shown that Π ⊥ [n] Φ α(n) satisfies properties (3.22) and is also a superprojector on AdS 3|2 . Using an analogous proof employed to show the coincidence of the two bosonic projectors in section 2.2, it can be shown that Π ⊥ [n] and Π ⊥ [n] are indeed equivalent. So far, we have been unable to obtain an expression for Π ⊥ [n] which is purely in terms of the Casmir operators F and Q. We recall that in the non-supersymmetric case, one starts with a field φ α(n) lying on the mass-shell (2.9b) and its projection Π ⊥ [n] φ α(n) furnishes the reducible representation (2.11). A single irreducible representation from the decomposition (2.11) can be singled out via application of the helicity projectors (2.45). The significance of the condition (2.9b) is that it allows one to resolve the poles in both types of projectors. In the supersymmetric case, the equation analogous to (2.9b) which Φ α(n) should satisfy is (3.13a). Upon application of Π ⊥ [n] on such a Φ α(n) , one obtains the reducible representation (3.16). However, it appears that the imposition of (3.13a) does not allow one to resolve the poles of the superprojector in either of the forms (3.30) or (3.31). Therefore, rather then imposing (3.13a), one must start with a superfield Φ α(n) obeying the first-order constraint (3.7b), which does allow for resolution of the poles. In this case, after application of Π ⊥ [n] , the superfield Φ α(n) already corresponds to an irreducible representation with fixed superhelicity, relinquishing the need for superhelicity projectors. Thus, it suffices to provide only the superspin projection operators Π ⊥ [n] . Longitudinal projectors For n ≥ 1, let us define the orthogonal compliment of Π ⊥ [n] acting on Φ α(n) by the rule It can be shown that Π [n] extracts the longitudinal component of a superfield Φ α(n) . A rank-n superfield Ψ α(n) is said to be longitudinal if there exists a rank-(n − 1) superfield Ψ α(n−1) such that Ψ α(n) can be expressed as Ψ α(n) = i n D α Ψ α(n−1) . Thus, we find for some unconstrained real superfield Φ α(n−1) . In order to see this, it proves beneficial to make use of the superprojector Π ⊥ [n] , and express the operator ∆ β [j] α in the form Using the fact that the Π [n] and Π ⊥ [n] resolve the identity, it follows that one can decompose any superfield Φ α(n) in the following manner Here, Φ ⊥ α(n) is transverse and Φ α(n−1) is unconstrained. Repeating this prescription iteratively yields the decomposition Here, the real superfields Φ ⊥ α(n−2j) and Φ ⊥ α(n−2j−1) are transverse, except for Φ ⊥ . It can be shown that the superprojector Π ⊥ [n] annihilates any longitudinal superfield. Indeed, let us consider the action of Π ⊥ [n] on a superfield Ψ α(n) = i n D α Λ α(n−1) . Opening the symmetrisation present in Π ⊥ [n] gives Note that we have made use of the identity (3.25a) to rearrange the operators ∆ β [j] α . Making use of the relation (3.25c) allows us to express the other (n! − 1) permutations in the same form as the first. 
Then due to the property (3.23), it follows that Consequently, the operator Π [n] acts as unity on the space of rank-n longitudinal super- Linearised higher-spin super-Cotton tensors In this section, we make use of the rank-n superprojector to study the properties of superconformal higher-spin (SCHS) theories. In particular, we will make use of Π ⊥ [n] to construct the higher-spin super-Cotton tensors in AdS 3|2 , which were recently derived in [22]. The super-Cotton tensors W α(n) (H) were shown to take the explicit form which is a real primary descendent of the SCHS superfield H α(n) . The latter is defined modulo gauge transformations of the form The superprojectors (3.30) can be used to recast the super-Cotton tensors (3.43) in the simple form where M (±) (t,n) denotes the partial pseudo-masses (3.17). In the flat superspace limit, S → 0, the super-Cotton tensor (3.46) reduces to those given in [39,48]. Expressing W α(n) (H) in the form (3.46) is beneficial for the following reasons: (i) transversality of W α(n) (H) is manifest on account of property (3.27); (ii) gauge invariance is also manifest as a consequence of (3.41); and (iii) in the transverse gauge it follows from (3.22c) that W α(n) (H) factorises as follows From the above observations, it follows that the action [43,44] for the superconformal higher-spin prepotential H α(n) Conclusion Given a maximally symmetric spacetime, the unitary irreducible representations of its isometry algebra may be realised on the space of tensor fields satisfying certain differential constraints. The purpose of a spin projection operator is to take an unconstrained field, which describes a multiplet of irreducible representations, and return the component corresponding to the irreducible representation with maximal spin. 12 In this paper we have derived the spin projection operators for fields of arbitrary rank on AdS 3 space and their extensions to N = 1 AdS superspace. We leave generalisations of our results to the (p, q) AdS superspaces [46] with N = p + q > 1 for future work. Making use of the (super)spin projection operators, we obtained new representations for the linearised higher-spin (super)Cotton tensors and the corresponding (super)conformal actions in AdS 3 . The significance of these new realisations is that the following properties are each made manifest: (i) gauge invariance; (ii) transversality; and (iii) factorisation. We also show that the poles of the (super)projectors are intimately related to partially massless (super)fields. This property was first established in the case of AdS 4 (super)space in [19,20], and appears to be a universal feature of the (super)projectors. It would be interesting to verify this in the case of AdS d with d > 4. As compared with previous approaches in AdS 4 (super)space [19,20], a novel feature of the spin projectors derived here is that they are formulated entirely in terms of Casimir operators of the AdS 3 algebra. 13 Studying their zero curvature limit has allowed us to obtain new realisations of the spin projection operators in 3d Minkowski space in terms of only the Pauli-Lubanski scalar and the momentum squared operator. This idea may be straightforwardly applied to the case of 4d Minkowski space to derive new realisations of the Behrends-Fronsdal projectors. In particular, let us define the square of the Pauli-Lubankski vector, On the field φ α(m)α(n) of Lorentz type ( m 2 , n 2 ), it may be shown that W 2 assumes the form (see, e.g. [49]) where we have defined s := 1 2 (m + n). 
On any transverse field ψ α(m)α(n) this reduces to W 2 − s(s + 1)✷ ψ α(m)α(n) = 0. It is possible to express the Behrends-Fronsdal spin projection operators Π ⊥ (m,n) solely in terms of the Casimir operators W 2 and ✷ of the 4d Poincaré algebra as follows 14 (4.3b) The operators Π ⊥ (m,n) satisfy the four dimensional analogues of the properties (1.2). In a similar fashion, it should be possible to obtain new realisations for the AdS 4 spin projection operators of [20] in terms of the Casimir operators of the algebra so (3,2). In 13 We were not able to obtain expressions for the superspin projection operators in AdS 3|2 which involve only Casimir operators. 14 These expressions may be easily converted to vector or four component notation. Note added in proof: When m = n = s, the spin projection operator (4.3) takes the form In this case, it may be shown that Π ⊥ (s) annihilates any field φ α(s ′ )α(s ′ ) of lower rank: Let us comment on the implications of (4.7) on fields with vectorial indices. Consider a field h a 1 ...as which is totally symmetric in its vector indices and has a non-zero trace h a 1 ...as = h (a 1 ...as) ≡ h a(s) , η bc h bca(s−2) = 0 , (4.8) where η ab = diag(−1, 1, 1, 1). Upon converting to 4d two component spinor notation, see e.g. [49] for the details, h a(s) decomposes into irreducible SL(2, C) fields as follows h α 1α1 ,...,αsαs := (σ a 1 ) α 1α1 · · · (σ as ) αsαs h a 1 ...as = h α(s)α(s) + · · · . (4.9) Here h α(s)α(s) is associated with the traceless part of h a(s) , whilst the + · · · represent lower-rank fields h α(s ′ )α(s ′ ) associated with the trace of h a(s) . From (4.7) it follows that the operator Π ⊥ Therefore, the spin-s projection operator (4.6) is a TT projector when acting on a rank-s field which is symmetric and traceful in its vectorial indices. Similar conclusions hold in the three dimensional case. This is because the spin projection operators (2.21) and (2.35) in AdS 3 (and hence also those in M 3 given by eq. (2.77)) satisfy a property analogous to (4.7), as pointed out in eqs. (2.29) and (2.41 A Notation and conventions We follow the notation and conventions adopted in [50]. In particular, the Minkowski metric is η ab = diag(−1, 1, 1). The spinor indices are raised and lowered using the SL(2, R) invariant tensors ε αβ = 0 −1 1 0 , ε αβ = 0 1 −1 0 , ε αγ ε γβ = δ α β (A.1) by the standard rule: We make use of real gamma-matrices, γ a := (γ a ) α β , which obey the algebra γ a γ b = η ab ½ + ε abc γ c , where the Levi-Civita tensor is normalised as ε 012 = −ε 012 = 1. Given a three-vector V a , it can be equivalently described by a symmetric second-rank spinor V αβ defined as Any antisymmetric tensor F ab = −F ba is Hodge-dual to a three-vector F a , specifically F a = 1 2 ε abc F bc , F ab = −ε abc F c . (A.5) Then, the symmetric spinor F αβ = F βα , which is associated with F a , can equivalently be defined in terms of F ab : F αβ := (γ a ) αβ F a = 1 2 (γ a ) αβ ε abc F bc . (A.6) These three algebraic objects, F a , F ab and F αβ , are in one-to-one correspondence to each other, F a ↔ F ab ↔ F αβ . The corresponding inner products are related to each other as follows: The Lorentz generators with two vector indices (M ab = −M ba ), one vector index (M a ) and two spinor indices (M αβ = M βα ) are related to each other by the rules: M a = 1 2 ε abc M bc and M αβ = (γ a ) αβ M a . 
These generators act on a vector V c and a spinor Ψ γ as follows: The following identities hold: B Generating function formalism We employ the generating function formalism which was developed in [22]. Within this framework, a one-to-one correspondence between a homogenous polynomial φ (n) (Υ) of degree n and a rank-n spinor field φ α(n) is established via the rule φ (n) (Υ) := Υ α 1 · · · Υ αn φ α(n) . (B.1) Here, we have introduced the commuting real auxiliary variables Υ α , which are inert under the action of the Lorentz generators M αβ . Making use of the auxiliary fields Υ α , and their corresponding partial derivatives, ∂ β := ∂ ∂Υ β , we can realise the AdS 3 derivatives as index-free operators on the space of homogenous polynomials of degree n. We introduce the differential operators which increase and decrease the degree of homogeniety by 2, 0 and −2 respectively: Note that the action of D (0) is equivalent to that of the Casimir operator F .
10,563
2021-07-26T00:00:00.000
[ "Mathematics" ]
Assessing Multi-Site rs-fMRI-Based Connectomic Harmonization Using Information Theory

Several harmonization techniques have recently been proposed for connectomes/networks derived from resting-state functional magnetic resonance imaging (rs-fMRI) acquired at multiple sites. These techniques aim to mitigate site-specific biases that complicate subsequent analysis and therefore compromise the quality of the results when such images are analyzed together. Thus, harmonization is indispensable when large cohorts are required in which the data obtained must be independent of the particular condition of each scanner, its make and model, its calibration, and other features or artifacts that may affect the significance of the acquisition. To date, no assessment of the actual efficacy of these harmonization techniques has been proposed. In this work, we apply recently introduced Information Theory tools to analyze the effectiveness of these techniques, developing a methodology that allows us to compare different harmonization models. We demonstrate the usefulness of this methodology by applying it to some of the most widespread harmonization frameworks and datasets. As a result, we are able to show that some of these techniques are indeed ineffective, since the acquisition site can still be determined from the fMRI data after the processing.

Introduction

Magnetic resonance imaging (MRI) is an imaging modality that allows, among other things, the monitoring and sensing of neuronal activity in the brain. Several factors are expanding the range of applications of MRI: it is innocuous and non-invasive, scanners are steadily becoming less expensive while gaining capabilities, knowledge of the human brain's anatomy and physiology is becoming more precise, and new analysis techniques are able to extract subtle, latent information that is fundamental to several research and clinical purposes. In particular, resting-state functional magnetic resonance imaging (rs-fMRI) is able to capture interactions between brain regions that, in turn, can be used to evaluate several biomarkers of interest. Currently, rs-fMRI acquisitions make it possible to thoroughly study several aspects of human brain function, both in healthy subjects and in those diagnosed with neurological or even psychiatric conditions, including Alzheimer's disease [1,2], schizophrenia [3,4], and autism spectrum disorder [5,6]. In order to carry out these studies correctly and obtain adequate statistical significance, a high number of acquisitions is needed, which is not always possible at a single site and session. Therefore, the number of studies using images acquired at multiple sites has been increasing over the years [7][8][9][10]. This speeds up data collection and analysis, and increasing the sample size naturally leads to greater predictive power and more sophisticated studies. Multi-site acquisition is also critically important.

Figure: Overview of our technique to analyze, with the help of Information Theory, the effectiveness of harmonization techniques for multi-site acquired data. From a set of multi-site fMRI data, we generate the phase interaction matrix and, from there, assess in the Shannon-Fisher plane the quality of the different harmonization techniques.

Datasets

In this section, we describe the different datasets used for testing.
Since the aim is to compare the sites within each multi-site dataset, it does not matter that each downloaded dataset had different processing steps already applied to it. Further, it is worth noting that three of the datasets are given as sets of time series (IMPAC, ABIDE, and ADHD-200), while one is given as a set of connectivity matrices of Pearson correlation values (SRPBS). See below.

ABIDE

ABIDE [41] is a publicly accessible repository of 20-site rs-fMRI data from 17 different international institutions. The dataset is composed of scans of 539 individuals diagnosed with autism spectrum disorder and 573 healthy control individuals. However, because some data had download problems, the total number of subjects was reduced from 1112 to 884. The image acquisition parameters can be found at https://fcon_1000.projects.nitrc.org/indi/abide/ (accessed on 15 April 2022). The data were preprocessed using the standard DPARSF pipeline (Data Processing Assistant for Resting-State fMRI Toolbox), which is based on Statistical Parametric Mapping (SPM). Finally, the time series were extracted following the definitions of seven different atlases, including the Talairach atlas.

ADHD-200

As part of the International Neuroimaging Datasharing Initiative (INDI), the ADHD-200 dataset [42] is a collaboration of 8 imaging sites, composed of neuroimaging data from 362 children and adolescents diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) and 585 typically developing controls (947 subjects in total). As some scans had download problems or missing data, the number of subjects used in this work was reduced to a total of 768. The data were preprocessed using the Athena pipeline, which is based on tools from the AFNI and FSL software packages. The time series extracted from the images are based on six different atlases: Talairach-Tournoux (97 ROIs), Harvard-Oxford (111 ROIs), AAL (116 ROIs), Eickhoff-Zilles (116 ROIs), Craddock (190 ROIs), and Craddock (351 ROIs). The data can be found at https://www.nitrc.org/frs/?group_id=383 (accessed on 15 April 2022), and details of the pipeline at https://www.nitrc.org/plugins/mwiki/index.php/neurobureau:AthenaPipeline (accessed on 15 April 2022). The imaging parameters are detailed at http://fcon_1000.projects.nitrc.org/indi/adhd200/ (accessed on 15 April 2022).

SRPBS

This dataset [43,44] includes data from subjects with four different diagnoses and healthy control subjects who were examined at nine sites corresponding to eight institutions. Of the 805 participants, 482 are healthy, 161 have major depressive disorder, 49 have autism spectrum disorder, 65 have obsessive-compulsive disorder, and 48 have schizophrenia. Each participant underwent a single rs-fMRI session lasting 5 to 10 min. The time series extraction procedure is detailed in the work by Yamashita and co-authors [31], where 268 regions of interest were delineated. Of note, participants who reported high levels of head movement were excluded, reducing the size of the dataset to 637 subjects. It is worth mentioning that this dataset also includes traveling-subject data, and thus the traveling-subject harmonization methodology can be applied to it. The data are available at https://bicr.atr.jp/dcn/en/download/harmonization/ (accessed on 15 April 2022), while scan parameters can be found at https://bicr.atr.jp/rs-fmri-protocol-2/ (accessed on 15 April 2022).
Traveling-Subject Dataset

As will be explained in a later section, the traveling-subject dataset [31] is necessary to estimate the measurement bias across sites in the SRPBS dataset. It is composed of data from 9 healthy participants who were scanned at each of 12 sites, which included the 9 sites in the SRPBS dataset, producing a total of 411 scan sessions.

Connectivity

We started by applying a band-pass filter of 0.04-0.07 Hz to the BOLD time series to select the adequate low frequencies. Then, the instantaneous phases φ_i(t) of each region i were estimated by applying the Hilbert transform to the filtered signals. The phase coherence P_ij(t) between two regions i and j at time t was calculated using the cosine of the phase difference, as shown in Equation (1):

P_ij(t) = cos(φ_i(t) − φ_j(t)).    (1)

Since the Hilbert transform expresses signals in polar coordinates, using the cosine function makes two regions have a phase coherence close to 1 when their time series are in phase, 0 when they are orthogonal, and −1 when they are out of phase. In this way, the phase interaction matrix P(t) represents the instantaneous phase synchrony between each pair of regions [45,46]. This procedure results in a matrix of size N × N × T for each subject, where N is the number of regions and T is the total number of points in the time series:

P(t) = [P_ij(t)], i, j = 1, ..., N.    (2)

Since T differs between subjects, the temporal dimension was eliminated by averaging each of the matrices over time. The procedure detailed above was applied to the IMPAC, ABIDE, and ADHD-200 datasets. For the SRPBS dataset, on the other hand, we worked with correlation matrices, since the data provided by that dataset correspond to Pearson correlation coefficient values, which are widely used in previous studies.

Harmonization

As introduced in the previous section, various data harmonization methods exist to reduce the potential biases and non-biological variances introduced by different acquisition sites and scanners. In this work, the effectiveness of three of them is evaluated. One is ComBat [21][22][23], which is probably the most accepted and widely used in the literature. The second is CovBat [28], which is a refined and improved version of ComBat. The last one is the traveling-subject method, whose authors have shown that it can be more effective than ComBat in certain cases [31]. Below we briefly describe their main features. It is important to emphasize that in this work, for both ComBat and CovBat, the biological covariates to be protected during the removal of scanner/site effects were defined as the gender, age, and diagnosis of each of the subjects.

ComBat

The ComBat technique, originally created for genomics analyses, is perhaps the most commonly used for the harmonization of brain connectivity data. It is based on the empirical Bayes method, assuming that errors in the data can be corrected by adjusting the means and variances of the different acquisition sites. It has been shown to be able to eliminate site differences while adequately maintaining biological variability [24,26]. Defining y_tjv as the measurement at site t, for participant j and feature v, the ComBat regression model can be written as

y_tjv = α_v + X_tj β_v + γ_tv + δ_tv ε_tjv,    (3)

where α_v is the average connectivity of the feature v, X_tj is the design matrix for the covariates of interest, and β_v is the vector of regression coefficients corresponding to X.
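Before detailing the remaining terms of the ComBat model, the connectivity construction described above can be illustrated with a minimal sketch. The 0.04-0.07 Hz band is the one stated in the text; the use of a second-order Butterworth filter, the function name, and the array layout are assumptions made only for this example, not choices taken from the original pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_interaction_matrix(ts, fs, band=(0.04, 0.07), order=2):
    """Time-averaged phase interaction matrix for one subject.

    ts : array of shape (T, N), BOLD time series with one column per ROI.
    fs : sampling frequency in Hz (1 / TR).
    Returns an (N, N) matrix of time-averaged phase coherences.
    """
    # Band-pass filter each ROI time series (0.04-0.07 Hz by default).
    b, a = butter(order, [band[0], band[1]], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, ts, axis=0)

    # Instantaneous phase of each region from the analytic signal (Hilbert transform).
    phase = np.angle(hilbert(filtered, axis=0))        # shape (T, N)

    # P_ij(t) = cos(phi_i(t) - phi_j(t)); average over time to remove the T dimension.
    diff = phase[:, :, None] - phase[:, None, :]        # shape (T, N, N)
    return np.cos(diff).mean(axis=0)                    # shape (N, N)
```

Stacking the resulting (N x N) matrix of each subject (or, for SRPBS, the provided Pearson correlation matrix) yields the per-subject features on which the harmonization methods discussed next operate.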
Returning to the ComBat model, γ_tv and δ_tv represent the additive and multiplicative effects of site t for the feature v, respectively, while ε_tjv represents an error term assumed to arise from a normal distribution with zero mean and variance σ_v². The values harmonized by ComBat are then defined as

y^ComBat_tjv = (y_tjv − α̂_v − X_tj β̂_v − γ*_tv) / δ*_tv + α̂_v + X_tj β̂_v,    (4)

where γ*_tv and δ*_tv are the empirical Bayes estimates of the parameters γ_tv and δ_tv. Therefore, biological and non-biological terms are modeled and estimated in order to algebraically eliminate the additive and multiplicative effects of the sites. The calculations were made using the library available at https://github.com/Jfortin1/ComBatHarmonization, (accessed on 15 April 2022).

CovBat

The method called Correcting Covariance Batch Effects (CovBat) was proposed to remove site effects in mean, variance, and covariance. It was built on top of the ComBat framework, assuming that the features follow Equation (3). However, the error vectors ε_tj may be spatially correlated and differ in covariance across sites, so this method modifies the principal component scores to shift each within-site covariance to the pooled covariance structure. This means that the first term of Equation (4) is assumed to have a mean of 0, but its covariance matrix Σ may differ between sites. Therefore, principal component analysis (PCA) is performed to obtain an estimate of the eigenvalues λ and eigenvectors φ of Σ. Writing ε^ComBat_tj for that first term, the principal component scores λ_tjk = φ_k^T ε^ComBat_tj are modeled as

λ_tjk = μ_tk + ρ_tk ε_tjk,

where ε_tjk follows a zero-mean normal distribution and μ_tk and ρ_tk are the center and scale parameters corresponding to the principal components k = 1, 2, ..., K, with K a hyperparameter selected to capture the desired proportion of the variation in the observations. The parameters are estimated by finding the values that bring each site's mean and variance in scores to the pooled mean and variance. Then, the site effects are removed via

λ^CovBat_tjk = (λ_tjk − μ̂_tk) / ρ̂_tk.

The CovBat-adjusted residuals are obtained as

ε^CovBat_tj = Σ_{k=1}^{K} λ^CovBat_tjk φ_k + Σ_{k>K} λ_tjk φ_k.

Adding the intercepts and the covariates' effects, the harmonized values result in

y^CovBat_tjv = ε^CovBat_tjv + α̂_v + X_tj β̂_v.

Traveling-Subject Method

This method is based on the identification of measurement biases, sampling biases, disorder factors, and subject factors. The measurement bias m for each site is defined as the deviation of the connectivity value between each pair of regions of interest from its average over all sites, and is due to the differences between the properties of the scanners involved. The sampling bias s, introduced by differences in participant groups between sites, is assumed to be different for subjects diagnosed with different disorders. Disorder factors d are defined as deviations from the control subjects. In turn, the subject factors p are calculated as the deviation of the connectivity from the average over the participants. All the biases and factors mentioned above are estimated by fitting a linear regression model using ordinary least squares with L2 regularization. The connectivity value y_jv of feature v for subject j is then given by

y_jv = x_m^T m_v + x_s^T s_v + x_d^T d_v + x_p^T p_v + const_v + e_jv,

where const_v represents the average connectivity between all participants and e_jv the noise. The vectors x_m, x_s, x_d, and x_p are one-hot encoded. It is difficult to separate differences between sites using a single dataset because the two types of biases defined above are correlated across sites. Therefore, in order to use this method, it is necessary to have an extra dataset (the so-called traveling-subject dataset), in which the participants are constant; that is, it must be composed of scans of a constant set of healthy subjects at each of the sites in the original dataset.
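As a rough illustration of the location/scale idea shared by ComBat and CovBat, the following sketch adjusts per-site means and variances toward the pooled ones. It deliberately omits the design matrix protecting the biological covariates (age, gender, diagnosis) and the empirical Bayes shrinkage that the actual ComBat implementation linked above performs, so it should be read as a toy version under those simplifying assumptions, not as the method used in this work.

```python
import numpy as np

def combat_like_adjust(X, sites):
    """Simplified location/scale site adjustment (no covariates, no
    empirical Bayes shrinkage), illustrating the idea behind ComBat.

    X     : (n_subjects, n_features) connectivity features.
    sites : (n_subjects,) site label per subject.
    Returns the adjusted feature matrix.
    """
    X = np.asarray(X, dtype=float)
    sites = np.asarray(sites)
    grand_mean = X.mean(axis=0)
    pooled_std = X.std(axis=0, ddof=1)

    X_adj = np.empty_like(X)
    for site in np.unique(sites):
        idx = sites == site
        site_mean = X[idx].mean(axis=0)           # additive site effect
        site_std = X[idx].std(axis=0, ddof=1)     # multiplicative site effect
        site_std[site_std == 0] = 1.0             # guard against constant features
        # Standardize within the site, then map back to the pooled scale.
        X_adj[idx] = (X[idx] - site_mean) / site_std * pooled_std + grand_mean
    return X_adj
```

A CovBat-style step would additionally apply an analogous standardization to the principal component scores of the residuals, as described above.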
Because the measurement and sampling biases cannot be disentangled within a single dataset, the measurement bias is estimated using only the traveling-subject dataset. It is precisely the acquisition of this traveling-subject dataset that presents the main problem of the technique, since it requires that the same reference subjects physically travel to all the acquisition sites, with the consequent logistical problems that this implies. Finally, harmonization is achieved by subtracting the estimated measurement biases m̂_v from the connectivity values, resulting in

y^Traveling_jv = y_jv − x_m^T m̂_v.

Assessment of Harmonization Quality

In this subsection, we introduce the specific information-theoretic foundations that we use in this work. As already mentioned, the measures and methods derived from this background are capable of assessing the actual effectiveness of the different harmonization methods. Readers familiar with Information Theory, Shannon entropy, Fisher Information, and the causality-complexity plane can skip this subsection.

Information Theory Measures

A key aspect of Information Theory is the concept of entropy as a measure of the uncertainty involved in the outcome of a random variable or process. These outcomes, in turn, are related to the probabilities (or relative frequencies) of the possible values that the variable or process may hold. Then, as the first step in our application of Information Theory in the context of rs-fMRI, it is necessary to define the probability distribution for the data. Our strategy was to binarize both the averaged phase interaction matrices and the correlation matrices using a threshold value of 0.5. Any other nontrivial threshold can be applied, and experiments show that the results presented below are robust with respect to this choice over a wide range (e.g., from 0.2 to 0.8). Connections with values greater than the threshold were assigned to 1, while the others were assigned to 0. In this way, each of the new matrices results in an adjacency matrix A that represents the neural graph of a subject at the given threshold. Once this matrix is obtained, the probability that a random walk goes from a node i of the graph to any other node j is calculated. This probability distribution is defined for each node as

p_{i→j} = a_ij / k_i,

where k_i is the degree of the node, obtained as k_i = Σ_j a_ij.

Shannon Entropy

Based on the P(i) distribution, the Shannon Entropy for each node can be defined as

H_i = − Σ_{j=1}^{N} p_{i→j} log p_{i→j},

where P(i) = {p_{i→j} : j = 1, ..., N} is the probability distribution vector associated with node i. In turn, the Normalized Nodal Entropy for node i is obtained by dividing by its maximum possible value,

Ĥ_i = H_i / log(N − 1).

Finally, the Normalized Network Shannon Entropy (SE) is calculated by averaging the Normalized Nodal Entropy over the entire network, resulting in

SE = (1/N) Σ_{i=1}^{N} Ĥ_i.

SE is a global disorder measure commonly used in various applications of Information Theory. An advantage of SE in networks is that it is relatively insensitive to substantial changes in distributions that are concentrated in a small region of space. Therefore, it is able to quantify the heterogeneity of networks: H → 0 for sparse networks and H → 1 for fully connected networks.

Fisher Information

Fisher Information is a statistical measure aimed at quantifying how much information about an unknown parameter can be obtained from a sample. In other words, it assesses the amount of information that an observable random variable within a population carries about an unknown parameter of the distribution that models the population.
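A compact sketch of the thresholding, random-walk distribution, and normalized network entropy just described is given below; the function name, the use of the natural logarithm, and the maximum-entropy normalization by log(N − 1) assumed above are illustrative choices of this example.

```python
import numpy as np

def network_shannon_entropy(M, threshold=0.5):
    """Normalized Network Shannon Entropy of a thresholded connectivity matrix.

    M : (N, N) time-averaged phase interaction (or correlation) matrix.
    """
    A = (np.asarray(M) > threshold).astype(float)   # binarize: 1 above threshold, else 0
    np.fill_diagonal(A, 0.0)                         # ignore self-connections
    N = A.shape[0]

    k = A.sum(axis=1)                                # node degrees k_i = sum_j a_ij
    k_safe = np.where(k > 0, k, 1.0)                 # avoid division by zero for isolated nodes
    P = A / k_safe[:, None]                          # p_{i->j} = a_ij / k_i

    with np.errstate(divide="ignore", invalid="ignore"):
        H_node = -np.sum(np.where(P > 0, P * np.log(P), 0.0), axis=1)

    H_max = np.log(N - 1)                            # assumed maximum-entropy normalization
    return float(np.mean(H_node / H_max))            # average normalized nodal entropy (SE)
```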
Using the notation of the previous subsection, the Normalized Fisher Information for a node i is given by Then, the Normalized Network Fisher Information (FI) is defined as This measure can be interpreted in various ways, for instance, the ability to precisely estimate a parameter, the amount of information that can be extracted from a set of measures, or the state of disorder of a system. Unlike Shannon Entropy, Fisher Information is a local measure based on the gradient of the underlying distribution, so it is significantly sensitive to localized disturbances in small regions. Shannon-Fisher Plane The use of the Shannon-Fisher plane was originally proposed by Vignat and Bercher [47], who defined it to show that through the simultaneous examination of both Shannon Entropy and Fisher Information, the non-stationary behavior of a complex signal may be characterized. Without any assumption on the nature of the data, the Shannon-Fisher area can be simply defined as: Using this plane, we can find that our system lies in a very ordered state when the Shannon Entropy H ∼ 0 and the Fisher Information F ∼ 1. However, when the system stays in a very disordered state, we obtain that H ∼ 1 and F ∼ 0 [48]. In general, it is widely accepted that the Shannon-Fisher Information plane is an effective tool to contrast global and local characteristics of a given probability distribution. In our case, and as performed by Freitas et al. [37], each of the networks is placed in the Shannon-Fisher plane that arises from the two measurements that have been explained in the previous sections. Quantification Measures We define the null hypothesis as that the population median of all of the sites are equal, the Kruskal-Wallis [49] test was used twice to quantify the magnitude of the effects of the acquisition sites for SE and FI. All possible combinations of datasets, atlases, and harmonization methods were analyzed: 21 cases with no harmonization, 21 cases harmonized with ComBat, 19 cases with CovBat, and 1 case with the traveling-subject method. It is worth noting that two cases (ABIDE/Craddock400/CovBat and ADHD-200/Craddock351/CovBat) are missing due to limitations in computing resources. As stated in a previous section, for the traveling-subject method, an extra dataset is needed, so the only possible case to evaluate that method is the one from the SRPBS dataset. The transformation p = − log p was applied to the p-values obtained with the test to facilitate the comparison and interpretation of the results. Table 1 shows the comparison of the harmonization methods for the different datasets, atlases, and Information Theory measures, while the figures in the next subsection visually illustrate the performance of each method described in this manuscript. Analyzing the results for the case without the harmonization stage, it is observed that the influence of the acquisition site is considerable, even more so in the IMPAC database. In addition, the impact is greater in the SE measure than in the FI measure for all cases and in the atlases with a larger number of ROIs for most cases (ABIDE/Dosenbach is the largest exception). When applying the ComBat method, substantial improvements are obtained in all cases, i.e., the site influences are unnoticeable and not statistically significant, achieving p ≥ 0.05 in 12 cases. 
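As a concrete sketch of the site-effect quantification described in the previous subsection, the per-subject SE (or FI) values can be grouped by acquisition site and passed to the Kruskal-Wallis test; the function name and the base of the logarithm used for the -log p transformation are assumptions of this example.

```python
import numpy as np
from scipy.stats import kruskal

def site_effect_score(values, sites):
    """Kruskal-Wallis test of the null hypothesis that all sites share the same
    median of a network measure (SE or FI).

    values : (n_subjects,) per-subject measure.
    sites  : (n_subjects,) acquisition-site label per subject.
    Returns (p_value, -log(p_value)); a larger -log(p) indicates a stronger site effect.
    """
    values = np.asarray(values, dtype=float)
    sites = np.asarray(sites)
    groups = [values[sites == s] for s in np.unique(sites)]
    _, p = kruskal(*groups)          # requires at least a few subjects per site
    return p, -np.log(p)             # natural log assumed for the transformation
```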
The harmonization is still better for the atlases with lower amounts of ROIs, but now the impact on the Information Theory measures is reversed: for all cases, the site effects are stronger for FI than for SE. The data for which the quality of the harmonization obtained are the lowest comes from the ADHD-200 dataset. This is due, as can be seen in Figure 2, to the existence of an outlier that may come from severe misalignments present in the scan of that particular subject. The CovBat method achieves similar results, slightly outperforming ComBat in most cases, especially when applied to atlases with no more than 200 ROIs. Finally, the traveling-subject method also represents an improvement over the case without harmonization, but it is almost insignificant compared to ComBat and CovBat. Visualization The relationships observed by analyzing Table 1 can also be visualized by plotting the Shannon-Fisher plane for the different cases. In this subsection, the most relevant results for the analysis are presented, while the others can be found in the Supplementary Material. The first row of Figure 3 shows the distribution of all the subjects of the IMPAC dataset within the Shannon-Fisher plane without previously applying a harmonization technique. Acquisition sites 28 and 31 present a significant distance from the center, so they are shown separately in the figures. In the second row, the planes corresponding to the ComBat method are presented, where the distribution is grouped in a much more uniform way. The same happens in the third row for the CovBat method. In Figure 4, both the effectiveness of ComBat/CovBat and the inability of the travelingsubject method to harmonize the measures extracted from the SRPBS dataset are evident. For the unharmonized case, we can observe that sites 5 and 6 lay outside the "cloud" of the rest of the measures, which can be attributed to acquisition differences due to already mentioned factors other than the biological ones (e.g., differences in the actual equipment, differences in the used functional BOLD MRI sequence settings, improper calibration, etc.) This separation is no longer appreciable after the application of ComBat or CovBat but remains practically unchanged with the traveling-subject method. Figures 2 and 5 show the resulting planes after applying the ComBat and CovBat methods to, respectively, the ABIDE dataset and the ADHD-200 dataset. As can be seen at the bottom left of the planes corresponding to the ADHD-200 dataset, there is an outlier that could not be removed with any harmonization method, probably caused by some kind of misalignment in the scanning device. Discussion and Conclusions These results provide new evidence about the importance of having techniques capable of removing unwanted biases caused by the acquisition sites from rs-fMRI images in order to merge data obtained by means of different scanners without incurring methodological errors. A set of tools based on Information Theory was presented to discern the quality of harmonization techniques for multi-site rs-fMRI measurements, allowing the quality of such techniques to be verified. This set of tools, when applied to data harmonized with different techniques, makes it possible to determine if there are still traces of the original locations or if the data are reliable for the subsequent specific treatment depending on the problem to be treated. 
We should mention that part of the inter-site heterogeneity might be caused by differences in sample demographics (e.g., race, education, background), which cannot be addressed with the harmonization methods presented here. On the other hand, we observed that the tools performed best with a low number of ROIs, as shown in the Results Section. This is probably due to the finer granularity and higher detail associated with a higher-order (i.e., with more ROIs) atlas, which allows capturing more inter-subject features (i.e., biological features that do not depend on the site). Further, variability in fMRI imaging parameters across sites may affect the quality of harmonization. In general, we observed that the ABIDE, IMPAC, and ADHD-200 MRI scan parameters vary considerably from site to site. As a consequence, this could induce substantial variability in the BOLD SNR obtainable from each site. Given that rs-fMRI connectivity metrics are sensitive to the fMRI BOLD SNR, this, in turn, could have an impact on the effectiveness of harmonization. For instance, as we mentioned, the misalignment in some individual scans may prevent ComBat and CovBat from fully harmonizing the ADHD-200 dataset, as can be seen in Figure 2; this could be due to differences in MRI imaging parameters between that site and the other sites in the ADHD-200 dataset, and between individual runs. In general, we observe that the information-theoretic analysis reveals that ComBat and CovBat provide better results than the traveling-subject method, and that these tools are effective in discerning the subtle traces of the acquisition site that were not removed by the harmonization method. As a rule of thumb, we could say that CovBat provides the best results for datasets with fewer than 200 ROIs, while ComBat excels otherwise. The harmonization effectiveness, as assessed by both the Normalized Network Shannon Entropy (SE) and the Normalized Network Fisher Information (FI), seems to deteriorate as the number of ROIs in the parcellation schemes increases. This could be due to variations between sites in the fMRI SNR: SE measures uncertainty, and a larger number of ROIs implies smaller ROI sizes, which makes them more sensitive to variations in fMRI SNR and hence to the variability in scanning-parameter-related site effects; FI, in turn, measures information, and thus a larger number of ROIs will be more sensitive to site effects, as they yield more information about the site. It is worth noticing that the results obtained in this work do not correspond to those obtained by Yamashita and co-authors [31], who achieved a greater reduction in measurement bias using their traveling-subject method than with other techniques such as ComBat. This discrepancy could be an indication that their harmonization technique is useful in specific analyses, as performed in their paper, but is not robust to other processing methodologies, such as Information Theory measures. In particular, our analysis of the original data with the Shannon-Fisher plane revealed that the site-related information was not completely removed by the harmonization process, which may render the method inadequate for further comparative analysis, while ComBat presents a more general and robust performance. Another disadvantage of the traveling-subject method is its high cost and time consumption due to the need for a large group of participants to travel to all the sites involved.
Therefore, applying this method to correlation matrices requires a much larger logistical effort to achieve significant harmonization, while both ComBat and CovBat present very good results. Finally, it is worth mentioning that the workflow presented in this paper cannot be applied directly to images obtained in rs-fMRI studies; it is necessary to have the corresponding BOLD time series. Hence, for this study, from the wide spectrum of existing harmonization methods [20], we used the three that are compatible with this type of data. One interesting avenue for further research is to extend the workflow to more general types of information, which could be done by properly defining the respective information-theoretic measures. Further, for future work, we consider it relevant to extend the analysis through the Shannon-Fisher plane to graphs and probabilistic networks (without requiring prior thresholding). Finally, we will investigate whether these results can also be reproduced using the Pearson correlation matrix instead of the phase interaction matrix.
A MODIFIED DYNAMIC BACTERIAL FORAGING ALGORITHM FOR AN ENHANCED POWER SYSTEM STATE ESTIMATION

Power systems are becoming more complex with the ongoing growth of the ever-changing energy demand. This dynamic situation of electric power networks makes the control and monitoring of the system a crucial issue. In order to have accurate real-time monitoring and representative models, state estimation practices are essential. This requirement becomes more significant for nonlinear systems such as electric power networks. The objective of the state estimation problem is to apply a variety of statistical and optimization methods in order to determine the best estimate of the power system variables. The variables of the power system are conventionally measured using various common metering devices in spite of the complexity and gradual expansion of the networks. However, these measuring meters are associated with errors and inaccurate output readings due to several operational, communicational and device-linked causes. Consequently, determining an improved and optimized estimation of the system state is significant and essentially needed, and hence this topic is attracting more attention among researchers. The most typically applied approach to deal with the state estimation problem is the Weighted Least-squares (WLS) method. In this paper a hybrid algorithm is introduced utilizing a WLS-based dynamic bacterial foraging algorithm (DBFA). The proposed algorithm was applied and validated using the well-known IEEE 14-bus system. The results demonstrated the effectiveness and superiority of the algorithm when compared to some other techniques used to tackle the state estimation issue.

INTRODUCTION

State estimation is an essential function for the monitoring and security of power system networks. Power system operation and control problems such as optimal power flow, economic load dispatch and contingency analysis are solved based on the system state estimation outcomes. State estimation is conventionally performed based on the data measured by the various measuring devices located in different parts of the network. These measurements are then sent to the SCADA system and employed for various power system operation and control problems. Inaccuracy of measurements is a crucial issue that has an impact on determining the realistic state of the system variables, such as the voltage profile as well as the active and reactive powers of the grid. The inaccurate measurements could generally be caused by device deficiency and data transfer-linked issues. The state estimation approach is to employ the existing measured data for statistically computing the optimal state of the power system [1]. Traditionally, the most used methods for determining the optimal power system state estimate are the Weighted Least-squares (WLS) and the Maximum Likelihood methods [2]. The WLS technique is based on the criterion of minimizing the measurement errors so that optimal estimated values of the system variables (states) are reached. Mathematically, this is usually carried out by minimizing the Jacobian matrix whose elements are the received measurements of the system. Due to the network topology and grid structure, and because not all the buses are directly coupled, the Jacobian matrix is usually a sparse matrix. As a consequence of the matrix sparsity, the resultant estimation could be considerably inaccurate. This dilemma is the main drawback associated with the WLS method [3][4][5]. In order to tackle
this downside of the typical WLS method, and to accomplish the most accurate estimated state of the monitored system, various optimization techniques are implemented. The Newton-Raphson method is one of the most widely used optimization methods, by which the first-order optimality condition is satisfied. However, it cannot be successfully applied to nonsmooth and nonconvex problems. In addition, when the Hessian is inverted, an ill-conditioned matrix can result. This situation can lead to an inaccurate estimation, which needs to be improved by applying some effective rules [1]. A great number of optimization methods have been reported in the literature and applied to solve power system problems [6,7]. Traditionally, most of these methods are deterministic and calculus-based, while the most recently introduced are heuristic and artificial optimization methods. Non-calculus-based optimization techniques have demonstrated good convergence characteristics in solving nonconvex, large-scale optimization problems with high nonlinearities [8,9]. Genetic algorithms have been applied to determine the optimal placement of phasor measurement units for an accurate state estimation [10]. The Particle Swarm Optimization (PSO) method is utilized to investigate the state estimation problem in [11]. The bacterial foraging algorithm (BFA) is another non-deterministic method that has been used to solve many power system optimization problems. This evolutionary heuristic method was originally inspired by the foraging behaviour of the E. coli bacteria [12]. The basic BFA is associated with critical and poor convergence properties when used with large-scale, high-dimensional, constrained, nonlinear and nonconvex functions. To overcome these weaknesses, the BFA was modified, improved and implemented to find the optimal or near-optimal solution for the economic dispatch [13] and the hydrothermal scheduling problem [14][15][16][17][18]. In this paper a WLS-based dynamic bacterial foraging algorithm (WLS-DBFA) is presented and implemented to compute the optimally accurate estimation of the system state. The remainder of the paper is organized as follows: Section 2 provides the formulation of the problem using WLS. In Section 3, the DBFA is described. Simulation results are demonstrated in Section 4. The conclusion is drawn in Section 5.

WLS-BASED STATE ESTIMATION

Computing unknown variables in a power system using measurements (samples) is a statistical estimation procedure. In this process the available inexact measurements are employed to formulate the optimal estimate of the unknown variables [2]. It is obvious that the measured values are obtained from a number of measuring devices with some unidentified errors. These errors can be mathematically modelled as

z_i = h_i(x) + e_i,    i = 1, 2, ..., m,    (1)

where z_i is the i-th measured value, e_i is the corresponding measurement error, and h_i is the i-th nonlinear function that relates the estimated value with its measurements.
x is a state vector that represents the estimated variables. The state estimation problem is formulated as an optimization problem in which the objective function to be minimized is the sum of the weighted residual errors. If the number of available measurements is m and the number of unknown variables is n, then this minimization problem is formulated as follows [2]:

J(x) = Σ_{i=1}^{m} (z_i − h_i(x))² / σ_i²,    (2)

where σ_i² is the variance of the i-th measurement and r_i = z_i − h_i(x) is the measurement residual. The expression given in Equation (2) is known as the weighted least-squares estimator, which is the maximum likelihood estimator when the errors are modelled as random numbers with normal-distribution characteristics. The above minimization function can be expressed in vector form by linearizing the measurement functions,

h(x) ≈ [H] x,

where [H] is an m-by-n matrix whose elements are the coefficients of the linearized functions h_i(x). The measurements are expressed in a column vector as

z = [z_1  z_2  ···  z_m]^T.

Then Equation (2) can be expressed in a compact matrix notation as

J(x) = [z − h(x)]^T [R]^{-1} [z − h(x)],    (6)

where [R] is known as the covariance matrix of the measurement errors and is defined as

[R] = diag(σ_1², σ_2², ..., σ_m²).

The formulation in Equation (6) can be expanded to obtain a general minimization form [2].

Constraints

The objective function formulated above is subject to a number of equality and inequality constraints that must be satisfied. The upper and lower boundaries of the problem are specified as [1]

x_i^min ≤ x_i ≤ x_i^max,    i = 1, 2, ..., N,

and violations of these limits are penalized in the objective function, where λ is the penalty factor and N is the number of variables.

THE DYNAMIC MODIFIED BACTERIAL FORAGING ALGORITHM

In this section, the basic BFA is introduced first. Afterwards, the DBFA, which is applied to solve the minimization state estimation problem, is demonstrated.
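Before describing the search algorithm, the objective it minimizes can be sketched as a fitness function that a population-based heuristic such as the DBFA could evaluate. The measurement functions h(x) are problem-specific power-flow relations and are passed in as a callable; the quadratic form of the penalty used to enforce the variable limits is an assumption of this sketch, since the text only states that a penalty factor λ is applied.

```python
import numpy as np

def wls_objective(x, z, h, sigma):
    """Weighted least-squares state-estimation objective J(x).

    x     : candidate state vector (e.g., bus voltage magnitudes and angles).
    z     : (m,) measurement vector.
    h     : callable mapping a state vector to the (m,) predicted measurements,
            i.e. the nonlinear measurement functions h_i(x).
    sigma : (m,) standard deviations of the measurement errors.
    """
    r = z - h(x)                              # measurement residuals
    return float(np.sum((r / sigma) ** 2))    # r^T R^{-1} r with R = diag(sigma^2)

def penalized_objective(x, z, h, sigma, x_min, x_max, lam=1e3):
    """Add a quadratic penalty for violating the box constraints x_min <= x <= x_max,
    so the objective can be handed to an unconstrained heuristic such as the DBFA."""
    violation = np.maximum(0.0, x - x_max) + np.maximum(0.0, x_min - x)
    return wls_objective(x, z, h, sigma) + lam * float(np.sum(violation ** 2))
```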
The basic bacterial foraging algorithm (BFA)

The BFA is a heuristic optimization technique motivated by the foraging behaviour of the E. coli bacteria [12]. BFA was introduced in order to find the optimal solution vector for non-differentiable, gradient-free, complex objective functions. The hyperspace search is performed using three main operations: chemotaxis, reproduction and elimination-dispersal [12]. The chemotaxis process is carried out by swimming and tumbling; the bacterium spends its life alternating between these two modes of motion. In the BFA, a tumble is represented by a unit-length step in a random direction, which specifies the direction of movement after the tumble. The size of the step taken in the random direction is represented by the constant run-length unit C(i). For a population of bacteria, the location of the i-th bacterium at the j-th chemotactic step, k-th reproduction step and l-th elimination-dispersal event is represented by θ^i(j, k, l). At this location the cost function is denoted by J(i, j, k, l), which is also known as the nutrient function. After a tumble, the location of the i-th bacterium is given by [12]

θ^i(j+1, k, l) = θ^i(j, k, l) + C(i) Δ(i) / √(Δ^T(i) Δ(i)),

where Δ(i) is a random direction vector. When the cost at θ^i(j+1, k, l) is lower than at θ^i(j, k, l), another step of size C(i) in the same direction is taken. This operation is repeated as long as a lower cost is obtained, until a maximum number of steps, N_s, is reached. The cost function of each bacterium is also affected by a kind of swarming that is produced by the cell-to-cell signalling released by the bacteria groups to form swarm patterns. This swarming is expressed as follows [12]:

J_cc(θ, P(j, k, l)) = Σ_{i=1}^{S} [ −d_attract exp( −ω_attract Σ_m (θ_m − θ_m^i)² ) ] + Σ_{i=1}^{S} [ h_repellant exp( −ω_repellant Σ_m (θ_m − θ_m^i)² ) ],

where the inner sums run over the components m of the position vector, d_attract, ω_attract, h_repellant and ω_repellant are coefficients representing the characteristics of the attractant and repellent signals released by the cells, and θ_m^i is the m-th component of the position of the i-th bacterium. P(j, k, l) is the collection of the positions of each member of the population of the S bacteria, defined as [12]

P(j, k, l) = { θ^i(j, k, l) | i = 1, 2, ..., S },

where S is the size of the bacteria population. The function representing the cell-to-cell signalling effect is added to the cost function [12]:

J(i, j, k, l) + J_cc(θ^i(j, k, l), P(j, k, l)).

A reproduction process is performed after taking a maximum number of chemotactic steps, N_c. The population is halved, so that the least healthy half dies and each bacterium in the healthiest half splits into two bacteria that take the same position [12], keeping the population size S constant. After N_re reproduction steps, an elimination/dispersal event takes place, for N_ed such events in total. In this operation each bacterium can be moved to explore other parts of the search space. The probability for each bacterium to experience the elimination/dispersal event is determined by a predefined fraction p_ed.
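The tumble-and-swim move at the core of the chemotaxis operation can be sketched as follows. This follows the standard formulation cited as [12]; the cell-to-cell swarming term J_cc and the adaptive run-length of the next subsection are omitted for brevity, and the function name is illustrative.

```python
import numpy as np

def chemotactic_step(theta, cost, C, n_swim=4, rng=None):
    """One tumble-and-swim move of a single bacterium (basic BFA chemotaxis).

    theta  : current position (candidate state vector) of the bacterium.
    cost   : callable returning the nutrient (objective) value at a position.
    C      : run-length unit, i.e. the step size (made adaptive in the DBFA).
    n_swim : maximum number of consecutive swim steps N_s.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Tumble: pick a random direction and normalize it to unit length.
    delta = rng.uniform(-1.0, 1.0, size=theta.shape)
    direction = delta / np.sqrt(delta @ delta)

    # Take the tumble step, then swim while the cost keeps decreasing.
    prev_cost = cost(theta)
    pos = theta + C * direction
    pos_cost = cost(pos)
    swims = 0
    while pos_cost < prev_cost and swims < n_swim:
        prev_cost = pos_cost
        pos = pos + C * direction
        pos_cost = cost(pos)
        swims += 1
    return pos, pos_cost
```

In the dynamic variant described next, the run-length C passed to this step shrinks as the chemotactic index grows, shifting the search from exploration to exploitation.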
Dynamic modified bacterial foraging algorithm

In the basic BFA, the run-length unit step is fixed. This can be satisfactory for small linear minimization or maximization problems. However, good convergence properties are not guaranteed when it is applied to solve large-scale nonlinear optimization problems. Search spaces that involve high dimensionality require more dynamic characteristics to converge to the global minimum. In order to achieve the desired results with this algorithm, the run-length parameter is adjusted so that it becomes dynamically adaptive. In fact, it is the main factor that controls the local and global search capabilities of the BFA. Accordingly, balancing the exploration and exploitation of the search can be accomplished by modifying the run-length unit. A decreasing nonlinear dynamic function is used to execute the swim walk as an alternative to the constant run-length unit step. This function is formulated in [19] in terms of the chemotactic step index j, the maximum number of chemotactic steps N_c, and the initial predefined run-length parameter C(N_c).

RESULTS AND DISCUSSION

The WLS-based DBFA was implemented to determine the optimal and most accurate state estimation of the well-known IEEE 14-bus standard power system, as shown in figure 1 [20]. The algorithm was implemented and coded in MATLAB and executed on an Intel Core i7-8750H 2.20 GHz personal computer with 8 GB RAM. In order to check for consistency, 50 independent runs were conducted with a different random initial solution for each run. The line and bus data can be found in [20] and are tabulated in tables 1 and 2, respectively. The available measurements are given in [21]. In order to validate the proposed algorithm, a small power system with only three buses was utilized before applying it to the IEEE 14-bus system. The WLS-DBFA was executed for the test system. Upper and lower limits for the system variables were set appropriately, as were the DBFA parameters and its stopping criteria. The results were compared to those determined by applying the WLS Newton-Raphson method, WLS, and the Particle Swarm Optimization based PSO-WLS [21]. The obtained results, as well as the comparison with the mentioned methods, are shown in table 3. The absolute percent error was computed for the bus voltage magnitude and angle. Figure 2 illustrates this error percentage.

Figure 2. The absolute percent error of the estimated voltage magnitude and angle.
Effect of synthesis route on the microstructure of SiO2 doped bismuth titanate ceramics

The synthesis and characterization of SiO2 doped bismuth titanate ceramics were investigated. The four investigated compositions belonging to the system Bi2O3-TiO2-SiO2 are located in the section Bi4Ti3O12-SiO2, near it, and in the binary Bi2O3-TiO2 system. Melt quenching was applied for the synthesis of the SiO2 doped bismuth titanate ceramics and for the sample 40Bi2O3·60TiO2. The binary sample 70Bi2O3·30TiO2 was prepared by gradient heating of bulk materials near the liquidus temperature (modified solid state reaction). The influence of the thermal treatment on the phase formation and microstructure was evaluated using XRD, EDS and SEM. The binary samples prepared by solid state reaction at low temperature (1000°C) possess a poly-phased dense microstructure, while secondary crystallization combined with porosity formation is typical for the sample obtained at high temperature (1150°C). The ternary Bi2O3-TiO2-SiO2 samples, obtained by supercooling of the melts down to room temperature, were thermally treated at 700 and 800°C. They consist of elongated crystals in an amorphous matrix. The crystals have a lower Bi2O3 content and a higher TiO2 content than the nominal batch composition. The XRD data show that the main crystalline phases in the ceramics produced by the melt quenching method and the solid state reaction are β-Bi2O3, Bi4Ti3O12 and one unknown new phase. It is proved that the applied methods of synthesis are suitable for the generation of different microstructures in the bulk SiO2 doped bismuth titanate ceramics, which is a promising basis for the modification of their electrical properties.

I. Introduction

The application of the Aurivillius family of bismuth-based ferroelectric compounds with a layered structure [1] in capacitors, sensors, piezoelectric and electrooptic devices [2][3][4] is strongly influenced by the method of preparation. Doping with different cations is used to control the point defects. Various low- and high-temperature routes of synthesis are used: the sol-gel method [5][6][7], hydrothermal crystallization [8], metal-organic decomposition [9] and others. Recently, crystallization from melts and glasses has also been applied [10][11][12][13][14]. This method gives the possibility to control the particle size evolution during the transition from the amorphous to the crystalline state and to achieve a suitable crystallographic orientation in the polycrystalline materials. In our previous study, the phase formation in the system Bi2O3-TiO2-SiO2 from fast-quenched melts was investigated [15]. It was established that the introduction of 20-40 mol% SiO2 stimulates the partial amorphization of the samples. It is known that the properties of some ferroelectric ceramics can be enhanced by texturing of their structure, as in the case of partial grain orientation in polycrystalline Bi4Ti3O12 ceramics obtained by sol-gel, magnetic alignment and other methods [16][17][18][19]. All these facts motivate our further investigations. The purpose of the present study is to elucidate the influence of the heat treatment on the phase formation and microstructure of SiO2 doped bismuth titanate ceramics obtained by melting.

II.
Experimental

The selection of compositions with high melting temperatures was made according to the data for the liquidus temperature in the system Bi2O3-TiO2 [13,20,21], as well as to the glass formation in the system Bi2O3-SiO2 [22,23]. Four compositions (in mol%) were selected: one in the section Bi4Ti3O12-SiO2 (24Bi2O3·36TiO2·40SiO2). For all samples the phase formation was studied by X-ray diffraction analysis (XRD, TUR M62, Cu-Kα radiation) and energy dispersive spectroscopy (EDS, EDAX 9900). The microstructure was observed by scanning electron microscopy (SEM, Philips 525M). For the EDS analysis the samples were prepared by mechanical polishing and covered with a thin carbon film, while the SEM observations were made on fresh fractured surfaces, also covered with a thin carbon film.

III. Results and discussion

The investigation started with a model binary composition 70Bi2O3·30TiO2 obtained by solid state reaction with heating for 1 hour in the temperature range 1000-1150°C. The SEM analysis illustrates the formation of different microstructures in the sample sintered at low temperature, 1000°C (Fig. 2a). Secondary crystallization combined with porosity formation is observed after the high-temperature (1150°C) treatment (Fig. 2b), where clearly shaped cubic-like crystals are formed. The sample prepared at low temperature is a poly-phased one, and its XRD data contain the patterns of the phase β-Bi2O3 and some patterns of other non-identified phases (Fig. 3a). The main crystalline phase is β-Bi2O3 (Fig. 3b) for the sample treated at 1150°C. The sample with the composition corresponding to the phase Bi4Ti3O12 (40Bi2O3·60TiO2) was quenched from the melt at 1300°C. The XRD analysis shows mainly the formation of the phase Bi4Ti3O12 without texturing effects (Fig. 4). The melt-quenched sample with composition 24Bi2O3·36TiO2·40SiO2, after subsequent thermal treatment at 700°C, consists of elongated crystals in an amorphous matrix (Fig. 5). According to the EDS analysis, the crystals and the matrix differ in composition: the crystals have a lower Bi2O3 content and a higher TiO2 content in comparison to the nominal composition (Figs. 5a,b). For all EDS analyses the statistical uncertainty is less than 4-5 mol%. Full crystallization, combined with a predominating orientation of the crystals, is achieved after batch heat treatment for 5 hours at 800°C (Fig. 5c). Before the thermal treatment, according to the XRD results, the sample was amorphous (Fig. 6a), but after the long heating the crystallization has been achieved (Fig. 6b), and a new phase having XRD patterns close to those of the phase Bi4Ti3O12 appears. Decreasing the SiO2 content leads to a more complex crystallization with the participation of several phases, including Bi4Ti3O12 and some unknown phase (Fig. 7).

IV. Conclusions

The investigation shows for the first time that, depending on the conditions of the melting and the additional heat treatment of the supercooled samples, different poly-phased ceramic materials with various microstructures can be obtained in the system Bi2O3-TiO2-SiO2. The formation of the phase Bi4Ti3O12, Bi2O3 polymorphs and one unknown phase was established. These results are a promising basis for the control and modification of the electrical properties of the obtained bulk SiO2 doped bismuth titanate ceramics.
2023 Search for the semileptonic decays Ξ 0 c → Ξ 0 ℓ + ℓ − at Belle − 1 collected with the Belle detector at the KEKB asymmetric energy electron-positron collider, we report the results of the first search for the rare semileptonic decays Ξ 0 c → Ξ 0 ℓ + ℓ − ( ℓ = e or µ ). No significant signals are observed in the Ξ 0 ℓ + ℓ − invariant-mass distributions. Taking the decay Ξ 0 c → Ξ − π + as the normalization mode, we report 90% credibility upper limits on the branching fraction ratios B (Ξ 0 c → Ξ 0 e + e − ) / B (Ξ 0 c → Ξ − π + ) < 6 . 7 × 10 − 3 and B (Ξ 0 c → Ξ 0 µ + µ − ) / B (Ξ 0 c → Ξ − π + ) < 4 . 3 × 10 − 3 based on the phase-space assumption for signal decays. The 90% credibility upper limits on the absolute branching fractions of B (Ξ 0 c → Ξ 0 e + e − ) and B (Ξ 0 c → Ξ 0 µ + µ − ) are found to be 9 . 9 × 10 − 5 and 6 . 5 × 10 − 5 , respectively. I. INTRODUCTION In the Standard Model (SM), the weak-current interaction has an identical coupling to all lepton generations, which allows Lepton Flavor Universality (LFU) to be tested in the semileptonic decays of the hadrons.Theoretically, the study of semileptonic decays of baryons has complications that are not present in the study of analogous decays of mesons as the contributions from W -exchange transitions lead to sensitivity to the helicity structure of the effective Hamiltonian [1][2][3][4].Furthermore, the hadronic form factors are not as well known for baryons as they are for mesons.Thus, the experimental results on baryonic semileptonic decays give important inputs for lattice quantum chromodynamics and other theoretical models. Experimentally, few baryonic neutrino-less semileptonic decays have been observed. The FCNC process is forbidden at tree level in the SM by the Glashow-Iliopoulos-Maiani mechanism [12].However, some tensions have been reported recently in B meson decays involving the b → sℓ + ℓ − processes via LFU observables and angular analysis [13][14][15][16][17][18], whereas recently LHCb reported the disappearance of the anomaly on LFU [19].Hence, the study of semileptonic decays of baryons provides an opportunity to test the SM, and also can help in the understanding of the recent anomalies in meson FCNC processes. The lack of studies on semileptonic decays of charmed baryons provides a strong motivation for further research on these decays.The Ξ 0 c → Ξ 0 ℓ + ℓ − decays, which are related to the W -exchange contribution in Λ + c → pℓ + ℓ − decays under SU(3) flavor symmetry, have not been experimentally measured yet.Measurement of both Ξ 0 c → Ξ 0 e + e − and Ξ 0 c → Ξ 0 µ + µ − decay rates would also allow an LFU test to be performed.Based on the SU(3) flavor symmetry and the recent experimental result on B(Λ + c → pµ + µ − ) [11], the upper limits at the 68.3% confidence level on the branching fractions of the Cabibbo-favored modes Ξ 0 c → Ξ 0 ℓ + ℓ − are predicted to be In this paper, we show the results of the first search for the Ξ 0 c → Ξ 0 ℓ + ℓ − decays using the full data sample of 980 fb −1 collected with the Belle detector [20].The decay Ξ 0 c → Ξ − π + is used as the normalization mode. II. 
THE DATA SAMPLE AND THE BELLE DETECTOR This analysis is based on data recorded at or near the Υ(nS) (n = 1 − 5) resonances by the Belle detector [20] at the KEKB asymmetric energy electron-positron collider [21].The Belle detector is a large solid-angle magnetic spectrometer consisting of a silicon vertex detector, a 50-layer central drift chamber (CDC), an array of aerogel threshold Cherenkov counters (ACC), a barrel-like arrangement of time-offlight scintillation counters (TOF), an electromagnetic calorimeter (ECL) comprised of CsI(Tl) crystals located inside a superconducting solenoid coil that provides a 1.5 T magnetic field, and an iron flux return placed outside the coil, which is instrumented to detect K 0 L mesons and to identify muons (KLM). Signal Monte Carlo (MC) events are generated using EVTGEN [22] to determine signal shapes, optimize the selection criteria, and calculate the reconstruction efficiencies.The generated e + e − → cc events are simulated using pythia [23] with a specific Belle configuration.The Ξ 0 c particles in signal MC simulation decay to Ξ 0 e + e − , Ξ 0 µ + µ − , and Ξ − π + using a phasespace model.These events are processed by a detector simulation based on geant3 [24].The Belle generic MC samples, which contain the MC samples of Υ(1S, 2S, 3S) decays, Υ(4S) s , and e + e − → q q (q = u, d, s, c) at center-of-mass (c.m.) energies, √ s, of 9.46, 10.024, 10.355, 10.52, 10.58, and 10.867 GeV with two times the total integrated luminosity of data, are used to study possible peaking backgrounds and verify the event selection criteria. III. EVENT SELECTION CRITERIA For well-reconstructed charged tracks, except those from Ξ − → Λπ − and Λ → pπ − decays, the impact parameters perpendicular to and along the beam direction with respect to the nominal interaction point (IP) are required to be less than 0.1 cm and 2 cm, respectively.Particle identification (PID) is applied to the reconstructed tracks.Pions, kaons, and protons are distinguished based on specific ionisation in the CDC, time measurement in the TOF, and the response of the ACC: this information is combined to form a likelihood L i for each particle hypothesis i, where i = π, K, or p. Related likelihoods are used to identify leptons: electron identification also includes a comparison of track and ECL cluster information, and muon identification is based on an extrapolation of the particle track, and hits in the KLM [25][26][27]. The Λ candidates are reconstructed via Λ → pπ − decay using a Kalman filter [28] with fitted χ 2 probability, P χ 2 , greater than 0. The reconstructed mass should be within ±3.5 MeV/c 2 of the nominal mass [29], corresponding to approximately 2.5 times of the mass resolution (σ).The transverse distance for reconstructed Λ vertex with respect to the IP is required to be greater than 0.35 cm.A loose PID requirement is applied on the proton with L p /(L p + L K ) > 0.2 and L p /(L p + L π ) > 0.2.And cos(α xyz (Λ)) is required to be larger than 0. Hereinafter, α xyz (i) is defined as the angle between the vector from the IP to the fitted decay vertex and the momentum vector of the reconstructed particle i; α xy (i) is defined as the angle between the projections of these vectors on the plane perpendicular to the beam direction. 
Each π0 candidate is reconstructed from a pair of photons with energy larger than 30 MeV in the barrel region of the ECL (−0.63 < cos θ < 0.85) or larger than 50 MeV in the endcaps (−0.91 < cos θ < −0.63 or 0.85 < cos θ < 0.98). Here, θ is the polar angle with respect to the detector axis, with the θ = 0 direction aligned approximately with the e− beam. The reconstructed invariant mass of the π0 candidates is required to be within ±17.4 MeV/c² (∼3σ) of the π0 nominal mass. A mass-constrained fit is applied to the π0 candidates, and the momenta of the fitted π0 candidates in the laboratory frame are required to exceed 0.15 GeV/c. The Ξ− → Λπ− decays are selected using the following criteria. The π− track is required to have a transverse momentum higher than 50 MeV/c. A TreeFit algorithm [28], which performs a global decay-chain vertex fit for a particular process, has been applied to the Ξ− decay chain, with P_χ² > 0 required. The decay chain is required to satisfy cos(α_xyz(Ξ−)) > 0 and cos(α_xy(Λ))/cos(α_xy(Ξ−)) < 1.006. The distances of the decay vertices of the reconstructed candidates from the IP, denoted as L_i, should satisfy L_Λ > L_Ξ− > 0.1 cm. The reconstructed mass should be within ±5 MeV/c² (∼2.5σ) of the nominal mass [29]. The Ξ0 is reconstructed by combining the selected Λ and π0 candidates. A TreeFit [28,30] to the Ξ0 decay chain is applied, with P_χ² > 0 required. Since the π0 from the Ξ0 decay has negligible vertex position information, the fit is performed with the following steps. Firstly, taking the IP as the point of origin of the Ξ0, the point of intersection of the Ξ0 trajectory and the reconstructed Λ trajectory is found. Then, this position is taken as the decay location of the Ξ0 hyperon, and the π0 is re-made using this position as its point of origin. Only those combinations in which the decay location of the Ξ0 indicates a positive Ξ0 path length are retained. The decay chain is also required to satisfy cos(α_xyz(Ξ0)) > 0, cos(α_xy(Ξ0)) > cos(α_xy(Λ)), and L_Λ > L_Ξ0 > 0.5 cm. The reconstructed mass should be within ±12 MeV/c² (∼2.5σ) of the nominal mass. Backgrounds are studied using sideband samples: Ξ0 candidates whose invariant mass differs by between 20 and 44 MeV/c² from the nominal value [29]. For the normalization channel Ξ0c → Ξ−π+, the selected Ξ− hyperons are combined with selected π+ candidates identified with L_π/(L_π + L_K) > 0.2 and L_π/(L_π + L_p) > 0.2. To reconstruct the signal modes Ξ0c → Ξ0ℓ+ℓ−, the Ξ0 candidate is combined with a pair of lepton tracks, e+e− or µ+µ−, which are identified with L_e/(L_e + L_non−e) > 0.9 and L_µ/(L_µ + L_K + L_π) > 0.9 for electrons and muons, respectively, where L_non−e is the likelihood for non-electron tracks. The Ξ0c candidates should be consistent with originating from the IP and pass the vertex- and mass-constrained fits, with P_χ² > 0.01, to the whole decay chain including the intermediate states Ξ0, Ξ−, Λ, and π0 [28]. To reduce the combinatorial background, especially that from B meson decays, the scaled momentum of the Ξ0c candidate, x_p = p*_{Ξ0c}/√(s/4 − M²_{Ξ0c}), is required to be greater than 0.5, where p*_{Ξ0c} is the momentum of the Ξ0c candidate in the e+e− c.m.
frame, and M_{Ξ0c} is the invariant mass of the Ξ0c candidate. To suppress the background from photon conversions in the Ξ0c → Ξ0e+e− decay, the e+e− pair is required to have an invariant mass greater than 0.1 GeV/c². Each of the electron candidates is also combined with every opposite-charged particle in the event, using the electron hypothesis: the invariant mass of all such pairs is required to be greater than 0.1 GeV/c². In events where there is at least one candidate, the average number of candidates is about 1.3. All candidates are retained. The selection criteria on the invariant mass of the electron pair, the P_χ² for the Ξ0c decay chain, and the scaled momentum x_p in this analysis are optimized by maximizing the Punzi figure-of-merit, ε/(3/2 + √B) [31]. Here, '3' is the desired significance level, ε is the detection efficiency of the Ξ0c → Ξ0e+e− mode based on the signal MC simulation, and B is the number of normalized generic MC events in the signal range, 2.32 < M_{Ξ0e+e−} < 2.50 GeV/c² (> 95% of signal events retained). These requirements are also found to be optimal for Ξ0c → Ξ0µ+µ−, so they are applied for both channels.

IV. BRANCHING FRACTION MEASUREMENT

For the reference mode, Ξ0c → Ξ−π+, the above selection criteria for the Ξ− and Ξ0c candidates are applied. Figure 1 shows the invariant-mass distribution of the Ξ−π+ combinations from data, together with the result of an unbinned extended maximum-likelihood (EML) fit. In the fit, the signal shape of the Ξ0c candidates is parameterized by a double-Gaussian function, and the background shape is described by a first-order polynomial. The parameters are free in the fit. The fitted signal yield is 28937 ± 272.

FIG. 2: The invariant-mass distributions of Ξ0 → Λπ0 candidates before combining with the ℓ+ℓ− pairs in the selected Ξ0c signal regions in the data. The dots with error bars represent the data, the solid curve shows the total best-fit result, the dashed curve shows the background shape, and the solid and dashed lines show the signal and sideband regions of the Ξ0 candidates, respectively.

After applying the selection criteria introduced in the last section, Fig. 2 shows the invariant-mass spectrum for the reconstructed Ξ0 candidates before combining with the lepton pairs in the selected Ξ0c signal regions from data, together with the fit result. Here, a double-Gaussian function is used to model the signal shape, and a second-order polynomial is used for the background. The signal shape parameters are fixed to the values found in the signal MC, while the background parameters are free in the fit. The invariant-mass distributions of Ξ0e+e− and Ξ0µ+µ− from the signal MC simulations and from data are shown in Fig. 3 and Fig. 4, respectively, together with the unbinned EML fit results to the true signal distributions from the signal MC events and to the spectra from data. To take the energy loss due to bremsstrahlung into account, the shapes of correctly reconstructed Ξ0c candidates are described by two Crystal Ball functions [32] for the di-electron mode, while a double-Gaussian function is used as the signal shape for the di-muon mode. Incorrectly reconstructed signal candidates ("broken signal") have a broader distribution in the signal MC simulation, shown by the cyan-shaded histograms in Fig. 3. The broken signal is mainly due to incorrectly selected photons in the Ξ0 reconstruction. Similar to the treatment in Ref.
[33], we extract the shape of the broken signal from the MC simulation via RooKeysPdf [34], and treat it as a distinct component in the final Ξ0c signal yield extraction. The peaking background components are determined using the algorithm of Ref. [35], and we find them to be negligible. No significant Ξ0c → Ξ0ℓ+ℓ− signals are observed in the data. The cyan shaded histograms in Fig. 4 indicate the normalized Ξ0 sidebands. For the fits to data, the signal shapes are taken from the fits to the signal MC samples above, with all the parameters fixed. Here, the width of the signal shape is multiplied by a correction factor R_σ = σ_data/σ_MC = 1.12 ± 0.06, where σ_data and σ_MC are the fitted resolutions of the Ξ0c → Ξ−π+ shapes from data and MC simulation, respectively. The broken-signal shape is taken from the MC simulation as described above, and the ratio of the broken signal to correctly reconstructed signal events, R_broken/signal, is fixed at 0.50 (0.46) for the Ξ0e+e− (Ξ0µ+µ−) mode according to the MC simulation. Linear functions with free parameters are used for the smooth background shapes. The fitted Ξ0c signal yields are 9.1 ± 7.1, with a significance of 1.4σ, and −0.9 ± 2.1 for the Ξ0c → Ξ0e+e− and Ξ0c → Ξ0µ+µ− decays, respectively. Assuming the signal branching fraction has a uniform prior probability density function, the Bayesian upper limit at 90% credibility on the number of signal events (N^UL) is determined by solving the equation

∫_0^{N^UL} L(x) dx / ∫_0^{+∞} L(x) dx = 0.90,

where x is the number of fitted signal events and L(x) is the likelihood function in the fit to data. The upper limits at 90% credibility on the relative branching fractions are calculated, separately for ℓ = e and ℓ = µ, as

B(Ξ0c → Ξ0ℓ+ℓ−)/B(Ξ0c → Ξ−π+) = [N^UL(Ξ0c → Ξ0ℓ+ℓ−) ε(Ξ0c → Ξ−π+) B(Ξ− → Λπ−)] / [N_obs(Ξ0c → Ξ−π+) ε(Ξ0c → Ξ0ℓ+ℓ−) B(Ξ0 → Λπ0)].

Here, N^UL(Ξ0c → Ξ0ℓ+ℓ−) and ε(Ξ0c → Ξ0ℓ+ℓ−) are the upper limits on the signal yield in data and the reconstruction efficiencies according to the MC simulations, respectively, of the Ξ0c → Ξ0ℓ+ℓ− decays; N_obs(Ξ0c → Ξ−π+) and ε(Ξ0c → Ξ−π+) are the number of observed events in data and the reconstruction efficiency, respectively, of the Ξ0c → Ξ−π+ decay; and the branching fractions are taken as B(Ξ0c → Ξ−π+) = (1.43 ± 0.32)%, B(Ξ0 → Λπ0) = (99.524 ± 0.012)%, and B(Ξ− → Λπ−) = (99.887 ± 0.035)% [29]. To take into account the systematic uncertainties detailed in the next section, the likelihood curve is convolved with a Gaussian function whose width equals the corresponding total multiplicative systematic uncertainty. The calculated 90% credible upper limits on the numbers of signal events, and on the relative and absolute branching fractions in data, are summarized in Table I. In that table, N_fit is the fitted signal yield, N^UL is the 90% credibility upper limit on the number of signal events from data before considering systematic uncertainties, B^UL/B(Ξ0c → Ξ−π+) and B^UL are the 90% credible upper limits on the relative and absolute branching fractions, respectively, for the Ξ0c → Ξ0ℓ+ℓ− decays with systematic uncertainties included, and B(Ξ0c → Ξ−π+) = (1.43 ± 0.32)% is taken from the Particle Data Group [29]. The muon identification criterion used in this analysis effectively excludes tracks with a momentum too low to reach the KLM [27]: this leads to a reconstruction efficiency in the di-muon channel that is a factor of three lower than in the di-electron channel.

V. SYSTEMATIC UNCERTAINTIES

The systematic uncertainties in the measurements of the branching fractions are divided into two categories: multiplicative and additive. The multiplicative systematic uncertainties include the detection-efficiency-related uncertainties, the
V. SYSTEMATIC UNCERTAINTIES

The systematic uncertainties in the measurements of the branching fractions are divided into two categories: multiplicative and additive. The multiplicative systematic uncertainties include the detection-efficiency-related uncertainties, the branching fractions of intermediate states, and the fitting uncertainty for the normalization mode $\Xi_c^0 \to \Xi^-\pi^+$. The additive systematic uncertainties are those in the fits used to extract the signal yields for the $\Xi_c^0 \to \Xi^0\ell^+\ell^-$ decays.

The detection-efficiency-related uncertainties include those for the tracking efficiency, PID efficiency, and $\pi^0$ and $\Lambda$ selection efficiencies, and are estimated based on the simulated MC samples. Since there are four charged tracks in the final states of both the signal and reference decay modes, the uncertainty in the tracking efficiency cancels in this analysis. The proton PID uncertainties are found to be 0.6% and 1.1% for the $\Xi_c^0 \to \Xi^0 e^+e^-$ and $\Xi_c^0 \to \Xi^0\mu^+\mu^-$ modes, respectively, taking into account the proton momentum differences with respect to the normalization mode. Since the $\Lambda \to p\pi^-$ decay is reconstructed in each decay mode and no PID requirement is applied to the pion track from the $\Lambda$ decay, the other sources of $\Lambda$ selection uncertainty cancel. Using control samples of $D^{*+} \to D^0\pi^+$ with $D^0 \to K^-\pi^+$, the PID uncertainties are estimated to be 0.5% and 0.6% for the pions from $\Xi_c^0$ and $\Xi^-$ decays, respectively, and are added linearly to give 1.1% for the pion tracks in the $\Xi_c^0 \to \Xi^-\pi^+$ decay. Based on a study of $J/\psi \to \ell^+\ell^-$ decays, the uncertainties from lepton identification are determined to be 3.2% for electrons and 5.2% for muons. The PID uncertainties are summed in quadrature for the different decay modes, assuming that they are independent for the $\Xi_c^0 \to \Xi^0\ell^+\ell^-$ and $\Xi_c^0 \to \Xi^-\pi^+$ decays. The total PID systematic uncertainties for the $\Xi_c^0 \to \Xi^0 e^+e^-$ and $\Xi_c^0 \to \Xi^0\mu^+\mu^-$ decays are determined to be 3.5% and 5.5%, respectively. The systematic uncertainties for the momentum-weighted $\pi^0$ selection efficiency are estimated to be 3.3% and 3.0% for the $\Xi_c^0 \to \Xi^0 e^+e^-$ and $\Xi_c^0 \to \Xi^0\mu^+\mu^-$ decays, respectively, according to a study of a $\tau^- \to \pi^-\pi^0\nu_\tau$ control sample. Assuming these uncertainties to be uncorrelated, the uncertainties from the PID and $\pi^0$ efficiencies are added in quadrature to yield the total multiplicative systematic uncertainties.

In the measurements of the absolute branching fractions, the uncertainty on $\mathcal{B}(\Xi_c^0 \to \Xi^-\pi^+)$ is 22.4% [29], which is the dominant contribution.

In the fit to the $M(\Xi^-\pi^+)$ distribution from data for the $\Xi_c^0 \to \Xi^-\pi^+$ decay, we change the fit range by $\pm 10\%$ and the order of the polynomial for the background shape, and take the relative differences of the fitted signal yields as the uncertainties. These uncertainties are added in quadrature, giving a total of 0.7%.

Additive systematic uncertainties due to the $\Xi^0\ell^+\ell^-$ invariant-mass fits are assessed by re-performing the fits with all combinations of the following options: (1) change the resolution scale factor $R_\sigma$ by $\pm 1\sigma$ of its uncertainty; (2) change the fit range by $\pm 10\%$; (3) change the polynomial describing the background shape from first order to second order; and (4) multiply the fixed $R_{\rm broken/signal}$ ratios by correction factors of 1.43 and 0.93 for wrong $\Xi^0$ combinations in the di-electron and di-muon modes, respectively, calculated as the ratios of the number of events in the $\Xi^0$ sidebands in data to that in the generic MC samples.
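The quadrature combination of the PID terms quoted above can be reproduced in a few lines. The grouping below (proton PID, reference-mode pion PID, electron ID, then the $\pi^0$ term) is my reading of the text, so treat it as a sketch rather than the analysis code.

```python
import math

def quad_sum(*u):
    """Combine independent relative uncertainties (in %) in quadrature."""
    return math.sqrt(sum(x * x for x in u))

# Xi_c^0 -> Xi^0 e+ e-: proton PID, ref-mode pion PID, electron ID (all in %).
pid_ee = quad_sum(0.6, 1.1, 3.2)
print("total PID (ee): %.2f%%" % pid_ee)            # ~3.5% up to rounding
print("PID (+) pi0:    %.2f%%" % quad_sum(pid_ee, 3.3))
```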
For the measurements of the upper limits at 90% credibility on the relative and absolute branching fractions of the $\Xi_c^0 \to \Xi^0\ell^+\ell^-$ decays, the systematic uncertainties are taken into account in two steps. First, based on the study of the additive systematic uncertainties, the most conservative upper limits at 90% credibility on the numbers of $\Xi_c^0$ signal events are 25.6 and 4.6 for the di-electron and di-muon modes, respectively. Then, the likelihood function of the case giving the most conservative upper limit is convolved with a Gaussian function whose width equals the corresponding total multiplicative systematic uncertainty for each $\Xi_c^0 \to \Xi^0\ell^+\ell^-$ decay to obtain the final upper limit. The multiplicative systematic uncertainties from the different sources are summarized in Table II.

In this analysis, the simulated $\Xi_c^0 \to \Xi^0\ell^+\ell^-$ decays are generated with a phase-space model, since the exact physics models for these decays are unknown and no significant signals are observed in data. Thus, no systematic uncertainty due to the choice of decay model is included. Instead, we provide the reconstruction efficiencies in bins of $(M^2_{\ell^+\ell^-}, M^2_{\Xi^0\ell^+})$, shown in Tables III and IV for the di-electron and di-muon modes, respectively.

Compared with the theoretical predictions for the branching fractions of the $\Xi_c^0 \to \Xi^0\ell^+\ell^-$ decays, $\mathcal{B}(\Xi_c^0 \to \Xi^0 e^+e^-) < 2.35 \times 10^{-6}$ and $\mathcal{B}(\Xi_c^0 \to \Xi^0\mu^+\mu^-) < 2.25 \times 10^{-6}$ [1], the experimental upper limits reported in this paper using a uniform phase-space distribution are higher by an order of magnitude than those calculated using theoretical arguments and input from other experimental results. A more precise analysis based on the larger data samples collected by Belle II is expected in the future.

[FIG. 1 caption: The invariant-mass distribution of $\Xi^-\pi^+$ combinations in data. The dots with error bars represent the data, the solid curve shows the best-fit result, and the blue dashed curve shows the fitted background.]

[FIG. 3 caption: The invariant-mass distributions of (a) $\Xi^0 e^+e^-$ and (b) $\Xi^0\mu^+\mu^-$ combinations in signal MC simulation. Points with error bars show the correctly reconstructed signal, the blue solid curves show the results of the fit to the signal shape, and the cyan-shaded histograms show the broken-signal distributions.]

[FIG. 4 caption: The invariant-mass distributions of (a) $\Xi^0 e^+e^-$ and (b) $\Xi^0\mu^+\mu^-$ combinations in data. Points with error bars show the data, the solid curves show the best-fit results, and the blue dashed curves show the background component in the fits. Cyan-shaded histograms show the normalized $\Xi^0$ sidebands.]

[TABLE I caption: The summarized values for the branching fraction measurements of $\Xi_c^0 \to \Xi^0\ell^+\ell^-$ decays.]

[TABLE II caption: The multiplicative systematic uncertainties (%) on the measurements of the relative and absolute branching fractions.]
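To illustrate the second step of the limit-setting procedure described above, the sketch below convolves a toy likelihood with a Gaussian systematic kernel and recomputes the 90% quantile. The 6% width is an assumed stand-in for the total multiplicative uncertainty, and the toy Gaussian likelihood replaces the actual fit likelihood.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0.0, 120.0, 2401)
dx = x[1] - x[0]
L = np.exp(-0.5 * ((x - 9.1) / 7.1) ** 2)   # toy stand-in for the fit likelihood

syst_width = 0.06 * 25.6                     # assumed: 6% of the UL scale, in events
L_sys = gaussian_filter1d(L, sigma=syst_width / dx, mode="nearest")

def quantile90(xs, ys):
    """90% point of the normalised cumulative of a sampled likelihood."""
    cdf = np.cumsum(ys)
    cdf /= cdf[-1]
    return xs[np.searchsorted(cdf, 0.90)]

print("UL before: %.1f, after smearing: %.1f"
      % (quantile90(x, L), quantile90(x, L_sys)))
```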
Effect of Savings on a Gas-Like Model Economy with Credit and Debt

In kinetic exchange models, agents make transactions based on well-established microscopic rules that give rise to macroscopic variables, in analogy to statistical physics. These models have been applied to study processes such as income and wealth distribution, sources of economic inequality, economic growth, etc., recovering well-known concepts in the economic literature. In this work, we apply the ensemble formalism to a geometric agent model to study the effect of saving propensity in a system with money, credit, and debt. We calculate the partition function to obtain the total money of the system, with which we give an interpretation of the economic temperature in terms of the different payment methods available to the agents. We observe an interplay between the fraction of money that agents can save and their maximum debt. The system's entropy increases as a function of the saved proportion, and increases even more when there is debt.

Introduction

Econophysics brings together a set of theoretical and empirical achievements that come from applying well-known tools of physics, particularly from thermodynamics and statistical mechanics, to economics and financial markets. Its scope and applicability are still being discussed [1]. On the one hand, there are strong criticisms of the veracity of the ideal hypotheses inherited from physical systems. However, recent evidence shows that well-known concepts, such as Solow's economic growth and economic inequality in the sense of Piketty's work, may arise from kinetic exchange models [2]. The truth is that this area has influenced financial economics studies for the last twenty years and can be considered a well-established branch of current research [3].

The kinetic exchange gas-like models are inspired by molecular models of gases formed by colliding particles [4]. Just as the particles of a gas exchange energy during collisions, agents exchange a fraction of their capital under the hypothesis of the conservation of total money [5]. With this approach, it has been possible to reproduce some patterns observed in capitalist economic systems, such as the Pareto rule of wealth distribution [6]. The main advantage of these models, and the reason they have become attractive in various disciplines, is that their mathematical formulation and numerical implementation are very simple and straightforward [7]. They are so versatile that a wide range of situations of practical economic and social interest has been addressed with them [8].

Through exchange models, some economic inequality indices have been explained and calculated from microscopic principles [9–11]. When studying the distribution of income and wealth with these models, different trends have been verified, such as the formation of classes [12,13]. Indeed, it has been seen that in the presence of labor unions, government policies to reduce inequality, or other solidarity factors of social protection, the corresponding wealth distribution changes to a bimodal distribution [14]. Although these distributions are rare and seem unrealistic, they have also been found when considering risky transactions and other stressful situations [15], such as an epidemic outbreak [16]. In order to simulate these theoretical models, it has been necessary to use different numerical and computational tools, such as Monte Carlo methods for open systems [17] or classification methods for market behavior [18].
It has been shown that these exchange models can serve as optimization algorithms [19], so it would be interesting to use other techniques to study them [20,21].

Among the situations that can be addressed with these models, two are of interest in the present work. The first is that agents can incur debt when requesting credit from the bank. Credit and debt are introduced through a new variable, distinct from the money coming from income, which can take negative values indicating the acquired debt [5]. The economic model including credit and debt was first studied by Viaggiu et al. using the tools of statistical ensembles [22]. There, they adopt the Boltzmann-Gibbs distribution with the energy replaced by the total money, including income, credit, and debt. This method was extended to the study of markets and exchange economies on complex networks [23] and of stock price formation processes from the order book [24]. This last system has also been addressed with the Boltzmann equation in a non-equilibrium situation [25]. The resulting aggregated economic variables can be related to macroeconomics in the same way that the microscopic energy of gas particles gives rise to thermodynamics. It is worth noting that this idea of establishing a link between classical thermodynamics and economics is not new; it was first suggested by Samuelson [26].

Just as in thermodynamics, where different microscopic interactions lead to measurable macroscopic effects, in agent models it is possible to consider different transactions between agents and see their effect on the analogous thermodynamic quantities. One of the interactions that can be modeled in this way, and the one we are interested in, is agents' saving. In this model, the saving propensity is defined as the fraction λ of an agent's money that will not be spent during the transaction given by the corresponding exchange rule [27]. It is well known that, when the system reaches equilibrium, the income distribution of saving agents follows the so-called Gamma distribution. If the agents do not save, the Boltzmann-Gibbs distribution, which usually models the lower-middle class, is recovered. Indeed, Pareto's law can be recovered for random savings within these models [28]. Furthermore, considering that some agents have fixed savings while another sector has a random saving propensity leads to a distribution with an exponential zone and another zone modeled by a power law, as observed in real economies [29].

The system of agents with saving propensity has also been studied through other approaches. In [30], López-Ruíz et al. introduce an analytical geometric model where the agents' money obeys an additive constraint that defines an N-dimensional equiprobability surface. The corresponding analog of the Hamiltonian contains the saving propensity as an exponent of a geometrical variable that can be identified with the monetary variable. In this thermodynamic-like approach, the aggregate variables can tell us about the behavior of the economic system. For instance, the temperature of an economic system can be used as an index that indicates, on average, how the total money available is distributed among agents. For systems where agents can only pay with their given income, T turns out to be simply the average money per agent [5]. However, when debt is introduced or alternative means of payment are considered, this changes correspondingly [22].
On the other hand, when saving propensity is considered, the economic temperature is reduced by a factor that is a function of λ, in analogy with the equipartition theorem of energy [30].

In this work, we study the thermostatistical properties of a kinetic exchange model that describes a simple closed economy with income, savings, credit, and debt. We analytically calculate the canonical partition function with the ensemble formalism, introducing the agents' saving propensity through the geometric approach. From the partition function, we calculate the economic quantities that are analogous to the thermodynamic variables. In general, we observe an interplay between the fraction of money that agents can save and the debt that they can acquire. Specifically, we obtain the economic temperature and entropy. The economic temperature that we find can be read as the arithmetic mean of two terms: the first is the average money per agent reduced by the saving propensity, and the second is related to the maximum overdraft. This second term reduces to d if no savings are present; nevertheless, when λ ≠ 0, it turns out to be non-trivially coupled with the average money and the saving propensity. For a fixed temperature, debt induces an upper bound on the percentage that agents can save, such that, as the value of the debt increases, agents save less. For the entropy, we find that it increases with both temperature and savings. However, a minimum temperature appears below which the model is not valid, and it depends on the savings. Furthermore, these minimum temperatures and the entropy values change depending on the maximum debt.

This paper is structured as follows. In Section 2, we review the fundamental concepts of the geometric model for the income distribution of saving agents, and in Section 3, we review the ensemble theory for money, credit, and debt. In Section 4, we present the statistical ensemble for saving agents. We study the cases with and without debt and the corresponding limit in which the agents do not save money in their transactions, recovering the cases studied by Viaggiu et al. [22] and by Patriarca et al. [27], respectively. Finally, in the last section, we present a summary and discussion of the obtained results and possible routes for future work.

Microscopic and Geometric Models for a System of Saving Agents

Let us consider a simple, discrete, closed economy model in which N agents can exchange money in pairs. In the beginning, each agent has the same amount of money, M/N, where M is the total money in the system, so the initial distribution of capital is uniform. Then, at each time step, a randomly chosen pair of agents (i, j), with i ≠ j and i, j = 1, 2, ..., N, interacts. During the exchange, the capital of each agent changes following the exchange rule
$$u_i' = u_i + \Delta u, \qquad u_j' = u_j - \Delta u,$$
which is constrained by the conservation of the total money; here $u_i$ and $u_i'$ are the money of agent i before and after a transaction, respectively. The amount exchanged, $\Delta u$, is taken randomly and depends on the details of the transaction; for a constant saving propensity λ it is
$$\Delta u = (1 - \lambda)\left[\epsilon\, u_j - (1 - \epsilon)\, u_i\right],$$
where $\epsilon \in [0, 1]$ is a uniformly distributed random variable. In this case, during the transaction between agents i and j, the money that can be reallocated is reduced by the factor $1 - \lambda$, corresponding to the fraction of the initial capital each agent has decided to use in the exchange.
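Since the exchange rule is easiest to grasp in code, here is a minimal Monte Carlo sketch of the λ-saving kinetic exchange model. The rule implemented is the standard constant-saving rule of Ref. [27], consistent with the equations above; all system sizes and step counts are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(N=500, M=500.0, lam=0.5, steps=300_000):
    """Pairwise kinetic exchange with constant saving propensity lambda:
    each agent keeps a fraction lam of its money; the remainder of the
    pair's money is pooled and re-split by a uniform random eps."""
    u = np.full(N, M / N)                      # uniform initial distribution
    for _ in range(steps):
        i, j = rng.choice(N, size=2, replace=False)
        eps = rng.random()
        pool = (1.0 - lam) * (u[i] + u[j])     # money actually put in play
        u[i], u[j] = lam * u[i] + eps * pool, lam * u[j] + (1.0 - eps) * pool
    return u

u = simulate()
lam = 0.5
n = 1.0 + 3.0 * lam / (1.0 - lam)              # Gamma shape factor n(lambda)
print("mean %.3f  Var/mean^2 %.3f  (Gamma predicts 1/n = %.3f)"
      % (u.mean(), u.var() / u.mean() ** 2, 1.0 / n))
```

With λ = 0 the rule reduces to the Boltzmann-Gibbs case, and the variance ratio drifts toward 1.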
For this model, the agents' equilibrium distribution has been studied both analytically and numerically, and it depends on the value of the saving propensity parameter. Specifically, simulations point to a Gamma distribution for the money [27],
$$f(z) = \frac{1}{\Gamma(n)}\, z^{\,n-1}\, e^{-z},$$
where $z = n u / \langle u \rangle$, $\langle u \rangle$ is the average money, and
$$n(\lambda) = 1 + \frac{3\lambda}{1 - \lambda}$$
is the shape factor of the Gamma distribution, which, in this context, is a function of the saving propensity. For λ = 0, where there is no saving criterion, Equation (3) reduces to the Boltzmann-Gibbs distribution [5]. Distribution (3) for increasing values of the saving propensity can be seen in Figure 1.

[Figure 1: Distribution (3) for different values of the saving propensity λ. The green curve corresponds to λ = 0; the saving factor increases towards blue, so the darkest blue curve corresponds to λ = 0.9.]

The same distribution can be obtained from a geometric perspective [30]. Let $\{x_i\}_{i=1,\dots,N}$ be a set of positive variables satisfying the constraint
$$\sum_{i=1}^{N} x_i^{\,b} \le M,$$
with b a positive real constant and M the total money. The equality in (5) defines a symmetrical surface; it also defines a transaction between agents. Here, x is an internal geometrical variable related to the capital per agent through the probability density [30]. The probability f(x)dx of finding an agent with generic coordinate x is proportional to the volume $V_{N-1}\big((M - x^b)^{1/b}\big)$ of all the points contained in the (N − 1)-dimensional symmetrical region limited by the constraint $(M - x^b)$. A visualization of these surfaces for the case of three agents is shown in Figure 2. Thus, imposing the normalization condition, the distribution function is
$$f(x) = \frac{V_{N-1}\big((M - x^b)^{1/b}\big)}{\int_0^{M^{1/b}} V_{N-1}\big((M - x^b)^{1/b}\big)\, dx}.$$
By assuming that the volume is proportional to a power of the radius of the region, and for large N, it is possible to find the same Gamma distribution (3) for the desired probability, with $z = x^b / (b\,\langle x^b \rangle)$. Thus, by comparing both expressions, we find
$$b = \frac{1}{n(\lambda)} = \frac{1 - \lambda}{1 + 2\lambda}.$$
Note that in the particular case b = 1, corresponding to no savings, x is exactly the money of the agents. In this way, the saving propensity appears in the constraint through the power of the geometric variables. This model allows a Hamiltonian-like formulation that we will use when defining the ensemble of saving agents.

Statistical Ensembles for Agents with Credit and Debt

As noted by Viaggiu et al. [22], to build an ensemble it is only necessary to have a conservation law. In their case, they introduce the total money in the system as the conserved quantity, which depends on two variables: u, the money of each agent coming from income, and v, a monetary variable whose positive values correspond to the credit obtained by the agent and whose negative values correspond to the acquired debt. Therefore, in general, the conserved total money takes the form $M(u, v) = \sum_i m(u_i, v_i)$.

The canonical partition function is introduced as follows [22]:
$$Z = \int e^{-M(u,v)/T} \prod_{i=1}^{N} du_i\, dv_i,$$
where T is the economic temperature, related to the average money per agent but dependent on the agents' interactions. With this definition of the partition function, it is possible to find the thermodynamic variables in the same way as in equilibrium statistical mechanics. In particular, the entropy has the same interpretation, being a measure of the number of microscopic configurations of the system; therefore, the equilibrium state corresponds to the configuration that maximizes the entropy.
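Before turning to the three cases of interest, a quick numeric consistency check of the Section 2 relations can be written as follows; the forms of n(λ) and b = 1/n are those reconstructed above.

```python
from scipy.stats import gamma

for lam in (0.0, 0.3, 0.6, 0.9):
    n = 1.0 + 3.0 * lam / (1.0 - lam)   # shape factor n(lambda), Eq. (4)
    b = 1.0 / n                          # geometric exponent b = 1/n
    z = gamma(a=n)                       # f(z) = z^{n-1} e^{-z} / Gamma(n)
    # <z> = n is consistent with z = n*u/<u> having mean n.
    print("lam=%.1f  n=%.2f  b=%.3f  <z>=%.2f" % (lam, n, b, z.mean()))
```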
In order to interpret the meaning of the economic temperature, let us briefly review three cases.

Firstly, consider the simple case $M(u) = \sum_i u_i$, so the integral in the partition function (8) is simply
$$Z = \big(V_v\, T\big)^N,$$
where $V_v$ is the integral over the v-variable, which is constant, depends only on the domain of v, and can be interpreted as the maximum credit accessible in the system. From the usual thermodynamic relationship for the internal energy, namely
$$M = T^2\, \frac{\partial \ln Z}{\partial T},$$
we obtain the system's total capital, M = NT, and hence the economic temperature as the mean capital per agent, T = M/N.

As mentioned above, in these gas-type models debt can be modeled as negative money: it appears when an agent does not have enough capital and borrows a certain amount from a bank which, in this model, does not charge interest, so that the agent's balance may end up negative [5,22]. Let us now consider the money function for credit and debt as $m(u_i, v_i) = u_i + v_i$, with $u_i \in [0, \infty)$ and $v_i \in [-d, \infty)$. Thus, if $v_i$ is positive, the agent has available credit that can be used in the same way as $u_i$, while if $v_i$ is negative, the agent has previously borrowed money, up to a maximum debt d that, for simplicity, we take to be the same for all agents. (A more realistic situation would be for the bank to decide which limit to impose on each agent based on their income and credit score.) The partition function and thermodynamic quantities can be calculated with the corresponding limits, with the result
$$Z = \big(T^2 e^{d/T}\big)^N, \qquad M = N(2T - d), \qquad T = \frac{1}{2}\left(\frac{M}{N} + d\right).$$

Let us explore a couple of features. In this case, the temperature is not simply the average money per agent; rather, it is distributed among the different payment alternatives, namely the agent's money and the approved credit. The economic temperature T can be thought of as an index that relates the average money to an agent's debt capacity. For instance, if M/N < d then T/d < 1, which indicates that in such an economy the agents could not, on average, cover a debt d. Indeed, this ratio is bounded: when the average money is much lower than the amount of credit, T/d tends to 1/2. In the case M/N ≈ d, we have T/d ≈ 1, which is the limit of the debt that could be covered. Then, for an economy to have no problem with a credit amount d, we should require at least T/d > 1, which implies that, on average, M/N > d. Indeed, when we ask for M/N ≫ 2d, then T/d ≫ 1, and there will be enough liquidity to cover the debt in this regime.

On the other hand, recalling the standard interpretation of the partition function Z as the number of accessible microstates of the system at a given temperature, according to the expression (11) the number of accessible states grows exponentially with the ratio d/T. In fact, given the above, the more microscopic states that are available to the system, the more difficulty the agents will have in paying the debt.

We can go further, removing the credit from the agents and leaving them only with the debt d, which can be done by taking $v \in [-d, 0)$ with $d > 0$. In this case, the partition function and the thermodynamic quantities become
$$Z = \Big[T^2\big(e^{d/T} - 1\big)\Big]^N, \qquad M = N\left(2T - \frac{d\, e^{d/T}}{e^{d/T} - 1}\right).$$
Although apparently in the limit d/T ≫ 1 the expressions of the previous case are recovered, this case is not valid in the corresponding regime, in addition to having the aforementioned problems. However, if the debt is small, an expansion of the exponential can be performed for d/T ≪ 1, and in this limit the economic temperature goes as
$$T \approx \frac{M}{N} + \frac{d}{2},$$
which is consistent with M/N ≫ d/2. The temperature of the system increases a little as long as the debt is much less than twice the average capital.
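The credit-and-debt result T = (M/N + d)/2 quoted above can be verified symbolically in a few lines; this is a sketch in which the integration domains follow the text.

```python
import sympy as sp

u, v, T, d, N = sp.symbols("u v T d N", positive=True)

z_u = sp.integrate(sp.exp(-u / T), (u, 0, sp.oo))    # income: u in [0, oo)
z_v = sp.integrate(sp.exp(-v / T), (v, -d, sp.oo))   # credit/debt: v >= -d
lnZ = N * sp.log(z_u * z_v)

M = sp.simplify(T**2 * sp.diff(lnZ, T))              # M = T^2 d(lnZ)/dT
print(M)                                              # -> N*(2*T - d)
print(sp.solve(sp.Eq(M, sp.Symbol("Mtot")), T))      # -> T = (Mtot/N + d)/2
```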
Statistical Ensembles for Money, Credit, and Debt with Saving Propensity

In this section, we study the effect of the saving propensity on the thermodynamic quantities, particularly the economic temperature, the entropy, and the partition function. Given the Hamiltonian-like formulation of Section 2, we can construct different money functions and calculate the corresponding thermodynamics and partition functions.

Case 1: Money and Savings

Let us consider directly the money function (5) and calculate the partition function,
$$Z = \int e^{-\sum_i x_i^b / T} \prod_{i=1}^{N} dx_i = \left(\int_0^\infty e^{-x^b/T}\, dx\right)^{N},$$
where b is given by the inverse of Equation (4). This integral can easily be related to the Gamma function through a change of variable, such that
$$Z = \Big[\Gamma\big(1 + b^{-1}\big)\, T^{1/b}\Big]^{N}.$$
Using the definition (10), we can determine the economic temperature,
$$T = \frac{b\, M}{N}.$$
The expression (17) resembles the equipartition theorem of energy, which states that each microscopic degree of freedom contributes a term proportional to T to the total energy. In this case, we can say that each agent contributes to the total money an amount proportional to the economic temperature, with a coefficient that is a function of the saving propensity through Equation (4), such that 0 ≤ b ≤ 1. The ratio between the temperature and the money per agent decreases as the fraction of money saved increases; see Figure 3. For the ideal gas, the equivalent ratio of T to the energy per particle is a constant equal to 2/3, which depends on the functional form of the microscopic kinetic energy and the dimension. Expression (17) was previously obtained by Patriarca et al. [27], where $b^{-1}$ is interpreted as an effective dimension.

It is also possible to obtain the entropy of the system, as typically done for thermodynamic systems, from TS = M + T ln Z, which gives
$$S = \frac{N}{b} + N \ln \Gamma\big(1 + b^{-1}\big) + \frac{N}{b} \ln T,$$
where no emphasis is placed on the indistinguishability of the agents. The expression (18) is plotted in Figure 4, where, for each fixed value of λ, an increasing logarithmic behavior in T can be seen, as occurs in thermodynamics with the well-known Sackur-Tetrode formula. However, in this case we see that the entropy begins to grow from a certain minimum temperature that changes according to the agents' saving capacity: the more they save, the lower the minimum temperature. Indeed, we can also observe that the more the agents save, the more rapidly the entropy increases.
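A numeric sanity check of the Case 1 results (single-agent integral equal to Γ(1 + 1/b) T^{1/b}, and M = NT/b) can be run with illustrative parameter values; everything here is an arbitrary example, not fitted to any data.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

b, T, N = 0.5, 2.0, 100.0   # illustrative values; b = (1 - lam)/(1 + 2*lam)

z1, _ = quad(lambda x: np.exp(-x**b / T), 0.0, np.inf)
print(z1, Gamma(1.0 + 1.0 / b) * T ** (1.0 / b))   # the two should agree

lnZ = lambda t: N * np.log(quad(lambda x: np.exp(-x**b / t), 0, np.inf)[0])
h = 1e-5
M = T**2 * (lnZ(T + h) - lnZ(T - h)) / (2 * h)     # M = T^2 d(lnZ)/dT
print(M, N * T / b)                                 # should give M = N*T/b
```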
Case 2: Savings, Money, Credit, and Debt

To introduce credit and debt, we must add to the money constraint (5) an additional term with monetary units, in a similar way as was done in Equation (7). Let us call $y^b$ the corresponding geometric variable, whose interpretation is similar to that of Equation (6) and which must satisfy $y \in [-d^{1/b}, \infty)$. Here, we can again identify d as the maximum debt. Therefore, for the money function $m(x, y) = \sum_i (x_i^b + y_i^b)$, we can calculate the partition function as
$$Z = \left(\int_0^\infty e^{-x^b/T}\, dx\right)^{N} \left(\int_{-d^{1/b}}^\infty e^{-y^b/T}\, dy\right)^{N}.$$
The first integral is, again, a gamma function, while the second can be rewritten in terms of the upper incomplete gamma function $\Gamma(b^{-1}, z\, d\, T^{-1})$, where $z = e^{i\pi b}$. This last term appears due to the negative lower integration limit introduced by the debt. However, this function can be analytically continued to complex arguments, maintaining many of the properties of its real-valued counterpart. In particular, it satisfies [31]
$$\Gamma(a, rz) = z^{a}\, \Gamma(a, r) + \big(1 - z^{a}\big)\, \Gamma(a).$$
With this property, together with the relation $\gamma(a, x) + \Gamma(a, x) = \Gamma(a)$, where $\gamma(a, x)$ is the lower incomplete gamma function [31], the partition function can be written in closed form in terms of ordinary and lower incomplete gamma functions. By taking the derivative of the partition function, we find the total money of the system, in which the last term corresponds to approximating the incomplete gamma function by the leading term of (22). The entropy, Equation (25), follows in the same way; in the case b = 1 it reduces to S ≈ 2N ln T + 2N, obtained in [22]. We show in Figure 5 three graphs of S for different values of d. It can be seen that the behavior is qualitatively similar to that of Figure 4. However, we note that, as in the d = 0 case, the entropy grows faster as b decreases, i.e., as saving increases. Furthermore, as d increases, the behavior of the minimum T changes in each case.

To interpret the economic temperature in this case, consider Equation (24) and solve for T, which yields Equation (26). Note that T still appears on the right-hand side of that expression. However, as a first approximation, we can take $T_0 \sim b M / 2N$ and substitute it into Equation (26) to obtain the iterated solution, Equation (27). In this way, we can keep the interpretation of T as the arithmetic mean of two payment methods: the first is given by the average money per agent reduced by a factor dependent on the saving propensity, while the second term corresponds to a non-trivial function of the debt and the average money, which reduces to d in the case of no savings.

To analyze the behavior of the temperature given by Equation (27), it is convenient to study the ratio T/d, since the approximation is valid when this ratio is much greater than one. In Figure 6, the ratio T/d is plotted as a function of the saving propensity for the particular case in which the mean money per agent is equal to 1, in monetary units. For the case in which the debt d is small compared to one, saving decreases the ratio T/d as λ increases. When T = d, the approximation is no longer valid; this defines a $\lambda_{\rm max}$, the maximum percentage that an agent can save. As the value of the debt increases, $\lambda_{\rm max}$ decreases, i.e., agents save less. When d = 1, the approximation is not valid for any λ. It is interesting to see that, as d increases, the ratio T/d increases for high values of λ. This behavior has no meaning in terms of the model, although mathematically it is consistent with the applicability regime. To summarize, as d increases, the agents' saving capacity decreases.
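Because Equation (26) is implicit in T, the iterated solution can be sketched generically with a fixed-point loop seeded at T₀ = bM/2N, as in the text. The coupling function `g` below is a toy stand-in (the exact form of Eq. (26) is not reproduced here); only the arithmetic-mean structure and the g → d limit are taken from the discussion above.

```python
import math

def solve_T(f, T0, tol=1e-12, it_max=500):
    """Fixed-point iteration T <- f(T) for an implicit temperature relation."""
    T = T0
    for _ in range(it_max):
        T_new = f(T)
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    raise RuntimeError("fixed point did not converge")

M_over_N, b, d = 1.0, 0.5, 0.2          # illustrative values
# Toy stand-in for Eq. (26): arithmetic mean of the savings-reduced income
# term and a debt term that tends to d (assumption, for illustration only).
g = lambda T: d * math.tanh(T / d)
f = lambda T: 0.5 * (b * M_over_N + g(T))

print(solve_T(f, T0=0.5 * b * M_over_N))
```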
Conclusions

In this work, we applied the statistical ensemble formalism to a geometric model of agents to study the effect of saving propensity in a system with money, credit, and debt. This formalism allowed us to obtain analogous thermodynamic variables. We studied the system's economic temperature, an index that relates the average money to the agents' debt capacity; the exact expression is given by the solution of Equation (21). In the case T ≫ d, which corresponds to a system where agents can pay their debt, it was possible to find an approximation for the economic temperature, Equation (27): the arithmetic mean of the different payment methods, namely cash from income and the requested credit with a specific debt limit. The saving propensity modifies each term. On the one hand, the term associated with the average money per agent follows an analog of the equipartition theorem and is reduced by the factor b(λ); on the other, the term associated with the debt is coupled with the average money and the saving propensity in a non-trivial way, but reduces to d when there is no saving. We show the behavior of Equation (27) in Figure 6, where we see that the ratio between T and d decreases as the saving propensity grows, up to a maximum saving value that depends on d.

The entropy of the saving-agent system in the presence of debt was also calculated. In general, we can say that the entropy increases as savings increase, starting from a minimum temperature, dependent on b, above which the entropy is greater than zero. In the case of having debt, the entropy increases even more, and the variations of the minimum temperature with b change according to the different values of d. Although these analogous thermodynamic variables are not in general use in economics, the economic temperature or entropy could certainly serve as indicators of how agents' decisions affect the collectivity and, therefore, suggest how economic systems behave.

As mentioned in the introduction, there are some criticisms of these models; for example, their ideal hypotheses limit their scope. However, the models have the advantage that their mathematical formulation is easily generalizable. Indeed, this formalism of statistical ensembles can be extended to more complicated or realistic cases, or to different kinds of microscopic interactions. For example, in a more realistic case in which the bank establishes the credit limit individually for each agent, the integrals in Equation (19) for calculating the partition function give a product of incomplete gamma functions, each with different arguments, and the calculation becomes more involved. The difference between having an individual and a collective debt limit has recently been studied in the context of exchange agent models on a connected graph that simulates a social network, finding different distributions in each case [34].

Interactions between agents can be included directly in the exchange rules. Besides saving, risk or various social protection factors can be introduced [35]. With these interactions, one could model increasingly realistic systems and try to explain phenomena such as economic inequality based on the agents' decisions and the policies that influence them. It would be interesting to study some economic processes defined with these generalized rules within the ensemble formalism, to determine how their corresponding analogous thermodynamic variables change, and eventually to compare them with real economic systems; for instance, relating them to indices that characterize inequality, such as the Gini or Kolkata indices [9–11]. These kinds of relations will be addressed in future work.
Quark mass dependence of $\pi\pi$ scattering in isospin 0, 1, and 2 from lattice QCD

Using lattice QCD we extract $\pi\pi$ scattering amplitudes with isospin 0, 1, and 2 in low partial waves at two values of the light quark mass, corresponding to $m_\pi \sim 283$ and $330$ MeV. We confirm expectations of weak repulsion in isospin-2 and the presence of a narrow $\rho$ resonance in isospin-1, and study the pion mass dependence of these channels. In isospin-0 we find that the two pion masses considered straddle the point at which the $\sigma$ transitions from being a stable bound state to being either a virtual bound state or a subthreshold resonance. We discuss the ability of lattice calculations like these to precisely determine the $\sigma$ pole location when it is a resonance, and propose an approach in which the full complement of amplitudes computed in this paper can be used simultaneously to provide more constraint.

I. INTRODUCTION

Hadron-hadron scattering processes have long been used as a tool to explore strong-interaction physics. The amplitudes which describe these processes as a function of energy and angle can be expanded in partial waves, and examination of these yields information about the resonance content of quantum chromodynamics. Scattering of the lightest hadron, the pion, off the pion cloud around a proton or nucleus offers the simplest such process, being unburdened by complications of spin.

Experimentally, ππ scattering in the lowest partial waves shows very different behavior across the three possible isospins. Isospin-2 is found to be weak and repulsive, with the lack of resonances being an early motivator of the $q\bar q$ quark model. Isospin-1 houses the narrow ρ resonance, appearing as a rapid rise in the phase shift of the P-wave amplitude, which is otherwise featureless at low energy. Isospin-0 is found to be attractive, but shows only a slow rise in phase shift with energy across the elastic scattering region. Typically, this rise is associated with a broad scalar resonance, the σ. A state with these quantum numbers has historically been included in models of the nucleon-nucleon potential to describe intermediate-range effects. Precise determination of the pole location of the σ remained a problem until recently, with a range of amplitude parameterizations applied to experimental data generating a spread of pole locations [1]. This problem was solved by applying dispersion relations, which implement analyticity and crossing symmetry in a consistent way, providing additional constraint beyond that given by the isospin-0 scattering data on the real energy axis alone [2–7].

The nature of the σ within QCD is somewhat unclear [8–12]. It is often partnered with the f₀(980), a₀(980), and κ states into a 'scalar nonet', despite the very different appearances of these resonances (narrow states at the $K\bar K$ threshold versus very broad states away from any threshold). An association of this type for the lightest vector resonances, ρ, ω, φ, K*, is quite natural, given their common properties, and is often used as motivation for a $q\bar q$ quark-model assignment of these states, with their modest differences being due to the mild breaking of an approximate SU(3) flavor symmetry that leads to states with strange content being heavier. The scalar nonet has no such simple interpretation [13–16].
Recently, ππ scattering in QCD has been considered using the first-principles lattice approach. The discrete spectrum of states with the quantum numbers of ππ in the finite periodic spatial volume of the lattice can be related to ππ scattering amplitudes through the Lüscher relation [17,18]. Computations have taken place at several values of the light-quark mass for all three isospins. By parameterizing the elastic scattering amplitudes, the resonance pole content can be investigated through analytic continuation into the complex energy plane.

Isospin-2 is found to be weak and repulsive, as in experiment, and the variation of the scattering length with changing quark mass has been explored in the context of chiral perturbation theory [35,40]. Isospin-1 is found to feature a ρ-like resonance whose mass increases and width decreases with increasing quark mass, until it becomes stable at a pion mass near 400 MeV. Isospin-0 shows a much more dramatic evolution with changing quark mass: at $m_\pi \sim 391$ MeV, a clear stable bound-state σ is observed, while at $m_\pi \sim 239$ MeV, a slow variation of the phase shift appears to indicate a broad σ resonance, albeit with a large degree of amplitude-parameterization dependence in the pole position [53,57].

The possibility that the σ could undergo a transition from being a broad resonance to being bound as the light quark mass is increased was previously explored in a unitarized version of chiral perturbation theory [55,58]. By making assumptions about the quark mass dependence of certain low-energy coefficients, it was found that, over a relatively small variation in pion mass, the σ undergoes a rapid transition from being a bound state, to being a virtual bound state (a pole on the real energy axis below threshold on the unphysical Riemann sheet), to being a broad resonance. These results provide a particular manifestation of the general framework for scalar-resonance pole trajectories discussed in Ref. [59].

In this paper, we report the results of a calculation determining ππ scattering amplitudes in all three isospins at two previously unconsidered light quark masses, corresponding to $m_\pi \sim 283$ and 330 MeV. These values lie between the points at which the σ has been observed in lattice calculations to be bound and where it appears as a broad resonance, so we aim to close in on the region where the transition takes place.

The scatter of σ pole positions under reasonable variation of the amplitude parameterization in the previous lattice calculation at $m_\pi \sim 239$ MeV indicated that the same issue present in analyses of experimental scattering data may plague attempts to pin down with precision the pole location in these first-principles QCD efforts. A possible mechanism to overcome this might be to apply dispersion relations to the results of lattice QCD computations. Such an approach would require as input computed ππ scattering amplitudes in all three isospins in low partial waves, which is what we provide in this paper.
We will show that the isospin-2 amplitude evolves smoothly with changing light quark mass, and we will explore the role of the 'Adler zero' predicted by the broken chiral symmetry of QCD. The evolution of the ρ resonance which dominates the isospin-1 amplitude is presented, with a confirmation of the near independence of its coupling to ππ under variations of the light quark mass. The isospin-0 S-wave amplitude is found to undergo a dramatic transition between $m_\pi \sim 330$ MeV and 283 MeV, from a behavior compatible with an only-just-bound σ at the heavier mass to a mild energy dependence compatible with either a virtual bound state or a subthreshold resonance at the lighter mass. We will conclude that, to make more precise statements about the σ in cases where it is unbound, we require additional constraints of the type offered by dispersion relations$^1$.

II. LATTICES AND OPERATOR CONSTRUCTION

The calculations described in this manuscript make use of anisotropic Clover lattices [62,63] whose parameters are presented in Table I. These three-flavor lattices, which have $a_s \sim 0.12$ fm, degenerate light quarks, and a strange quark mass approximately tuned to the physical strange quark mass, were previously used in calculations of the πK system [64,65]$^2$.

In order to determine ππ scattering amplitudes, we require the spectra of states with the appropriate quantum numbers in the finite spatial volume of the lattice. These spectra are extracted using variational analysis of matrices of two-point correlation functions computed using a basis of operators at source and sink. A basis that has proven successful in prior calculations [36,45,53,57,66,67] makes use of 'single-hadron' operators (in isospin-0 and isospin-1) of fermion-bilinear type, $\bar\psi\, \Gamma\, \overleftrightarrow{D} \cdots \overleftrightarrow{D}\, \psi$, supplemented by operators resembling a pair of mesons having definite total momentum and definite magnitude of relative momentum,
$$\big(\pi\pi\big)^{[\vec P, \Lambda]} \sim \sum_{\vec p_1 + \vec p_2 = \vec P} \mathcal{C}\big(\vec P, \Lambda;\, \vec p_1, \vec p_2\big)\; \pi(\vec p_1)\, \pi(\vec p_2). \quad (1)$$
The operators appearing in the product on the right-hand side are selected to be those linear combinations of 'single-hadron' operators that optimally overlap with the pion states in the variational analysis of correlation functions with the quantum numbers of a single pion. The 'lattice Clebsch-Gordan coefficients' $\mathcal{C}$ in this equation ensure that the operator transforms in a definite irreducible representation, Λ, of the relevant lattice symmetry group. Systems of definite angular momentum subduce into these 'irreps' according to Table II of Ref. [36] for I = 0, 2, and according to Table III of Ref. [45] for I = 1. In this paper, we consider irreps with a range of total momenta $\vec P$.

Use of the distillation framework [68] allows for efficient computation of a large number of correlation functions and, in particular, allows diagrams featuring quark-antiquark annihilation (of which there are many in the isospin-0 case) to be evaluated without further approximation [53,57].

While our primary focus is on ππ elastic scattering, in order to have a reliable evaluation of the finite-volume spectra in the energy region where the $K\bar K$ and ηη thresholds open, we have also included, where relevant, $K\bar K$-like and ηη-like operators in our basis. We are guided as to which relative-momentum combinations to include in the basis by the non-interacting energy of the operator,
$$E_{\rm n.i.} = \sqrt{m_1^2 + |\vec p_1|^2} + \sqrt{m_2^2 + |\vec p_2|^2},$$
where the m are the masses of the relevant mesons (π, K, η). All operator constructions are included which have non-interacting energy in the energy region we intend to study.
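As an illustration of how the non-interacting energies guiding the operator-basis choice are computed, here is a short sketch; the pion mass and box size are invented lattice-unit values, not the paper's.

```python
import numpy as np

def noninteracting_energy(m1, m2, n1, n2, L):
    """E_ni = sqrt(m1^2 + |p1|^2) + sqrt(m2^2 + |p2|^2), with momenta
    quantised as p = (2*pi/L) * n for integer triplets n on an L^3 torus."""
    unit = 2.0 * np.pi / L
    p1 = unit * np.linalg.norm(n1)
    p2 = unit * np.linalg.norm(n2)
    return np.hypot(m1, p1) + np.hypot(m2, p2)

m_pi, L = 0.069, 24.0   # illustrative pion mass and box size in lattice units
for n1, n2 in [((0, 0, 0), (0, 0, 0)),
               ((1, 0, 0), (-1, 0, 0)),
               ((1, 1, 0), (-1, -1, 0))]:
    print(n1, n2, "E_ni = %.4f" % noninteracting_energy(m_pi, m_pi, n1, n2, L))
```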
The matrices of correlation functions computed in the large basis indicated above are analyzed using a variational approach based upon solving a generalized eigenvalue problem. Our primary interest is in the spectrum, which is obtained by fitting the exponential time dependence of the extracted eigenvalues. In order to account for the impact of the choice of fitting window and the number of exponentials, we implement a version of the "model averaging" technique proposed in Ref. [69], as described in Ref. [65]. In addition, the sensitivity of the extracted energy levels to the choice of the reference timeslice $t_0$ in the generalized eigenvalue problem, and to the precise choice of operators in the basis, is explored and reflected in the quoted energy values and uncertainties. When dimensionful quantities are required, the lattice scale is set using the Ω baryon mass computed on the relevant lattice, $a_t = \big(a_t m_\Omega\big)/m_\Omega^{\rm phys}$, where the physical mass of the Ω baryon is $m_\Omega^{\rm phys} = 1672.45$ MeV. The quoted pion mass in MeV for each lattice follows from this prescription.

III. EXTRACTING SCATTERING AMPLITUDES FROM FINITE-VOLUME SPECTRA

The relationship between two-body scattering amplitudes and the discrete spectrum of states in a finite periodic volume is well established [17,70–79]. For an irrep Λ of total momentum $\vec P$, the discrete spectrum in an L × L × L box corresponds to the solutions of
$$\det\Big[\mathbf{1} + i\rho(E)\,\mathbf{t}(E)\big(\mathbf{1} + i\,\mathcal{M}(E, L)\big)\Big] = 0, \quad (2)$$
where $\mathcal{M}(E, L)$ is a matrix of known kinematic functions which characterize the cubic spatial volume, while $\mathbf{t}(E)$ contains the partial-wave scattering amplitudes. In general, these are matrices in the spaces of coupled channels and partial-wave angular momentum ℓ, but for elastic scattering they reduce to dense and diagonal matrices in ℓ, respectively.

Our approach follows from parameterizing the energy dependence of the partial-wave amplitudes $t^I_\ell(s)$ for the lowest values of ℓ which subduce into the irrep $\vec P$, Λ. In practice, for I = 1 only ℓ = 1 is relevant over the elastic region, while for I = 0, 2, both ℓ = 0 and ℓ = 2 are considered. For a given set of parameter values in these parameterizations, the solution of Eq. (2) yields a discrete spectrum that can be compared to the lattice-QCD-computed spectrum via a correlated χ². We form this χ² by considering energy levels in all irreps which constrain the partial waves for each choice of isospin, and take as the amplitude results those which minimize the χ². Explicit expressions for the subduction of partial waves into irreps of the relevant symmetry group are presented in Refs. [80] and [81], and further discussion of the approach, together with implementation details, can be found in Refs. [18,57,80,82].

The elastic-scattering partial-wave amplitudes appearing in Eq. (2) can be parameterized by compact forms, allowing for a description of the entire lattice-QCD-computed spectrum in terms of a few free parameters. The resulting amplitudes can be analytically continued into the complex energy plane to search for pole singularities. A range of parameterization forms is typically used, with the spread of amplitude behaviors and pole locations providing an estimate of the systematic error. Each relevant partial-wave amplitude $t^I_\ell(s)$ is parameterized in a way that respects elastic unitarity exactly, but may not necessarily respect other fundamental constraints.
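The spectrum-χ² workflow just described can be sketched as follows. Here `predict_spectrum` is a placeholder for the expensive step of solving the quantization condition Eq. (2) for given amplitude parameters, which is not implemented; the toy model and numbers are inventions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def correlated_chi2(E_lat, cov, E_model):
    """Correlated chi^2 between lattice energies and model energies."""
    r = E_lat - E_model
    return float(r @ np.linalg.solve(cov, r))

def fit_amplitude(E_lat, cov, predict_spectrum, p0):
    """Minimise the spectrum chi^2 over amplitude parameters.
    predict_spectrum(params) must return the finite-volume energies
    obtained by solving det[1 + i rho t (1 + i M)] = 0 (not shown here)."""
    res = minimize(lambda p: correlated_chi2(E_lat, cov, predict_spectrum(p)),
                   p0, method="Nelder-Mead")
    return res.x, res.fun / (len(E_lat) - len(p0))   # params, chi^2/N_dof

# Toy usage with a fake one-parameter 'model' and uncorrelated 5% errors:
E_lat = np.array([0.135, 0.142, 0.151])
cov = np.diag((0.05 * E_lat) ** 2)
toy_model = lambda p: E_lat * (1.0 + 0.01 * (p[0] - 1.0))   # placeholder
print(fit_amplitude(E_lat, cov, toy_model, p0=np.array([0.9])))
```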
In terms of the phase shift $\delta^I_\ell(s)$, elastic amplitudes can be written as
$$t^I_\ell(s) = \frac{1}{\rho(s)}\, e^{i\delta^I_\ell(s)}\, \sin\delta^I_\ell(s),$$
where $\rho(s) = 2k/\sqrt{s}$ is the ππ phase space, with $k = \frac{1}{2}\sqrt{s - 4m_\pi^2}$ the scattering momentum. In those cases where a single partial wave dominates Eq. (2), or where the amplitudes for higher partial waves are fixed, each discrete finite-volume energy can be used to obtain a discrete value of the phase shift at that energy. This approach is used to produce the discrete phase-shift 'data points' that appear in plots later in this document. The amplitude curves are not obtained by fitting these 'data', but rather using the spectrum-χ² approach described above.

At low scattering energies, the slow variation of the S-wave and D-wave can be well described by a low-order expansion in the square of the scattering momentum, typically called the effective range expansion,
$$\frac{k^{2\ell+1}}{F^I_\ell(s)}\, \cot\delta^I_\ell(s) = \frac{1}{a^I_\ell} + \frac{1}{2}\, r^I_\ell\, k^2,$$
where the conventional choice has $F^I_\ell(s) = 1$, with $a^I_\ell$ interpreted as the scattering length and $r^I_\ell$ as the effective range. Additional desired features can be built into the amplitude with other choices of $F^I_\ell(s)$, such as ensuring a zero of the amplitude, like those predicted by broken chiral symmetry and known as 'Adler zeroes'.

An alternative expansion follows from defining a quantity that must be real analytic between the elastic threshold and the inelastic threshold. One can engineer the presence of an effective inelastic threshold ($s_0$), and the opening of the left-hand cut at s = 0 (required by crossing symmetry), by using a conformal mapping variable [83,84],
$$\omega(s) = \frac{\sqrt{s} - \alpha\sqrt{s_0 - s}}{\sqrt{s} + \alpha\sqrt{s_0 - s}}.$$
In this expression, α and $s_0$ are fixed parameters that determine which energy region is mapped into the unit disk of ω$^3$. The convergence of the conformal expansion is expected to be rapid, and, again, one may build additional properties into the amplitude by suitable choices of $F^I_\ell(s)$, for example to force a resonance. As suggested in Ref. [85], spurious singularities introduced below threshold by this conformal expansion can be removed by adding a suitable function.

Partial waves that contain a narrow resonance and no other features, like the experimental I = 1 P-wave, are usually well described over a limited energy region by a Breit-Wigner form, which effectively parameterizes a single nearby pole,
$$t^I_\ell(s) = \frac{1}{\rho(s)}\, \frac{\sqrt{s}\,\Gamma(s)}{m_R^2 - s - i\sqrt{s}\,\Gamma(s)}, \quad (9)$$
with the energy-dependent width $\Gamma(s) = \frac{g^2}{6\pi}\frac{k^3}{s}$. The width form can be elaborated to damp the threshold behavior at high energies (with so-called barrier factors), at the cost of including at least one extra parameter and possibly spurious singularities.

A rather flexible parameterization scheme, which generalizes nicely to the case of coupled-channel amplitudes, uses the K-matrix, defined by [86]
$$\big[t^I_\ell(s)\big]^{-1} = \big[K(s)\big]^{-1} - i\rho(s),$$
where a common parameterization choice is a sum of poles plus a finite-order polynomial,
$$K(s) = \sum_{p} \frac{g_p^2}{m_p^2 - s} + \sum_{n} \gamma_n\, s^{n}.$$

$^3$ In practice, we set $s_0 = 0.09\, a_t^{-2}$ and α = 0.8 for the I = 2 waves and the I = 0 D-wave, as they do not exhibit any inelastic behavior up to high energies. We use $s_0 = 0.05\, a_t^{-2}$ and α = 1 for the I = 1 P-wave. For the I = 0 S-wave, where we expect the inelasticity to become significant at a lower energy, we set α = 1 and consider two values, $s_0 = 0.032\, a_t^{-2}$ and $0.04\, a_t^{-2}$.

This form can be modified to ensure an Adler zero by taking $K(s) \to (s - s_A)\, K(s)$, and the unphysical singularity in the phase space at s = 0 can be removed from the physical sheet by replacing $-i\rho(s)$ with the Chew-Mandelstam function $I(s)$, which we subtract at threshold, and which has ${\rm Im}\, I(s) = -\rho(s)$ above threshold, as required by unitarity.
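Here is a sketch of the Breit-Wigner P-wave phase shift. The ρ-like width Γ(s) = g²k³/(6πs) is the standard convention assumed above, and the mass, coupling, and pion-mass values are illustrative lattice-unit numbers, not the fitted ones.

```python
import numpy as np

def bw_phase_shift_deg(s, m_R, g, m_pi):
    """Elastic P-wave Breit-Wigner phase shift:
    tan(delta) = sqrt(s)*Gamma(s) / (m_R^2 - s),
    with Gamma(s) = g^2 k^3 / (6 pi s) and k = sqrt(s/4 - m_pi^2)."""
    k = np.sqrt(s / 4.0 - m_pi ** 2)
    gam = g ** 2 * k ** 3 / (6.0 * np.pi * s)
    return np.degrees(np.arctan2(np.sqrt(s) * gam, m_R ** 2 - s))

m_pi, m_R, g = 0.056, 0.141, 6.0      # illustrative lattice-unit values
for E in (0.125, 0.138, 0.141, 0.145, 0.160):
    print("E = %.3f  delta = %6.1f deg" % (E, bw_phase_shift_deg(E**2, m_R, g, m_pi)))
```

The phase passes through 90 degrees at s = m_R², the rapid rise characteristic of a narrow resonance.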
For each partial wave, we will consider a large number of parameterizations based on the forms above, reporting all those found capable of describing the computed finite-volume spectra, as established by the spectrum-χ² value.

For every amplitude parameterization, once the parameters are constrained by describing the lattice QCD spectra, we search the second Riemann sheet for poles that we interpret as being due to resonances. The pole location provides a model-independent definition of the mass and width of a resonance, $\sqrt{s_R} = m_R - \frac{i}{2}\Gamma_R$, and the corresponding residue in $t^I_\ell(s) \sim c^2/(s_R - s)$ gives the coupling of the resonance to ππ. An alternative definition of the ππ coupling, presented in Ref. [87], is related to ours by a known kinematic relation, Eq. (13).

We will find, as has been observed in previous lattice calculations [45,47,57,64,88,89] and in amplitude analyses of experimental data [90–93], that when a narrow resonance is present, the pole position and coupling typically show very little scatter over a range of sufficiently flexible parameterizations; but when a resonance pole lies far into the complex plane, different amplitudes which behave similarly in a limited energy region on the real energy axis (and which describe the finite-volume spectra equally well in the lattice case) can lead to quite different pole locations [94–96]. We will return to this point later, when discussing the σ pole appearing in the isospin-0 S-wave.

In the following, we plot in black the finite-volume energy levels included in our fits, to distinguish them from other levels that are not fitted, plotted in gray.

IV. ππ → ππ I = 2

As in experiment, previous determinations in lattice QCD at various pion masses (e.g. [27,32,34–37,97]) have found ππ scattering in isospin-2 to be weak and repulsive. Lattice calculations of this channel typically use a basis of operators resembling a pair of pions only, since $q\bar q$ operators cannot access I = 2. The lowest inelastic channel is ππππ, but expectations from experiment are that the coupling of this channel to ππ turns on very slowly [98–100].

A. I = 2 finite-volume spectra

Using bases of ππ operators as described in Section II, matrices of correlation functions are computed, and variational analysis leads to the spectra shown in Fig. 1. Departures of the discrete energy levels from the values for non-interacting ππ pairs can be observed, and they are much larger in those irreps which feature a subduction of the S-wave.

The first inelastic threshold here is ππππ, indicated in Fig. 1 by the horizontal dashed line. We have not included any ππππ-like operator constructions in our basis, so the spectrum presented above the inelastic threshold will only be a correct subset of the complete true spectrum if the ππ and ππππ channels are decoupled. In experiment, this is indeed the case until quite high energies, well above those considered here [98–100].

The determined energies have fractional errors typically at the 0.5% level, where this includes an estimate of the systematic error coming from varying the fitting details and from whether a "weighting-shifting" step (see Ref. [36]) is applied to cancel mild finite-time-extent effects. These systematic variations have an impact below the statistical error on most points. Across all irreps, we extract 50 energy levels for $m_\pi \sim 330$ MeV and 98 for $m_\pi \sim 283$ MeV, of which 19 and 41, respectively, lie at or below the ππππ threshold.
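To illustrate the pole search described above, the sketch below continues a Breit-Wigner denominator into the complex s-plane and finds its zero with a two-dimensional root finder. The width form and the parameter values are assumptions carried over from the previous sketch; for a Breit-Wigner, the zero of this continued denominator plays the role of the second-sheet resonance pole.

```python
import numpy as np
from scipy.optimize import fsolve

def bw_denominator(s, m_R, g, m_pi):
    """D(s) = m_R^2 - s - i sqrt(s) Gamma(s), continued to complex s."""
    k = np.sqrt(s / 4.0 - m_pi ** 2 + 0j)
    gam = g ** 2 * k ** 3 / (6.0 * np.pi * s)
    return m_R ** 2 - s - 1j * np.sqrt(s + 0j) * gam

def find_pole(m_R, g, m_pi, s_guess):
    def residual(v):
        d = bw_denominator(v[0] + 1j * v[1], m_R, g, m_pi)
        return [d.real, d.imag]
    sr, si = fsolve(residual, [s_guess.real, s_guess.imag])
    return sr + 1j * si

m_pi, m_R, g = 0.056, 0.141, 6.0          # illustrative lattice-unit values
s_pole = find_pole(m_R, g, m_pi, s_guess=m_R ** 2 - 0.0005j)
sqrt_s = np.sqrt(s_pole)
print("sqrt(s_R) = %.4f %+.4f i  ->  m_R = %.4f, Gamma_R = %.4f"
      % (sqrt_s.real, sqrt_s.imag, sqrt_s.real, -2.0 * sqrt_s.imag))
```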
B. ππ → ππ I = 2 scattering

The spectra presented in the previous section can be used to constrain S-wave and D-wave elastic scattering amplitudes$^4$ using the approach described in Section III. Examining energy levels in those irreps whose lowest subduced partial wave is ℓ = 2, we observe extremely small energy shifts from the non-interacting curves, suggesting a very weak interaction.

Descriptions in terms of parameterizations featuring only a single free parameter, such as a scattering length, lead to good descriptions of the spectra and, as can be seen in Fig. 2, clearly describe a very weak D-wave interaction. Adding further parameter freedom does not lead to an improved description of the spectra.

The spectra shown in the lower row of Fig. 1 are for those irreps in which the S-wave is present. These are included, together with the spectra in the top row, in a χ² to obtain descriptions of the S- and D-wave amplitudes simultaneously. The S-wave amplitudes for several sample parameterization choices are shown in Fig. 3. The principal difference between these various descriptions of the finite-volume spectrum, most of which have χ²/N_dof ∼ 1, is observed at threshold, where the slope of the phase-shift curve, and hence the scattering length, appears to be poorly constrained. This is seen more clearly in the plots of $k\cot\delta^2_0$, where for both pion masses a spread of behaviors at threshold, well outside the statistical uncertainty, is observed. The behaviors fall into two broad categories: amplitudes in which $k\cot\delta^2_0$ is fairly flat at threshold correspond to those which have not been engineered to have an Adler zero below threshold, unlike those which fall at threshold, where an Adler zero was included at the tree-level χPT location, $s_A = 2m_\pi^2$.

Given that the pion masses used in this study are further from the chiral limit than the experimental pion mass, we expect corrections to the tree-level location of the Adler zero that may be significant. As was shown in Ref. [101], dispersive analyses of experimental data suggest that, even at the physical pion mass, the Adler zero may be displaced from the tree-level location. Motivated by this result, we take the range produced by the "CFD" dispersive predictions in Ref. [101] and extrapolate it to the pion masses used herein using $s_A = s_A^{\rm phys}\,\big(m_\pi / m_\pi^{\rm phys}\big)^2$. We consider descriptions of the finite-volume spectra using amplitudes with Adler zeroes fixed at the extremes suggested by this approach, together with some amplitudes in which the Adler zero is allowed to float freely, although these latter choices lead to statistically imprecise results for the amplitude. The location of the enforced Adler zero (or the central value when fitted) for each description is given in Fig. 4 as the ratio to the tree-level value, $s_A/2m_\pi^2$.

We plot in Fig. 4 the values of the S-wave scattering length extracted from all parameterizations which provide a reasonable description of the finite-volume spectra, separated between those parameterizations with an Adler zero (of varying location) and those without; a clear systematic difference is observed, indicating a strong correlation between the location of a subthreshold zero and the value of the scattering length when constrained only by finite-volume energy levels above threshold. This systematic spread is not reduced in the lighter-pion-mass case, despite the fact that we fit over twice as many data points as for the heavier mass.
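To see why a subthreshold Adler zero correlates with the threshold behavior, here is a toy comparison of two S-wave forms matched at threshold; the functional forms are illustrative inventions, not the paper's parameterizations.

```python
import numpy as np

# Work in units of m_pi: threshold at s = 4, tree-level I=2 Adler zero at s_A = 2.
s = np.linspace(4.0001, 6.0, 5)
a = -0.4                                # toy scattering length (units 1/m_pi)

kcotd_plain = np.full_like(s, 1.0 / a)  # pure scattering-length form
s_A = 2.0
c = (4.0 - s_A) / a                     # match k*cot(delta) at threshold
kcotd_adler = c / (s - s_A)             # t(s_A) = 0 forces cot(delta) to diverge

for si, p, q in zip(s, kcotd_plain, kcotd_adler):
    print("s = %.3f   no zero: %+.3f   with Adler zero: %+.3f" % (si, p, q))
```

Both curves agree at threshold but have different slopes away from it, so energy levels above threshold alone cannot cleanly separate the scattering length from the subthreshold zero location.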
In Fig. 5 we show the pion mass evolution of the I = 2 S-wave scattering length across four pion masses computed with the same lattice action. The rightmost point, taken from Ref. [36], reflects an average over several parameterizations in which Adler zeroes were not enforced. The leftmost points, at $m_\pi \sim 239$ MeV, show an analysis of the same type followed in this paper that has not been previously published$^5$. It is clear that the slope of the variation with pion mass is very sensitive to the existence, or not, of an Adler zero. We return in this paper's conclusions to whether the presence and exact location of the Adler zero, which lies far from the region of constraint provided by finite-volume energy levels, can be resolved using only lattice QCD data.

[Fig. 4 caption fragment: red points correspond to parameterizations that feature an Adler zero, while blue points lack an enforced subthreshold zero. The result of dispersive analysis applied to experimental data [101] is shown by the gray point.]

V. ππ → ππ I = 1

The I = 1 channel contains the P-wave ρ resonance, which appears below the $K\bar K$ threshold, while the F-wave amplitude is expected to be featureless and very weak across the elastic region. In order to reliably determine the finite-volume spectrum up to slightly above the $K\bar K$ threshold, we make use of a large basis of single-hadron operators, ππ-like operators, and $K\bar K$-like operators.

A. I = 1 finite-volume spectra

Figure 6 shows the extracted spectra for the two pion masses considered in this calculation, where large departures from the ππ non-interacting energies (red curves) are observed, indicative of strong interactions. The isolated 'extra' levels near $a_t E_{\rm cm} \sim 0.14$ suggest a narrow resonance in that energy region. At higher energies, the extracted finite-volume spectra lie very close to the non-interacting energies (including those corresponding to $K\bar K$), suggesting that the scattering amplitude may be featureless above the resonance.

In total, we extracted 23 levels for $m_\pi \sim 330$ MeV and 95 levels for $m_\pi \sim 283$ MeV, of which 17 and 42, respectively, are below the $K\bar K$ threshold. Examination of the operator overlaps for states above the $K\bar K$ threshold suggests that there is no significant coupling between the ππ and $K\bar K$ channels, indicating that an analysis of elastic scattering above threshold, retaining only those levels with overlap onto ππ operators, may be successful. This appears to be essentially the same situation as was observed at $m_\pi \sim 239$ MeV in Ref. [47]$^6$, where a coupled-channel analysis showed negligible ππ–$K\bar K$ channel coupling over a significant energy region above threshold.

$^6$ Referred to in that paper as $m_\pi \sim 236$ MeV.

B. ππ → ππ I = 1 scattering

As explained above, we restrict ourselves to an elastic analysis in this manuscript, and the extracted spectra indicate that the F-wave amplitude is negligible relative to the P-wave in the region of interest. The ℓ = 3 angular-momentum barrier factor suppresses the low-energy interactions, and the only resonance with those quantum numbers that decays to ππ is a ρ₃, which is expected to appear far above the energy region we consider. Thus, in this case, each energy level can be used to determine a discrete value of $\delta^1_1(s)$, as plotted in Fig. 7. The behavior at each pion mass is clearly that of a narrow resonance, and we consider elastic amplitude parameterizations which describe the finite-volume spectra up to $a_t E_{\rm cm} = 0.19$.
A Breit-Wigner form, Eq. (9), is found to describe the finite-volume spectra reasonably (e.g. χ²/N_dof = 41.34/(49−2) = 0.88); the resulting parameters, given in Eq. (15) for mπ ∼ 283 MeV, are accompanied by matrices illustrating the statistical correlation between the parameters. In both cases, the Breit-Wigner mass and coupling parameters are essentially uncorrelated.

Considering a wider variety of amplitude parameterizations, including K-matrix forms and conformal expansions, we can find examples that describe the data with slightly improved χ²/N_dof, but all successful descriptions show compatible phase-shift energy dependence in the region of the resonance. In the next section, we will examine the variation of the ρ resonance pole with parameterization choice.

The amplitude at threshold is characterized by a scattering 'length' (for ℓ = 1, strictly a scattering volume), defined via the threshold behavior a¹₁ = lim_{k→0} tan δ¹₁(k)/k³, and as seen in Fig. 8, amplitudes capable of describing the finite-volume spectrum with the smallest χ² values have compatible values of this parameter. The first entry plotted for each pion mass corresponds to the Breit-Wigner fit; such a form is not expected to provide a faithful description of amplitudes away from the resonance being parameterized, and hence may not describe the threshold behavior accurately.

C. The ρ resonance

In the case of the I = 1 P-wave, for both pion masses, a pole singularity lying near the real axis is found on the second Riemann sheet for every parameterization that successfully describes the finite-volume spectra. The pole location for each parameterization is plotted in Fig. 9, where we observe very little scatter, indicating that the lattice spectra precisely determine the mass and width of the ρ resonance at these pion masses without significant amplitude-parameterization dependence. These ρ pole results supplement those obtained in Refs. [45,47] at mπ ∼ 391, 239 MeV, and in Fig. 10 we present the evolution of the pole position and the pole-residue coupling (defined in Eq. (13)) with varying pion mass. As expected, the ρ becomes heavier as the light-quark mass increases and narrower as the phase space for decay to two pions decreases. The coupling appears to be consistent with being constant across the range of pion masses considered. These results agree with the expectations for an 'ordinary q̄q meson' as defined in Ref. [103], and with predictions made for the quark-mass trajectory of the ρ in unitarized chiral perturbation theory models [58, 104-...].
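For reference, the sketch below implements the generic elastic P-wave Breit-Wigner phase shift with the commonly used energy-dependent width Γ(s) = g²k³/(6πs); Eq. (9) is of this general type, but the mass and coupling values here are placeholders rather than our fitted parameters.

```python
import numpy as np

def bw_phase_shift_deg(s, m_R, g, m_pi):
    """Elastic P-wave Breit-Wigner: tan(delta) = sqrt(s)*Gamma(s)/(m_R^2 - s),
    with Gamma(s) = g^2 * k^3 / (6*pi*s) and cm momentum k = sqrt(s/4 - m_pi^2)."""
    k = np.sqrt(s / 4.0 - m_pi**2)
    gamma = g**2 * k**3 / (6.0 * np.pi * s)
    # arctan2 keeps delta rising smoothly through 90 degrees at s = m_R^2
    return np.degrees(np.arctan2(np.sqrt(s) * gamma, m_R**2 - s))

m_pi, m_R, g = 0.330, 0.790, 6.0   # GeV, GeV, dimensionless (illustrative only)
s = np.linspace((2 * m_pi)**2 * 1.001, 1.0, 200)
delta = bw_phase_shift_deg(s, m_R, g, m_pi)
```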
VI. ππ → ππ I = 0

Isospin-0 ππ scattering in S-wave is not simple to describe, being neither weak nor dominated by a single narrow resonance. At the physical pion mass, despite relevant scattering data having been available for over forty years, it is only recently that the role of the broad σ resonance has been confirmed with certainty [7,87,110]. Recent consideration of this scattering channel using first-principles lattice QCD showed a clear σ bound state when mπ ∼ 391 MeV, and evidence for a broad σ resonance (albeit with significant parameterization dependence) when mπ ∼ 239 MeV [53]. We will observe in the current calculation that between the two pion masses considered here the σ undergoes a dramatic change in form.

A. I = 0 finite-volume spectra

Spectra were extracted from correlator matrices computed using a basis of single-hadron operators, ππ-like operators, KK̄-like operators, and some ηη-like operators (for the lighter-pion-mass, larger-volume lattice). The energies are shown in Fig. 11, where it is clear that there are large departures from the non-interacting ππ energies in those irreps containing the subduction of the ππ S-wave, suggesting strong scattering, while those irreps having D-wave as their leading partial wave show only small downward shifts, indicative of mild attraction. The D-wave resonances, f₂ and f₂′, are expected to lie at significantly higher energy, well into the coupled-channel region [57].

Even though the I = 0 correlation functions receive vital contributions from relatively noisy diagrams featuring complete quark-line annihilation, the use of distillation leads to high-quality signals, and the extracted energy levels are statistically precise. In total, there are 23 levels for mπ ∼ 330 MeV and 75 levels for mπ ∼ 283 MeV. For the D-wave-dominated irreps, there is no evidence of coupling between ππ and KK̄, and a description in terms of purely elastic ππ scattering, even above the KK̄ threshold, will prove to be successful. The S-wave-dominated irreps, on the other hand, cannot be described so simply, and we consider only energy levels lying some way below the KK̄ threshold, where channel coupling is expected to turn on rapidly. For mπ ∼ 330 MeV we use 18 energy levels, and 48 for mπ ∼ 283 MeV.

B. ππ → ππ I = 0 scattering

There is no evidence in the computed spectra that amplitudes with ℓ > 2 are required over the energy region we are considering, and as seen in Fig. 12, even the D-wave amplitude is only very mildly attractive.

The S-wave amplitudes, shown in Fig. 13, provide our first example of amplitudes whose description is not obvious, and where the behavior changes dramatically between the two pion masses considered. The lighter pion mass shows a phase shift increasing with a moderate slope from threshold and, when plotted as k cot δ⁰₀, a crossing of threshold at a small positive value, indicating a large positive scattering length. The heavier pion mass shows a qualitatively different energy dependence, having an approximately flat phase shift above threshold and a k cot δ⁰₀ threshold crossing at a small negative value, indicating a large negative scattering length.

The plots of k cot δ⁰₀ clearly show the presence of a systematic variation with the choice of parameterization. A wide range of forms of the type described in Section III is used, including cases with and without an Adler zero. As was the case for the I = 2 S-wave analysis, we explore Adler zeros fixed at the leading-order location, s_A = mπ²/2, and varying between the dispersive "CFD" predictions in Ref.
[101] (appropriately scaled for the changed pion mass), and, finally, allowing the zero location to float as a free parameter. The locations of the different Adler zeros are given in Fig. 14. The four illustrative cases presented in Fig. 13 show only a mild sensitivity to the presence of an Adler zero, likely reduced relative to the I = 2 case by virtue of the zero being further below threshold. The energy levels below threshold do not obviously suggest a preference either way for an Adler zero.

Figure 14 shows the scattering length for each parameterization choice that successfully describes the finite-volume spectra. It is clear that at the heavier pion mass the presence, or not, of an Adler zero has no impact on the value of the scattering length; we will discuss this further in the next section in the context of there being a bound-state pole dominating the amplitude. At the lighter pion mass, there is a slight tendency toward a larger scattering length for amplitudes that lack an Adler zero, but the effect is barely significant. Our estimates at these two pion masses and those determined in Ref. [53] are plotted in Fig. 15. An explanation of the observed behavior would be that these pion masses straddle a rapid divergence near mπ ∼ 300 MeV, where the scattering length tends to ±∞ on either side of the divergence. In the next section, we will discuss how this can be related to the σ pole undergoing a transition between Riemann sheets by passing through the ππ threshold.

C. The σ pole

The presence of singularities on the real axis below threshold can be inferred rather directly from graphs of k cot δ⁰₀: since a pole of the elastic amplitude requires k cot δ = ik, it follows that pole singularities are present whenever the graph of k cot δ⁰₀ intersects one of the curves ik = ±√(−k²) below threshold. The negative sign corresponds to a pole on the physical Riemann sheet, a bound state, while the positive sign corresponds to the second Riemann sheet and a virtual bound state.

In Fig. 13 (left), for the heavier pion mass, all amplitude parameterizations intersect −√(−k²) only slightly below threshold, with a parameterization dependence smaller than the statistical uncertainty. This indicates the presence of a bound state lying very close to threshold, and indeed numerical determination shows it to be statistically compatible with being at threshold; see Fig. 16. Restricting the amplitude fits to describe only levels in a small region around threshold does not change this conclusion.

As a bound-state pole approaches threshold, the value of k cot δ⁰₀ at the pole location tends to 1/a⁰₀, and hence the scattering length must diverge to −∞, as was suggested in the previous section. An argument due to Weinberg [111] suggests that the scattering length (a⁰₀), effective range (r⁰₀), and binding energy (ϵ = 2mπ − mσ) together can be used to determine the degree to which this bound state is of "ππ-molecular" versus "compact" nature; in terms of the binding momentum γ = √(mπ ϵ), the standard relations read, up to corrections suppressed by the interaction range, a⁰₀ ≈ −(2(1−Z)/(2−Z)) γ⁻¹ and r⁰₀ ≈ −(Z/(1−Z)) γ⁻¹, where Z is interpreted as the probability to find the state in a compact configuration. Compatible values of Z are obtained from each of these two equations, suggesting that the (suppressed) corrections are modest, and the resulting Z = 0.07(4) suggests dominance of a ππ component over any compact component in the σ at this pion mass.
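The graphical pole condition above translates directly into a root-finding exercise. The sketch below uses an effective-range form as a stand-in for the fitted amplitudes, locates a subthreshold pole from the intersection of k cot δ with ∓√(−k²), and evaluates a Weinberg compositeness Z from the scattering-length relation; all numerical inputs are illustrative (units of mπ = 1), not results of this calculation.

```python
import numpy as np
from scipy.optimize import brentq

def kcot(ksq, a, r):
    # effective-range expansion, valid for ksq = k^2 of either sign
    return 1.0 / a + 0.5 * r * ksq

def subthreshold_pole(a, r, sheet="bound", kappa_max=1.0):
    """Solve k cot(delta) = -kappa (bound state, physical sheet) or
    +kappa (virtual state, second sheet), with k = i*kappa below threshold.
    Assumes the bracket [1e-9, kappa_max] contains a sign change."""
    sign = -1.0 if sheet == "bound" else +1.0
    f = lambda kappa: kcot(-kappa**2, a, r) - sign * kappa
    return brentq(f, 1e-9, kappa_max)

def weinberg_Z(a, gamma):
    """Z from a ≈ -2(1-Z)/((2-Z)*gamma), using this paper's sign convention
    in which a -> -inf for a shallow bound state."""
    x = -a * gamma
    return (2.0 * x - 2.0) / (x - 2.0)

a, r = -20.0, -1.0                       # illustrative values
kappa = subthreshold_pole(a, r, "bound") # binding momentum
Z = weinberg_Z(a, kappa)                 # small Z -> molecular-dominated
```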
In Fig. 13 (right), for the lighter pion mass, there are now two classes of parameterization. Many parameterizations capable of describing the finite-volume spectrum cross the curve +√(−k²) below threshold, indicating the presence of a virtual bound state, but some do not. Upon searching these latter amplitudes for poles off the real axis, complex-conjugate pairs of poles are found below threshold. As can be seen in Fig. 13 (right), there is no significant qualitative difference in the amplitude above threshold between the effect of a virtual bound state and that of a subthreshold resonance. The locations of these poles are shown in Fig. 16.

In the case of a virtual bound state lying at threshold, logic similar to that presented above for a bound state indicates that the scattering length must diverge to +∞ as the pole reaches threshold. The transition, as the pion mass increases, from a scattering length diverging to +∞ to one recovering from −∞ would therefore correspond to a pole on the second Riemann sheet moving onto the physical Riemann sheet by passing through the threshold.

Figure 17 summarizes the σ pole positions extracted from calculations at mπ ∼ 391, 330, 283, and 239 MeV using the same lattice action. At the heaviest two pion masses the σ is a stable bound state, at 283 MeV it is either a virtual bound state or a subthreshold resonance (depending upon the parameterization), while at 239 MeV it appears to be a broad resonance. Qualitatively, this evolution in quark mass conforms to the general scheme presented in Refs. [58,59,104]: as the pion mass is increased from its physical value, where the σ is a broad resonance, the complex-conjugate pole pairs on the second Riemann sheet move toward the real energy axis, eventually meeting at a point below threshold. One pole then moves away towards negative infinity, becoming less relevant, while the other moves toward threshold as a virtual bound state. When this pole reaches threshold, it moves onto the physical sheet as a bound state, which becomes more deeply bound as the pion mass increases further.

The behavior of the amplitude in the pion-mass region where the conjugate pole pair meet on the real axis indicates the origin of the large statistical errors in the right panel of Fig. 16. At this point, the derivative of the pole location with respect to the amplitude parameters can diverge, leading to an inability to propagate a statistical error⁷. The fact that our mπ ∼ 283 MeV point appears to be close to this region generates larger statistical uncertainties on the pole position than might otherwise be expected.

The lack of a reliable determination of a second subthreshold pole (as expected from the pole-evolution argument described above) for the heavier pion mass considered in our calculation likely reflects the insensitivity of the amplitude near and above threshold (which determines the finite-volume spectrum) to such a rather distant pole. Some additional constraint below threshold would be required to pin it down with certainty.

⁷ For example, with an effective-range parameterization, the dependence of the pole locations on the scattering length and effective range is given by k± = (i ± √(−1 − 2r/a))/r, and the location where the two poles coincide is k_R = i/r, with r negative and with a = −2r.

The reduction in the value of |g_ππ| observed in Fig. 17 for mπ ∼ 330 MeV is expected on general grounds if this point is close to the pion mass where the σ pole passes through the threshold. Kinematic constraints on the amplitude at threshold ensure that, as the pole approaches threshold, the S-wave coupling must behave like g²_ππ ∝ s_R − 4mπ², and hence must vanish as the pole crosses the threshold.
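The statement in footnote 7 can be checked explicitly: the quadratic pole condition of the effective-range parameterization is solved in closed form below, and the two poles can be seen to merge at a = −2r. The values of a and r are illustrative placeholders in units of the pion mass, chosen only to exhibit the transition from a subthreshold resonance pair to two virtual poles.

```python
import numpy as np

def ere_poles(a, r):
    """Poles of the elastic S-wave amplitude with k cot(delta) = 1/a + r k^2/2
    satisfy (r/2) k^2 - i k + 1/a = 0, i.e. k_pm = (i ± sqrt(-1 - 2r/a)) / r."""
    disc = np.sqrt(complex(-1.0 - 2.0 * r / a))
    return (1j + disc) / r, (1j - disc) / r

r = -1.0
for a in (1.5, 2.0, 2.5):        # the poles merge at a = -2r, i.e. a = 2.0
    # a = 1.5: complex-conjugate subthreshold pair; a = 2.5: two virtual poles
    print(a, ere_poles(a, r))    # at a = -2r both poles sit at k = i/r
```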
As was previously observed in a lattice calculation with mπ ∼ 239 MeV [53], the results of this paper indicate that, in those cases where the σ is unbound, even with the use of large numbers of high-precision finite-volume energy levels the σ pole location cannot be precisely pinned down. Different parameterizations which describe the real-energy data equally well lead to pole locations and couplings scattered well outside the statistical uncertainty, and we conclude that, to reduce this systematic error, it will be necessary to introduce a greater level of theoretical constraint into the determination of the scattering amplitudes. In the next section, we will describe a plausible approach to achieving this which makes use of the full set of partial-wave amplitudes across three isospins computed in this paper.

VII. CONCLUSIONS AND OUTLOOK

We have reported on the extraction of ππ elastic scattering amplitudes across all three isospins for low partial waves, supplementing earlier calculations on anisotropic lattices with mπ ∼ 391, 239 MeV with new calculations at two intermediate pion masses, 330 and 283 MeV. The new pion masses explore the region where we expected the σ state appearing in the I = 0 S-wave to transition from a bound state into a resonance. We continued the philosophy used in prior publications [36,45,47,53,57,64,66,67,80,88,89,112,113] of exploring a wide range of amplitude parameterizations to test the uniqueness of the amplitudes constrained by the lattice QCD spectra and their resonance content.

FIG. 17: Left: σ pole location with varying pion mass from this calculation and from calculations on lattices with the same action [45,47]. Green (mπ ∼ 391 MeV, from [53]) and blue (mπ ∼ 330 MeV) points lie on the physical sheet, while red (mπ ∼ 283 MeV) and orange (mπ ∼ 239 MeV) points lie on the unphysical sheet (additional parameterizations have been applied to the energy levels published in [53] to generate the spread of orange points). Right: Magnitude of the σ pole coupling, as defined in Eq. (13), at four pion masses, with different parameterizations displaced horizontally for clarity. The points at mπ ∼ 283 MeV are separated into two groupings according to whether the pole in that case is a subthreshold resonance or a virtual bound state. The dashed vertical lines locate each of our pion masses. The results of dispersive analyses [7,87,110] ([87] only for the coupling) of experimental data are shown in gray in each plot.

The isospin-2 S-wave at both new pion masses is found to be weak and repulsive, and we isolated a sensitivity of the extracted value of the scattering length to whether the amplitude parameterizations contain a subthreshold zero like an Adler zero. The isospin-1 P-wave is dominated by an isolated narrow ρ resonance, and we were able to establish the trajectory of the corresponding resonance pole through the complex plane as the pion mass varies. The coupling of the resonance to ππ was found to be essentially quark-mass independent.
The isospin-0 S-wave, which houses the σ, shows the most dramatic change between the two pion masses considered. At mπ ∼ 330 MeV the phase shift is relatively flat over the entire elastic region and is found to feature a bound-state σ with a binding energy of only about 3 MeV, while at mπ ∼ 283 MeV the phase shift rises slowly from 0°, caused by the σ being either a virtual bound state or a subthreshold resonance. We conclude that the σ undergoes a transition from being a bound state to being a virtual bound state somewhere between mπ ∼ 283 MeV and mπ ∼ 330 MeV.

The very different quark-mass evolutions observed for the vector ρ and the scalar σ agree with the general arguments that a P-wave resonance can only become stable by having the complex-conjugate resonance pole pair coalesce at threshold, while an S-wave state need not meet this requirement. Once the pair of S-wave poles meets on the real axis below threshold, the two poles evolve differently, with one of them approaching threshold as the quark mass increases. On those lattices where we find a bound σ, the pole closest to threshold determines the low-energy behavior of the partial wave.

The fact that we are unable to state with certainty whether the σ at mπ ∼ 283 MeV is a virtual bound state or a subthreshold resonance reflects the same problem previously reported for the σ at mπ ∼ 239 MeV in Ref. [53], where equally good amplitude descriptions of the finite-volume spectrum have poles in locations scattered across the complex plane. Given the degree of systematic uncertainty associated with the choice of parameterization, it would not be appropriate to attempt to extrapolate the current data at unphysical pion masses to the physical pion mass.

The inability of even large numbers of high-precision lattice QCD energy levels to uniquely pin down the σ pole location, and also to determine the location of Adler zeros in the I = 0 and I = 2 S-wave amplitudes, are problems that most likely have a common origin. In both cases, we are required to analytically continue relatively far from where the amplitudes are constrained, which is over a limited section of the real energy axis, mainly above threshold.

We propose that a solution is to apply additional theoretical constraints to the amplitudes. In particular, the behavior of any fixed-isospin partial-wave amplitude for s < 0 is controlled by the partial waves in all isospins by virtue of crossing symmetry, and dispersion relations allow us to make practical use of this symmetry while also ensuring good analytic properties of the amplitudes. Since we have computed amplitudes in all isospins on the same lattices in this paper, we can envisage applying a dispersion-relation analysis to more accurately constrain the partial-wave amplitudes. We are pursuing such an approach, and a publication is in preparation.
FIG. 3: I = 2 S-wave scattering for mπ ∼ 330 MeV (left) and mπ ∼ 283 MeV (right). Four example parameterizations are shown: a two-term conformal mapping (black), an effective-range expansion with two terms (green), and two choices with an Adler zero fixed at the leading-order χPT location: a two-term conformal mapping (red) and an effective-range expansion with two terms (orange). The enforced presence of the Adler zero can be seen in the deviation of the red and orange curves from nearly flat behavior at threshold in the lower panels. Discrete 'data' points with large uncertainties have been removed from the plot for clarity. The shaded region indicates energies above the ππππ threshold. Note that, in both cases, the conformal-mapping parameterization without an Adler zero produces a relatively poor fit, and for this reason its scattering lengths are not quoted in Figs. 4 and 5.

FIG. 4: Extracted scattering length for a range of I = 2 S-wave amplitude parameterizations for mπ ∼ 330 MeV (top) and mπ ∼ 283 MeV (bottom). Each amplitude is labelled by the χ²/N_dof, on the top axis, with which it describes the finite-volume spectrum. Red points correspond to amplitudes containing an Adler zero at some location, while blue points lack any enforced subthreshold zero. The numbers below the red points indicate the corresponding Adler-zero location, in units of the LO value, 2mπ².

FIG. 5: Pion-mass evolution of the I = 2 S-wave scattering length. Red points correspond to parameterizations that feature an Adler zero, while blue points lack an enforced subthreshold zero. The result of dispersive analysis applied to experimental data [101] is shown by the gray point.

FIG. 7: I = 1 P-wave phase shift for mπ ∼ 330 MeV (top) and mπ ∼ 283 MeV (bottom). A parameterization using a conformal mapping with a resonance-enforcing F^I_ℓ(s) factor is shown by the black curve, and a K-matrix with a single pole plus a constant by the red curve. Discrete 'data' points with large uncertainties have been removed from the plot for clarity. The shaded region indicates energies above the KK̄ threshold.

FIG. 11: I = 0 spectra, a_t E_cm, by irrep, against L/a_s, for mπ ∼ 330 MeV (left) and mπ ∼ 283 MeV (right). Irreps in the upper panel have D-wave as their leading partial wave, while those in the lower panel have S-wave leading. Red/green curves show ππ/KK̄ non-interacting energies.

FIG. 12: I = 0 D-wave phase shift for mπ ∼ 330 MeV (top) and mπ ∼ 283 MeV (bottom). Discrete data points come from irreps in which ℓ = 2 is the lowest subduced partial wave, assuming ℓ ≥ 4 scattering to be negligible. Curves show two illustrative parameterizations: an effective-range expansion with two terms (red) and a conformal mapping with two terms (black). The shaded region indicates energies above the KK̄ threshold.

FIG. 14: Extracted scattering length for a range of I = 0 S-wave amplitude parameterizations for mπ ∼ 330 MeV (top) and mπ ∼ 283 MeV (bottom). Each amplitude is labelled by the χ²/N_dof, on the top (bottom) axis, with which it describes the finite-volume spectrum. Red points correspond to amplitudes containing an Adler zero at some location, while blue points lack any enforced subthreshold zero. The numbers closest to the red points indicate the corresponding Adler-zero location, in units of the LO value, mπ²/2. Negative values come from those cases where the zero location is a free parameter, and these values have large uncertainties.

FIG. 15: I = 0 S-wave scattering length at the pion masses considered here and in Ref. [53]. Red points correspond to parameterizations featuring an Adler zero, while blue points have no enforced subthreshold zero. The result of dispersive analysis applied to experimental data [101] is shown by the gray point.
TABLE I: Anisotropic three-flavor lattices used in this paper. Anisotropy values, ξ, are obtained from the pion dispersion relation. N_vecs indicates the number of distillation vectors used in the construction of correlation functions, and N_tsrc the number of 0 → t perambulator time-sources averaged over.

FIG. 2: I = 2 D-wave phase shift for mπ ∼ 330 MeV (top) and mπ ∼ 283 MeV (bottom). Discrete data points come from irreps in which ℓ = 2 is the lowest subduced partial wave, assuming ℓ ≥ 4 scattering to be negligible. Curves show two illustrative parameterizations: a scattering-length form (red) and a conformal mapping with two terms (black), fitted to the full set of energy levels below the ππππ threshold. The shaded region indicates energies above the ππππ threshold.

FIG. 6: I = 1 finite-volume spectra, a_t E_cm, by irrep, against L/a_s, for mπ ∼ 330 MeV (left) and mπ ∼ 283 MeV (right). Red/green curves show ππ/KK̄ non-interacting energies. Levels appearing on top of KK̄ non-interacting energies are not considered in our elastic fits to the data.

FIG. 10: Left: ρ resonance pole location with varying pion mass from this calculation (blue and red points) and from calculations on lattices with the same action [45,47] (green, orange). Right: Magnitude of the complex ρ resonance pole coupling, as defined in Eq. (13), and the real coupling, g_BW, extracted when a Breit-Wigner, Eq. (9), is used to describe the spectrum. The uncertainties on the pole location and pole couplings quoted from Ref. [47] (orange points) are a rather conservative average over a large number of parameterizations, including several which include the KK̄ coupled-channel region. For pole properties, the "Roy" result of dispersive analysis of experimental data [87] is shown in gray, while the physical value of g_BW comes from the neutral e⁺e⁻ mode listed in the PDG [1].

FIG. 13: I = 0 S-wave scattering for mπ ∼ 330 MeV (left) and mπ ∼ 283 MeV (right). Four example parameterizations are shown: a two-term conformal mapping (black), an effective-range expansion with two terms (green), and two choices with an Adler zero fixed at the leading-order χPT location: a two-term conformal mapping (red) and an effective-range expansion with two terms (orange). In the bottom panels, the gray curves show ∓√(−k²) below threshold; intersections of the k cot δ⁰₀ curves with these indicate the location of a bound state or a virtual bound state, respectively.

FIG. 16: σ pole location for each I = 0 S-wave parameterization found capable of describing the finite-volume spectra for mπ ∼ 330 MeV (left) and mπ ∼ 283 MeV (right). The left panel shows the physical sheet housing a bound-state pole; the right panel shows the lower half-plane of the unphysical sheet housing either a virtual bound-state pole or a subthreshold resonance pole. For some of the parameterizations producing a virtual bound state, a second, lighter pole is also observed on the real axis.
12,238.8
2023-03-19T00:00:00.000
[ "Physics" ]
Unique morphological architecture of the hamstring muscles and its functional relevance revealed by analysis of isolated muscle specimens and quantification of structural parameters Abstract The structural and functional differences of individual hamstrings have not been sufficiently evaluated. This study aimed to clarify the morphological architecture of the hamstrings, including the superficial tendons, in detail using isolated muscle specimens, together with quantification of structural parameters of the muscle. Sixteen lower limbs of human cadavers were used in this study. The semimembranosus (SM), semitendinosus (ST), biceps femoris long head (BFlh), and biceps femoris short head (BFsh) were dissected from cadavers to prepare isolated muscle specimens. Structural parameters, including muscle volume, muscle length, fiber length, sarcomere length, pennation angle, and physiological cross-sectional area (PCSA), were measured. In addition, the proximal and distal attachment areas of the muscle fibers were measured, and the proximal/distal area ratio was calculated. The SM, ST, and BFlh were spindle-shaped with superficial origin and insertion tendons on the muscle surface, and the BFsh was quadrate with direct attachment to the skeleton and the BFlh tendon. The muscle architecture was pennate in all four muscles. The four hamstrings possessed either of two types of structural parameters: one with shorter fiber length and larger PCSA, as in the SM and BFlh, and the other with longer fiber length and smaller PCSA, as in the ST and BFsh. Sarcomere length was unique in each of the four hamstrings, and thus the fiber length was suitably normalized using the average sarcomere length for each muscle, instead of a uniform length of 2.7 μm. The proximal/distal area ratio was even in the SM, large in the ST, and small in the BFsh and BFlh. This study clarified that the superficial origin and insertion tendons are critical determinants of the unique internal structure and of the structural parameters representing the functional properties of the hamstring muscles. | INTRODUCTION The hamstrings are the muscle group in the posterior thigh that serves as the principal knee flexor and hip extensor. These muscles play an important role in activities of daily living, such as sit-to-stand (Hanawa et al., 2017), walking (Arnold et al., 2005; Winter & Yack, 1987), and sprinting (Chumanov et al., 2007; Thelen et al., 2005). The hamstrings are also a frequent site of muscle strain injury in various sports activities (Ekstrand et al., 2011; Volpi et al., 2004). For these reasons, the functional properties and status of the hamstrings have been intensively studied in vivo by measuring muscle length (ML) with ultrasonography (Kellis et al., 2021) or recording muscle activity with electromyography (Hirose et al., 2021). The hamstrings consist of four muscles, namely the semimembranosus (SM), semitendinosus (ST), biceps femoris long head (BFlh), and biceps femoris short head (BFsh), which collaborate in knee flexion and hip extension. Structural and functional differences among the four muscles have been suggested. The structural parameters vary among the four muscles (Charles et al., 2019; Kellis et al., 2012; Klein Horsman et al., 2007; Ward et al., 2009; Wickiewicz et al., 1983), suggesting that they have significantly different functional characteristics.
Muscle strains have been reported to be frequent both at the proximal muscle-tendon junction of the BFlh and in the proximal tendon of the SM, depending on the mode of motion (Askling et al., 2007a, 2007b). The superficial origin and insertion tendons were reported as an integral part of muscle architecture in the hamstrings by Woodley and Mercer (2005), Kellis et al. (2010), and van der Made et al. (2015), and also in other fusiform muscles, as reported by Sakai and Kato (2021). The tendons were recognized as dense connective tissue that connects the muscular tissue to the skeleton and may be designated aponeurosis in its flattened form. They can be subdivided into the free tendon without muscular tissue (extramuscular part) and the surface tendon on the muscular tissue (epimuscular part). While the extramuscular tendons serve to transmit the whole muscle force to the skeleton, the epimuscular tendons, providing wide attachment areas to the muscle fibers, are critical determinants of the functional and clinical characteristics of the muscles. Although the structural parameters and functional properties have been reported to differ among the four hamstring muscles, the differences in the morphology of the epimuscular tendons and the arrangement of the muscle fibers have not been well elucidated in previous studies. It is well known that muscle tension is maximal at the optimal sarcomere length (SL) and decreases considerably with lengthening or shortening of the sarcomere (Lieber et al., 1994). On this basis, Lieber and Friden (2000) recommended normalizing the fiber length (FL) with the measured and optimal SL to calculate the physiological cross-sectional area (PCSA). Burkholder and Lieber (2001) rationalized the normalization of FL in a review article by showing that the SLs reported in 29 articles differed among animal species as well as among individual muscles, and among different methods and different researchers. Felder et al. (2005) measured both FL and SL in the mouse tibialis anterior at different joint angles and normalized the FL with high precision. SL has been reported to change with different joint angles in various muscles in humans (Cromie et al., 2013; Lichtwark et al., 2018; Ljung et al., 1999) and monkeys (Ando et al., 2021). In calculating the structural parameters of muscles, Lieber and Friden (2000) recommended normalizing the FL with the measured SL and a uniform optimal SL of 2.7 μm reported by Walker and Schrodt (1974). However, since different values of SL among individual muscles are known from recent studies reporting structural parameters (Borst et al., 2011; Cutts, 1988; Ward et al., 2009), the calculation methods for structural parameters should be re-evaluated. In the present study, we examined the internal structure of the hamstrings using isolated muscle specimens to clarify the morphology of the epimuscular tendons and the internal muscle architecture, and in addition to evaluate the structural parameters critically as a sound basis for the functional properties of the hamstrings. | Source of cadavers We used cadavers of persons who had donated their bodies for medical education and research to Juntendo University School of Medicine. Before donation, written consent from donors and their families was obtained.
Sixteen left legs were collected from formaldehyde-embalmed Japanese body donors (8 males, 82.0 ± 6.4 years old, and 8 females, 88.4 ± 6.4 years old), which were dissected by medical students in the gross anatomy course at Juntendo University School of Medicine. We do not have information on the weight and height of the individual cadavers in this study. In our dissection protocol for medical students, the right legs were dissected to observe the bones and ligaments; thus, the muscles, nerves, and vasculature were more or less completely destroyed. Therefore, we only used the hamstrings of the left leg, where there was no serious structural damage to the muscles. We excluded cadavers that exhibited significant pathological alterations in muscles (such as muscular dystrophy, fatty degeneration, and large intramuscular hematomas), traumatic lesions, surgical scars, and flexion contracture of the knee joint. Each cadaver was formaldehyde-embalmed in an anatomical position with negligible flexion of the joints (less than 10 degrees). Photographs were reversed to represent the right side. | Preparation and observation of isolated hamstrings The skin, subcutaneous tissue, superficial and deep fasciae, gluteus maximus, and neurovascular bundles were removed from the posterior thigh to reveal the hamstring muscles. The SM, ST, and BFlh were then released from the ischial tuberosity, and the BFsh was removed from the lateral lips of the femur. Subsequently, the insertions of the SM, ST, BFlh, and BFsh were removed from the medial condyle of the tibia, tibial tuberosity, and fibular head to prepare isolated whole hamstrings. The whole hamstring was then divided into the individual SM, ST, BFlh, and BFsh muscles. | Measurement Structural parameters were measured in isolated specimens of the individual muscles. The ML and FL were measured using a tape measure. The pennation angle (PA) was measured using a protractor. The ML was measured as the distance between the proximal and distal ends of the muscle body containing muscle fibers, excluding the extramuscular tendons. The FL and PA were measured along the fascicles beginning from 9 points plotted evenly on both sides of the edge from top to bottom (5 points on one side in the BFsh) of the proximal or distal superficial tendon on the muscle surface. The PA was measured on the muscle surface as the angle between the individual fascicles and the long axis of the muscle. The mean values and standard deviations of each FL and PA were subsequently calculated using SPSS. The SL was measured under a light microscope (Olympus, BX53F) at five points in each of five muscle fibers obtained from each of the 16 individual specimens for each of the SM, ST, BFlh, and BFsh. Optimal SL was calculated using the following formula (Lieber et al., 1994): Lf = Lf′ × (Ls / Ls′), where Lf is the optimal fiber length, Lf′ is the raw fiber length, Ls is the standard sarcomere length, and Ls′ is the raw sarcomere length. The standard SL was defined as the average of the measured SLs for the individual muscles. The average cross-sectional area (AvCSA) was estimated using the following equation: AvCSA = muscle volume / ML. The proximal and distal attachment areas of the fascicles to the tendon or skeleton were measured on the photographs of isolated muscle specimens using Adobe Photoshop.
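As a concrete summary of the computations just described, the sketch below implements the fiber-length normalization and the derived cross-sectional areas in Python. The PCSA formula shown (volume × cos PA / FL) is a commonly used definition, and the AvCSA formula (volume / ML) is inferred from the PCSA/AvCSA discussion later in the paper, since the exact equations did not survive extraction; the input numbers are illustrative, not measured data.

```python
import math

def normalized_fiber_length(fl_raw_cm, sl_raw_um, sl_standard_um):
    """Lf = Lf' * (Ls / Ls')  (Lieber et al., 1994), with the standard
    sarcomere length Ls taken as the muscle-specific average."""
    return fl_raw_cm * sl_standard_um / sl_raw_um

def pcsa_cm2(volume_cm3, fl_cm, pa_deg):
    # physiological cross-sectional area (a common definition, assumed here)
    return volume_cm3 * math.cos(math.radians(pa_deg)) / fl_cm

def avcsa_cm2(volume_cm3, ml_cm):
    # average cross-sectional area: muscle volume / muscle length
    return volume_cm3 / ml_cm

# illustrative semimembranosus-like numbers
fl = normalized_fiber_length(fl_raw_cm=5.0, sl_raw_um=2.5, sl_standard_um=2.8)
print(fl, pcsa_cm2(240.0, fl, 14.0), avcsa_cm2(240.0, 26.0))
```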
The muscle fiber attachment areas were defined as those areas on the skeleton or tendinous tissue to which the muscle fibers attach, namely the areas of the epimuscular tendons in the SM and BFlh, the areas of the epimuscular tendons plus the attachment areas on the BFlh proximal tendon in the ST, and the attachment areas on the lateral lips of the femur and on the BFlh distal tendon in the BFsh. The muscle fiber attachment area ratio was calculated by dividing the attachment areas by the total muscle areas measured on the photographs. | Statistical analyses The muscle fiber attachment area ratio was compared using a paired t-test. SLs and normalized PCSA were compared between groups using analysis of variance followed by the Bonferroni test as a post hoc test; P values of <0.05 were considered statistically significant. The PCSA of the four hamstrings was compared among three groups after normalization with different standard values of sarcomere length (group 1, Ls = average; group 2, Ls = 2.7 μm; group 3, without normalization). All analyses were performed using SPSS software. | Anatomy of hamstrings in situ The structure of the epimuscular origin and insertion tendons and the arrangement of the muscular fascicles were analyzed in isolated specimens of the individual muscles. | Semimembranosus The origin tendon of the SM consisted of the slender and thick extramuscular tendon and the membranous epimuscular tendon on the superficial lateral surface extending down to the distal third of the muscle (Figure 2a). The epimuscular tendon possessed a composite substructure with a thicker axial part and a thinner wing part (indicated by asterisks in Figure 2a). The axial part was composed of parallel bundles of tendon fibers, which continued at the tip of the epimuscular tendon as radiating branches within the muscle (intramuscular tendon). The wing part spread away from the axial part as a thin sheet and ran obliquely, parallel to the muscle fibers. The insertion tendon of the SM consisted of a significantly short, stout extramuscular tendon and a membranous epimuscular tendon on the deep medial surface, surrounding the distal half up to the middle of the muscle (Figure 2a'). The epimuscular tendon was homogeneous in structure and radiated from the distal end, becoming thinner toward the proximal end. The membranous epimuscular tendons of the origin and insertion sides were located on opposite sides of the muscle and spread equally in area. The muscular bundles connected both epimuscular tendons and were arranged oblique to the longitudinal axis of the muscle, so that the muscle had a parallel pennate architecture (Figure 3a). The FL was significantly short in comparison with the muscle body length, as was evident in the small value of the muscle fiber ratio (approximately 20%) in this muscle. | Semitendinosus The ST did not form a specific origin tendon outside of the muscle. FIGURE 2 Isolated specimens of the SM, ST, BFlh, and BFsh. (a, a') SM in the superficial surface (a) and deep surface (a'). The slender extramuscular origin tendon (exOT) outside of the muscle continued to the membranous epimuscular origin tendon (epiOT) on the superficial lateral surface of the muscle. The membranous epimuscular insertion tendon (epiIT) spreads widely on the deep medial surface of the muscle and continues to the short extramuscular insertion tendon (exIT). The epimuscular origin tendon is subdivided into a thicker axial part and a thinner wing part at the boundary indicated by asterisks.
(b, b') ST in the superficial surface (b) and deep surface (b'). The origin tendon of the ST possesses no extramuscular part and extends a short distance as the epimuscular origin tendon (epiOT) on the lateral surface of the muscle. The epimuscular insertion tendon (epiIT) on the medial surface of the muscle continues to a long string-like extramuscular insertion tendon (exIT). (c, c') BFlh in the superficial surface (c) and deep surface (c'). The slender extramuscular origin tendon (exOT) continues to the narrow band-like epimuscular origin tendon (epiOT) on the deep surface. The epimuscular insertion tendon (epiIT) covers the superficial surface of the muscle and continues to the strap-shaped extramuscular insertion tendon (exIT). (d, d') BFsh in the superficial surface (d) and deep surface (d'). The edge-like proximal end of the BFsh possesses no tendon, attaching muscularly to the femur (aO); white arrowheads: attachment site of the origin side. The thicker distal end of the BFsh also possesses no tendon, attaching to the deep surface of the extramuscular insertion tendon of the BFlh (aI); red arrowheads: attachment site of the insertion side. Abbreviations: aO, attachment site of origin side; aI, attachment site of insertion side; epiIT, epimuscular insertion tendon; epiOT, epimuscular origin tendon; exIT, extramuscular insertion tendon; exOT, extramuscular origin tendon. The muscle fibers of the ST ran obliquely near the origin and insertion structures, but tended to run parallel along the muscle length in the middle, especially near the tendinous inscription. The origin structure of the ST was located on the upper lateral part, and the insertion epimuscular tendon was found on the lower medial side of the muscle, so that the musculotendinous connection areas on the proximal and distal sides were placed in parallel on opposite sides of the muscle (Figure 3c). The origin area was twice as wide as the insertion area, and the muscle fibers converged from the origin side to the insertion side, resulting in a converging pennate muscle architecture. The tinting and convexity of the tendinous inscription corresponded to the arrangement of the origin structures and insertion epimuscular tendon. The muscle fibers of the ST were relatively long in comparison with the muscle body length, as was evident in the large muscle fiber ratio (approximately 54%) in this muscle. | Biceps femoris long head The origin tendon of the BFlh consisted of the slender extramuscular tendon and the narrow band-like epimuscular tendon on the deep medial surface extending down to the center of the muscle (Figure 2c'). The epimuscular tendon became thinner and more slender distally and slipped into the muscle to become an intramuscular tendon reaching down to the distal third of the muscle. The insertion tendon of the BFlh possessed a strap-shaped extramuscular tendon and a broad epimuscular tendon covering the superficial lateral surface and extending up to the middle of the muscle body (Figure 2c). The epimuscular tendon was homogeneous in structure and radiated from the distal end, becoming thinner toward the proximal end, similar to that of the SM. The origin and insertion epimuscular tendons were located on opposite surfaces of the muscle, and their areas on the muscle were quite different, being much smaller on the origin side than on the insertion side. The muscular bundles extended from the narrow origin epimuscular tendon obliquely to the muscle axis and radially to the broad insertion epimuscular tendon, so that the muscle had a radiating pennate architecture (Figure 3b).
The FL was significantly short in comparison with the muscle body length, as was evident from the small value of the muscle fiber ratio (approximately 28%) in this muscle. FIGURE 3 Longitudinal sections of the hamstrings through both the proximal and distal epimuscular tendons. (a) In the SM, the muscle fibers connect obliquely between the epimuscular origin tendon (epiOT) and epimuscular insertion tendon (epiIT) on opposite sides, exhibiting a pennate architecture with short muscle fibers. The tip of the epimuscular origin tendon continues as radiating branches within the muscle (IntOT, intramuscular origin tendon). (b) In the BFlh, the muscle fibers connect obliquely between the epimuscular origin tendon (epiOT) and epimuscular insertion tendon (epiIT) on opposite sides, showing a pennate architecture with short muscle fibers. The epimuscular tendon becomes thinner and more slender distally and slips into the muscle to become an intramuscular tendon (IntOT). (c) In the ST, the muscle fibers connect diagonally between the proximal and distal attachment sites at the muscle extremities (epiOT, epiIT), presenting a pennate architecture with relatively long muscle fibers. The muscle fibers of the ST are interrupted by a disc-like tendinous inscription (TI) in the middle of the muscle. Abbreviations: epiIT, epimuscular insertion tendon (blue arrowheads: area of epiIT); epiOT, epimuscular origin tendon (yellow arrowheads: area of epiOT); IntOT, intramuscular origin tendon (brown arrowheads: area of IntOT); TI, tendinous inscription. | Biceps femoris short head The BFsh did not form separate origin and insertion tendons but attached muscularly to the linea aspera of the femur on the proximal side and to the extramuscular insertion tendon of the BFlh on the distal side (Figure 2d,d'). The proximal border of the muscle was thin and elongated vertically with a slender attachment area at the lateral edge of the muscle. The muscle became thicker and narrower distally and was attached to the deep surface of the insertion tendon of the BFlh. The BFsh had a parallel pennate architecture. The muscle fibers of the BFsh were not significantly short compared with the muscle body length (approximately 45%). | Structural parameters of the hamstring muscles The SM, ST, and BFlh were spindle-like and the BFsh was quadrate; all of them were categorized as pennate in architecture. However, they displayed manifest structural differences in the expansion of the epimuscular tendons and the extent of the muscle-tendon junctions, which must be correlated with the functional properties of the muscles. To evaluate the functional properties of these muscles, we measured the size and length of the muscles and fibers to calculate the PCSA, AvCSA, FL-to-ML (FL/ML) ratio, and PCSA-to-AvCSA (PCSA/AvCSA) ratio (Table 1). The muscle volume was maximum in the SM, followed by the BFlh and ST, and was small in the BFsh. However, individual differences were extremely large, as shown by the coefficient of variation (CV) (30.8%-38.2%). The difference in ML among the four muscles was small. The FLs of the SM (5.15 cm) and BFlh (7.01 cm) were relatively short, and those of the ST (15.96 cm) and BFsh (12.33 cm) were relatively long. Individual differences were small, as shown by the CV (7.5%-10.2%). Differences in FL were also found within individual muscles, but these were obviously much smaller than the differences among the four hamstring muscles (Figure 5).
The distal fascicles arising around the tip of the origin tendon were shorter than the proximal fascicles, by 7.0% in the SM and 10.9% in the BFlh, and the distal fascicles in the BFsh were shorter than the proximal ones by 9.9%. The PA showed almost no difference between the measurement sites in the SM (13.82°) and ST (9.20°). TABLE 1: Descriptive structural parameters of the hamstring muscles. | The ratio of the proximal and distal attachment areas of muscle fascicles to the tendon or skeleton In the present study, we observed that the individual hamstring muscles differed in the relative sizes of the proximal and distal attachment areas of their fascicles. We conducted a paired t-test for the proportion of proximal and distal attachment areas and found a significant difference between the proximal and distal attachment areas in the ST, BFlh, and BFsh. | Muscle architecture and shape of hamstrings We clarified the morphology of the attachment areas of the muscle fibers in the hamstrings. The attachment of the muscle fibers was provided by the epimuscular tendons in the SM, ST, and BFlh and by the extramuscular skeletal and tendinous elements in the BFsh. The attachment areas of the proximal and distal sides were arranged on opposite sides of the muscle, with the muscle belly in between. This positional relationship between the proximal and distal attachments has also been recognized in many other muscles and can be called the principle of opposition of the origin and insertion surfaces (Sakai & Kato, 2021). Kellis et al. (2010) and Woodley and Mercer (2005) studied the variation of FL in the individual hamstring muscles, and we confirmed that the differences of FL between the muscles far surpassed those within the muscles (Figure 5), as reported in the dorsal muscles of the scapula (Langenderfer et al., 2006) and in the flexor muscles of the forearm (Liu et al., 2014). The opposing origin and insertion epimuscular tendons or structures are favored to maintain the FLs relatively constant (Sakai & Kato, 2021). The muscle shape was fusiform in the SM, ST, and BFlh and quadrate in the BFsh, and the muscle architecture was pennate in all four muscles. In previous anatomy textbooks, the architecture of fusiform muscles was distinguished from that of pennate muscles, with their muscle fibers described as arranged in longitudinal directions (Moore, 1980; Romanes, 1972; Rosse & Gaddum, 1997; Schaeffer, 1953; Williams, 1995). As shown in the present study, the concept of "fusiform" concerns solely the shape of the muscle and not its architecture, whereas the concept of "pennate" concerns solely the architecture of the muscle and not its shape. | Normalization of FL by SL In the normalization of FL, the optimal SL was usually assumed to be 2.72 μm based on the study by Walker and Schrodt (1974), who measured the SL in several muscles using transmission electron microscopy without verifying the difference in SL among individual muscles. In fact, the SLs were considerably different among muscles, as reported by Cutts (1988), who showed variation in SL in 10 lower-limb muscles between 1.970 and 3.007 μm. FIGURE 6 The ratio of the proximal and distal attachment areas of muscle fibers on the epimuscular tendons or extramuscular skeletal structures to the whole muscle areas, evaluated on photographs of isolated muscle specimens. Differences of the area ratio to the muscle surface are tested using a paired t-test. *p < 0.05. BFlh, biceps femoris long head; BFsh, biceps femoris short head; SM, semimembranosus; ST, semitendinosus.
Thereafter, variations in SL among muscles were frequently shown in studies calculating the PCSA: 2.66-3.07 μm in the three muscles posterior to the scapula (Langenderfer et al., 2006), 2.12-3.31 μm in 28 muscles of the lower limbs (Ward et al., 2009), 2.52-2.77 μm in 3 muscles of the pelvic floor (Alperin et al., 2014), 2.16-2.78 μm in 5 muscles of the forearm (Liu et al., 2014), 1.98-3.64 μm in 35 cervical muscles (Borst et al., 2011), 2.14-3.57 μm in 11 trunk muscles (Bayoglu et al., 2017), and 2.45-2.85 μm in 8 hip muscles (Parvaresh et al., 2019). These studies strongly indicate the possibility that the optimal SLs differ among muscles. Furthermore, recent studies have shown that SL is longer in cases of cerebral palsy (Lieber & Friden, 2019; Mathewson et al., 2014, 2015; Smith et al., 2011) and that the SL and sarcomere number decreased in contracted muscles after apoplexy (Adkins et al., 2021), indicating probable SL change in adaptation to muscle activity. The currently available information does not support the assumption that the optimal SLs are single and constant among the different muscles in the human body. In the present study, we therefore normalized the FL using the average of the measured SLs for each individual muscle. | Relevance of quantitative measurements The structural parameters were measured in three hamstrings to calculate the PCSA and other muscle parameters in the present study. Similar data reported in previous studies (Charles et al., 2019; Kellis et al., 2012; Ward et al., 2009) were too diverse to obtain standard values for individual muscles. The diversity of structural parameters is known to be correlated to some extent with individual differences affected by age, sex, and race, but possibly also with differences in the employed methods, such as direct measurement on cadaveric specimens or virtual measurement on MRI. The muscle volume and PCSA were significantly variable among individuals, as indicated by the large coefficients of variation (30.8%-38.2% for muscle volume and 29.2%-35.5% for PCSA) compared with the ML and FL (7.5%-11.2% for ML and 7.5%-10.2% for FL). The PCSA would be inappropriate as a general index of the functional characteristics of individual muscles but useful as a specific index of the degree of development and muscle strength in individuals. Muscle hypertrophy after exercise is known to result from the enlargement of individual muscle fibers (Farup et al., 2012). The PCSA/AvCSA ratio calculated in this study as PCSA divided by AvCSA is theoretically 1.0 in a perfect fusiform muscle with FL equal to ML, and much larger in a pennate muscle with FL shorter than ML, and would be useful as a general index of the functional characteristics of individual muscles. The PCSA/AvCSA ratio in the hamstring muscles was relatively constant among individuals (Figure 7h), as indicated by its much smaller CV (6.7%-14.8%) than that of the PCSA (29.2%-35.5%), and is expected to be a specific value for the functional properties of the muscle, as is FL/ML. | Functional characteristics of individual hamstrings predicted by the architecture Muscle strain in the hamstrings is most frequent in the BFlh, especially at the proximal muscle-tendon junction (Askling et al., 2007a, 2007b). In the present study, with anatomical analysis of isolated muscle specimens, we confirmed quantitatively, after morphometry, that the muscle-tendon junction area at the proximal side of the BFlh was smaller compared with the distal side and with the other hamstrings.
If we may reasonably premise that every muscle fiber spans between the proximal and distal attachments, it would logically be concluded that the concentration of muscle fibers at the proximal side of the BFlh far exceeds that at the distal side of the BFlh and in the other hamstrings, a condition indicating susceptibility to muscle strain at this muscle-tendon junction. AUTHOR CONTRIBUTIONS Conceptualization, KT and TS; cadaver preparation, KI; data curation, KT, TS, and KK; data validation, KT and TS; and manuscript writing, KT and TS. ACKNOWLEDGMENTS The authors thank the individuals who donated their bodies to Juntendo University School of Medicine. This study was made possible by the selfless gifts of these donors. CONFLICT OF INTEREST STATEMENT The authors declare no potential conflict of interest. DATA AVAILABILITY STATEMENT On reasonable request, the data supporting the findings of the present study are available from the corresponding authors.
6,287.4
2023-03-13T00:00:00.000
[ "Biology" ]
Use of SSU/MSU Satellite Observations to Validate Upper Atmospheric Temperature Trends in CMIP5 Simulations The tropospheric and stratospheric temperature trends and uncertainties in the fifth Coupled Model Intercomparison Project (CMIP5) model simulations for the period 1979-2005 have been compared with satellite observations. The satellite data include those from the Stratospheric Sounding Units (SSU), Microwave Sounding Units (MSU), and the Advanced Microwave Sounding Unit-A (AMSU). The results show that the CMIP5 model simulations reproduced the common stratospheric cooling (-0.46 to -0.95 K/decade) and tropospheric warming (0.05-0.19 K/decade) features, although a significant discrepancy was found among the individual models selected. The changes of global mean temperature in the CMIP5 simulations are highly consistent with the SSU measurements in the stratosphere, and the temporal correlation coefficients between observations and model simulations vary from 0.6 to 0.99 at the 99% confidence level. At the same time, the spread of the temperature mean in the CMIP5 simulations increases from the stratosphere to the troposphere. Multiple linear regression analysis indicates that the temperature variability in the stratosphere is dominated by radiatively active gases, volcanic events, and solar forcing. Generally, the high-top models show better agreement with observations than the low-top models, especially in the lower stratosphere. The CMIP5 simulations underestimated the stratospheric cooling in the tropics and overestimated the cooling over the Antarctic compared to the satellite observations. The largest spread of temperature trends in the CMIP5 simulations is seen in both the Arctic and Antarctic areas, especially in the stratospheric Antarctic. Introduction As an important aspect of climate change, the vertical structure of temperature trends from the troposphere to the stratosphere has received a great deal of attention in the climate change research community [1-9]. The World Climate Research Program (WCRP) has made a concerted effort to understand variability in the stratosphere based on climate model simulations [10]. Compared to the third phase of the Coupled Model Intercomparison Project (CMIP3), many models of the fifth phase of the Coupled Model Intercomparison Project (CMIP5) raised the model top above 1 hPa to improve the representation of atmospheric change in the upper layers of the coupled climate models [10,11]. However, the reliability of these new simulations in the middle troposphere and stratosphere is still not completely clear [12]. Also, many of the climate models do not include all the physical and chemical processes necessary for simulating the stratospheric climate [11]. Santer and his co-authors compared CMIP5 model simulations with satellite observations to conduct attribution studies on atmospheric temperature trends [13,14], and they found that CMIP5 models underestimate the observed cooling of the lower stratosphere and overestimate the warming of the troposphere. As a consequence, evaluating climate model results with integrated observational datasets is necessary to understand the model capabilities and limitations in representing long-term climate change and short-term variability.
The satellite observations from the Stratospheric Sounding Units (SSU), Microwave Sounding Units (MSU), and the Advanced Microwave Sounding Unit-A (AMSU-A) provide key assessments of climate change [15-19] and of the performance of climate model simulations [7,14,20]. The National Oceanic and Atmospheric Administration (NOAA) Center for Satellite Applications and Research (STAR) developed both MSU/AMSU-A and SSU temperature time series [17-19,21] that can be used to validate climate model simulations. These are the only data available that can provide near-global temperature information over multidecadal periods from the middle troposphere up to the upper stratosphere (50 km).

In this study, an intercomparison of the temperature trends from the middle troposphere to the upper stratosphere between satellite observations and the CMIP5 simulations was conducted. The goal is to understand the uncertainties and deficiencies in estimating temperature trends in the CMIP5 simulations. Section 2 describes the data sets and methodologies. The temporal analysis of the global mean temperature and the spatial variation of the global temperature trend are presented in Sections 3 and 4, respectively. Lastly, in Sections 5 and 6, multiple linear regression analysis is performed to discuss the results, and conclusions are drawn.

Data and Methodology

In this study, the temperature trends from the middle troposphere to the upper stratosphere in the CMIP5 climate model simulations are assessed against the SSU and MSU temperature data records. All data sets span the period from 1979 through 2005.

SSU/MSU Data Sets

The NOAA/STAR SSU Version 2 dataset developed by Zou [16] is used in this study. The SSU is a three-channel infrared (IR) radiometer designed to measure temperatures in the middle to upper stratosphere (Figure 1), in which SSU1 (channel 1) peaks at 32 km, SSU2 (channel 2) peaks at 37 km, and SSU3 (channel 3) peaks at 45 km. The MSU/AMSU-A temperature dataset was created at STAR by Zou [15]. Three of the MSU/AMSU-A channels extend from the middle troposphere to the lower stratosphere (Figure 1): MSU2 (MSU channel 2 merged with AMSU-A channel 5) peaks at 6 km, MSU3 (MSU channel 3 merged with AMSU-A channel 7) peaks at 11 km, and MSU4 (MSU channel 4 merged with AMSU-A channel 9) peaks at 18 km. Similar to the previous study [3], the model temperatures on pressure levels are converted to SSU/MSU layer-averaged brightness temperatures by applying weighting functions to the averaged vertical profile at each model grid point. The SSU/MSU weighting functions are normalized as shown in Figure 1. It is worth noting that using a single weighting function over the globe has some limitations, because the weighting functions depend significantly on latitude, and the peak levels over the poles differ from those in the tropics. The best approach would probably be to use a fast radiative transfer model to convert the pressure-level data to the SSU/MSU layers. However, based on previous studies [3,11,12], these limitations are not critical for the estimation of the temperature trends. In addition, one main problem with these satellite data is discontinuities in the time series, because data from 13 different satellites have been used since 1979. Several corrections have been made to compensate for radiometric differences, tidal effects associated with orbit drifts [22,23], changes of the vertical weighting functions due to atmospheric CO2 changes [24], and long-term drift in the local time of measurements. So, errors associated with trend estimates are due to the uncertainties in the successive SSU adjustments and time continuity.
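The layer conversion just described can be illustrated with a short sketch. All numbers below are invented placeholders (the real normalized SSU/MSU weighting functions are distributed by NOAA/STAR); the sketch shows only the weighted vertical averaging step:

```python
import numpy as np

# Placeholder pressure levels (hPa) and a model temperature profile (K);
# a real application would use the model's own levels at each grid point.
p_levels = np.array([700.0, 500.0, 300.0, 100.0, 50.0, 10.0, 5.0, 1.5])
t_profile = np.array([269.0, 253.0, 229.0, 205.0, 212.0, 228.0, 240.0, 255.0])

# Placeholder channel weighting function sampled on the same levels.
w = np.array([0.01, 0.03, 0.08, 0.28, 0.27, 0.18, 0.10, 0.05])

def layer_brightness_temperature(temps, weights):
    """Weighted vertical average of a temperature profile.

    Mirrors the normalization of the SSU/MSU weighting functions:
    the weights are rescaled to sum to one before averaging.
    """
    weights = weights / weights.sum()
    return float(np.sum(weights * temps))

print(f"{layer_brightness_temperature(t_profile, w):.1f} K")
```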
CMIP5 Simulations

We use the IPCC models at the Program for Climate Model Diagnosis and Intercomparison (PCMDI) [8]. The datasets used for this study are the historical runs of the 35 available models, including 11 high-top models (model tops above 1 hPa), enabling comparisons with the highest-altitude SSU data for the period 1979-2005. Table 1 lists information for all 35 models used in this study. Further details, together with access information, can be obtained at the following website [25]. The "historical" (HIS) run (1850-2005) is forced by past atmospheric composition changes (reflecting both anthropogenic and natural sources), including time-evolving land cover. In addition, three types of experiments including pre-industrial control runs (PI), greenhouse-gas-only (GHG) runs, and natural-forcing-only (NAT) runs have been analyzed.

Methodology

To facilitate the intercomparison, all data are first interpolated to the same horizontal resolution of 5 degrees in longitude and latitude; then the temperatures on the pressure levels in CMIP5 are converted to the equivalent brightness temperatures of the six SSU/MSU layers (SSU3, SSU2, SSU1, MSU4, MSU3, MSU2) based on the vertical weighting functions of the SSU/MSU measurements in Figure 1 (the weighting functions were taken from the NOAA/STAR website). Clearly, only the 11 high-top models can be compared with the highest layers (SSU3, SSU2) of the satellite data. Similar to the processing in the previous study [4], the six SSU/MSU channels represent the temperature of broad layers from the middle-tropospheric MSU2 (peak at 6 km) to the upper-stratospheric SSU3 (peak at 45 km).

Taylor's [26] diagram and the ratio of signal to noise are used to evaluate the performance of the models. Linear least-squares fitting is used to estimate the temperature trends. The model trend uncertainty is measured by the ensemble spread, which is defined as the standard deviation among the CMIP5 climate model simulations. Multiple linear regression was performed to analyze model performance in the stratosphere and troposphere.
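As a minimal illustration of the least-squares trend estimation just described (the monthly anomaly series is synthetic, standing in for one SSU/MSU channel over 1979-2005):

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(324)  # 27 years x 12 months, 1979-2005

# Synthetic anomalies with an imposed cooling of -0.85 K/decade plus noise.
anomalies = -0.85 * months / 120.0 + 0.2 * rng.standard_normal(months.size)

# Least-squares linear fit; the slope is per month, so multiplying by
# 120 months expresses the trend in K/decade as quoted in the text.
slope, intercept = np.polyfit(months, anomalies, 1)
print(f"trend = {slope * 120.0:+.2f} K/decade")
```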
Global Mean Temperature

Figure 2 shows the time series of global mean temperature anomalies of the satellite observations (thick blue line) and the CMIP5 simulations for the six SSU/MSU layers. The SSU channels (Figure 2a-c) indicate a cooling temperature trend at a rate of approximately −0.85 K/decade in the upper stratosphere (SSU3), and MSU channel 4 shows −0.38 K/decade in the lower stratosphere (MSU4). It is clear that all three SSU channels and MSU4 demonstrate strong anomalies during the 1982-1983 and 1991-1992 periods, which are attributed to the volcanic eruptions of El Chichón (1982) and Mt. Pinatubo (1991). In comparison, the troposphere (Figure 2e,f) shows a weak warming at a rate of +0.07 K/decade in MSU3 and +0.14 K/decade in the middle troposphere (MSU2).

For the CMIP5 simulations, all the models are able to capture the observed temperature variability in the upper and lower stratosphere (Figure 2a-d), except that some models demonstrate a larger, short-lived warming than observed following the Mount Pinatubo eruption in 1991-1992. This overestimation of the temperature response arises mainly because the observed decreases in ozone concentrations following major volcanic eruptions were not included in the forcing data set, which is why most models overestimate the stratospheric temperature response to volcanic eruptions, especially Pinatubo [11]. The model ensemble mean (MME: thick black line) is very similar to the observations in the stratosphere. The major differences between the model ensemble mean and the observations are that the ensemble mean overestimates the cooling in the SSU3 channel and underestimates it in SSU1, where the difference reaches 0.19 K/decade. It should be noted that some models (IPSL-CM5A-LR, IPSL-CM5A-MR, IPSL-CM5B-LR, CMCC-CESM, and INMCM4) cannot even reproduce the strong volcanic anomalies in the stratosphere during the 1982-1983 and 1991-1992 periods, because they did not include volcanic aerosols.

In contrast, in the troposphere the CMIP5 models show an obvious discrepancy from the MSU observations (Figure 2e,f), where the multi-model ensemble mean cannot reproduce the temperature variability. Some models even show a phase of variability opposite to the MSU observations, but there is no large difference in the global mean temperature trend between models and observations.
Consistencies between Simulations and Observations

The evaluation of the global mean temperatures in the CMIP5 simulations against the SSU/MSU observations is accomplished through the Taylor diagram. The Taylor diagram is a convenient way of evaluating a model's performance against observations using three related parameters: the correlation with the observed data, the centered root-mean-square (RMS) difference, and the standard deviation. Models with as much variance as the observations, the largest correlation, and the least RMS error are considered the best performers in the Taylor diagram. In the stratosphere (Figure 3), the correlation coefficients between SSU/MSU and the CMIP5 climate models are large, ranging from 0.60 in the lower stratosphere to 0.95 in the middle to upper stratosphere. This reflects the strong consistency in global mean stratospheric temperature between the CMIP5 climate model simulations and the observations. The results showed that the 11 high-top models (triangle symbols) have higher correlation coefficients and a more centralized distribution than the low-top models in the lower stratosphere. It is worth noting that although the high-top model CMCC-CESM does not include volcanic aerosol in the stratosphere during the 1982-1983 and 1991-1992 periods, its high correlation coefficients in SSU3, SSU2, SSU1, and MSU4 indicate better agreement with observations in the stratosphere, quite unlike the poor simulations by low-top models found in many previous studies [27-29]. It is obvious that the model lid height plays a very crucial role in the simulation of stratospheric temperature variability.
In contrast, the correlations of the CMIP5 model simulations with the MSU observations in the troposphere are sharply reduced, which indicates that the CMIP5 simulations in the troposphere are in worse agreement with observations than those in the stratosphere. The model CMCC-CESM shows a negative correlation with the MSU observations in the two MSU channels, whereas the CanCM4 simulations show much better agreement in the troposphere than their counterpart climate models. Additionally, the models have a more centralized distribution in the stratosphere than in the troposphere, implying that most of the models agree better in the stratosphere.

Results from the Taylor diagram suggest that models show better agreement with observations and smaller intermodel discrepancy in the stratosphere than in the troposphere, especially for the high-top models. The lower correlation in the troposphere arises largely because tropospheric temperature variations mostly reflect unforced internal variability, and free-running coupled ocean-atmosphere models will never capture the timing of that variability. Further research is therefore needed to understand the forcing of internal variability and the principles of the coupled ocean-atmosphere system in order to improve the performance of the CMIP5 models in the troposphere.
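The three Taylor-diagram parameters can be computed as in the following sketch; the observation and model series here are synthetic stand-ins for one SSU channel and one model:

```python
import numpy as np

def taylor_stats(obs, sim):
    """Correlation, standard deviation ratio, and normalized centered RMS
    difference: the three related parameters plotted on a Taylor diagram."""
    corr = float(np.corrcoef(obs, sim)[0, 1])
    std_ratio = float(sim.std() / obs.std())
    centered = (sim - sim.mean()) - (obs - obs.mean())
    crmsd = float(np.sqrt(np.mean(centered ** 2)) / obs.std())
    return corr, std_ratio, crmsd

rng = np.random.default_rng(1)
obs = np.cumsum(0.02 * rng.standard_normal(324))  # synthetic "observation"
sim = obs + 0.1 * rng.standard_normal(324)        # synthetic "model"

corr, std_ratio, crmsd = taylor_stats(obs, sim)
print(f"r = {corr:.2f}, sigma ratio = {std_ratio:.2f}, cRMSD = {crmsd:.2f}")
# An ideal model has r = 1.0 and a standard deviation ratio of 1.0.
```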
Uncertainty Analysis in Model Simulations

To quantitatively reveal the spread and convergence of the climate models in reproducing the stratospheric and tropospheric temperatures, the signal-to-noise ratio is evaluated using the method of Zhou [30]. The ensemble mean is given by the ensemble average of all model simulations, that is,

x_e(t) = (1/N) Σ_{n=1}^{N} x(n, t),    (1)

where N is the total number of models, x is the global mean time series, and x(n, t) represents the simulation of the nth model at year t of a time series of length T. The climatological mean is defined by

x_c = (1/T) Σ_{t=1}^{T} x_e(t).    (2)

The standard deviation σ_e of x_e(t) is used to measure the modelled temperature response to the external signal. The dispersion of the simulations (measured by σ_i, the standard deviation of x(n, t) across models) indicates intermodel variability, which is noise for the climate reproduction. From the definitions of the two averages, we have

σ_e² = (1/T) Σ_{t=1}^{T} [x_e(t) − x_c]².    (3)

The signal-to-noise ratio is defined as S/N = σ_e/σ_i. Here we compute the S/N ratio for the period 1979-2005 from the stratosphere to the troposphere (a code sketch of this computation follows at the end of this section). Analysis of S/N (Table 2) indicates that the stratosphere stands out as having a much higher S/N (3.53-57.98) than the troposphere (0.48-1.36), i.e. the forced signal is much larger than the intermodel noise, especially for SSU3. The S/N ratio drops sharply from SSU2 (28.31) to SSU1 (5.26), which is partially due to the inclusion of the low-top models increasing the intermodel noise.

Trend Changes with Vertical Level

The vertical profile of the global mean temperature trends (Figure 4) shows that the CMIP5 models' cooling rates in the upper stratosphere (Figure 4a) are less than the SSU observations, especially in the SSU1 channel. Also, the low-top models underestimate the stratospheric cooling trend more strongly than the high-top models do.

The crossover points identify the transition from tropospheric warming to stratospheric cooling. It is obvious that the crossover points of most of the low-top models are higher than the corresponding crossover points from the MSU observations and the high-top models. On the other hand, the ensemble spread among the CMIP5 model simulations (Figure 4b) is generally between 0.04 and 0.1 K/decade from the middle troposphere to the upper stratosphere, with the maximum spread appearing at the SSU1 level, which is mainly due to the low-top models.

The above results clearly show that the high-top models have better consistency with the SSU/MSU observations than the low-top models in global mean temperature variability.
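The following sketch implements the S/N computation defined above on a synthetic ensemble; taking the time-mean of the intermodel standard deviation for σ_i is our reading of the method of Zhou [30]:

```python
import numpy as np

def signal_to_noise(x):
    """S/N for an ensemble x of shape (N models, T years)."""
    x_e = x.mean(axis=0)                          # ensemble mean x_e(t)
    x_c = x_e.mean()                              # climatological mean x_c
    sigma_e = np.sqrt(np.mean((x_e - x_c) ** 2))  # forced signal, Eq. (3)
    sigma_i = x.std(axis=0).mean()                # intermodel noise
    return sigma_e / sigma_i

rng = np.random.default_rng(2)
years = np.arange(27)                             # 1979-2005
forced = -0.08 * years                            # common forced cooling
ensemble = forced + 0.05 * rng.standard_normal((35, years.size))
print(f"S/N = {signal_to_noise(ensemble):.1f}")   # signal dominates the noise
```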
Trend Changes with Latitude

To better understand the spatial pattern of the CMIP5 model simulations, latitude profiles of the temperature trend have been plotted to facilitate comparing the differences between the datasets. The results (Figure 5) indicate that the linear trends are highly sensitive to the latitude of interest. All exhibit predominant cooling in the stratosphere and warming in the troposphere, except for the southern high latitudes. Also, there is an extremely strong cooling trend in the three SSU channel observations over the tropics and the Arctic.

For the upper stratosphere (Figure 5a-c), a distinguishing difference between the CMIP5 simulations and the SSU observations is found over the tropics, where the observed cooling rates reach −1.2 K/decade in the SSU2 layer (Figure 5b), approximately 0.5 K/decade stronger than the value in the CMIP5 models. In addition, the cooling trend shows a sharp gradient from high to low latitude in the SSU observations.
For the layer from the middle troposphere to the lower stratosphere (Figure 5d-f), the MSU data show trends consistent with the CMIP5 models, except that some models show more cooling than observed at MSU4 over Antarctica. An important fact worth noting is that cooling was found in the troposphere over Antarctica, with a maximum cooling trend of approximately −1.8 K/decade in the upper troposphere. At the same time, warming trends have been observed over the tropics and the whole Northern Hemisphere. This layer also displays a substantial temperature difference between the Antarctic and the rest of the areas.

It is obvious that the cooling trends of the stratospheric temperature change markedly with latitude, and the largest trend is found in the tropical and Arctic latitudes. In contrast, the warming trend increases with latitude from south to north in the troposphere, but the spread retains a small value except for both polar areas. Conversely, the south-to-north latitudinal cooling trend in the stratosphere decreases. To first order (linearly), the tropospheric warming by latitude is offset by stronger latitudinal cooling in the stratosphere, indicating that the atmosphere adjusts to surface and tropospheric heating to maintain radiative balance.

Distribution of Longitude-Latitude Trend Spread

Figure 6 displays the latitudinal-longitudinal variation of the spread of the global temperature trends for 1979-2005 for the layers from the middle troposphere (6 km height) to the upper stratosphere (45 km height). The results (Figure 6) indicate that the spreads are highly sensitive to the latitude of interest, ranging from 0.05 K/decade in tropical and subtropical areas to 0.5 K/decade in the southern polar region. The locations of the largest spread change with vertical level; the largest spread in the lower stratosphere (SSU1, MSU4) exceeds 0.5 K/decade over Antarctic areas, which is mainly due to some models significantly overestimating the cooling. In contrast, there are some remarkable discrepancies in the tropical regions in the MSU4 layer, especially in the central Pacific region. It is worth noting that the spread of the CMIP5 simulations in the middle troposphere (MSU2) remains relatively small at all latitudes; the smaller spread in MSU2 reflects the high consistency of the CMIP5 simulations at all latitudes.

To summarize, the cooling trends of the stratospheric temperature change markedly with latitude; the largest trend is found in the tropics-subtropics, but the largest spread is found in the south polar region. In contrast, the warming trend increases with latitude from south to north in the troposphere, but the spread retains a small value except for both polar areas. Conversely, the cooling trend decreases with latitude from south to north in the stratosphere, consistent with latitudinal radiative balance.

Discussion

According to the above analysis, there is one point worth noting: all selected CMIP5 models showed a much higher correlation with the SSU/MSU observations and higher intermodel consistency for the stratosphere than for the global mean temperature in the troposphere.

In order to understand the possible reasons for the difference between the stratosphere and troposphere, the models' responses to different forcings and their internal variability are investigated. In particular, three types of simulations are analyzed: pre-industrial control runs (piControl: PI), greenhouse-gas-only (GHG) runs, and natural-forcing-only (historicalNat: NAT) runs. Only two of the 11 high-top models, MIROC-ESM-CHEM and MRI-CGCM3, include all three types of simulations and thus are analyzed here.
The time series of global mean temperature anomalies for the SSU3 and MSU2 observational layers in the GHG-only experiment, the natural-only forcing run, and the pre-industrial control run are shown in Figure 7. In SSU3, there is no significant trend in the natural-only forcing run or the pre-industrial control run; stratospheric cooling can only be reproduced in the model with anthropogenic forcing. Both models underestimated the cooling in the GHG-only experiment (0.91 ± 0.02 K/decade) compared to the all-forcing historical simulations (MRI-CGCM3: 0.98 ± 0.02 K/decade and MIROC-ESM-CHEM: 0.96 ± 0.02 K/decade); ozone forcing is not included because the dataset is unavailable. Solar variability and volcanic eruptions can easily be identified in the natural-only forcing run. Similar results are obtained for the other stratospheric observational layers (Figure 8).

In comparison, in MSU2 a warming trend can only be detected in the GHG-only experiment, the difference in the trend between the two models (MRI-CGCM3: 0.15 ± 0.02 K/decade and MIROC-ESM-CHEM: 0.09 ± 0.03 K/decade) is bigger than for the SSU3 simulation results, and again no significant trend was observed in the natural-only forcing run or the pre-industrial control run.

To quantitatively link the different model performances in the stratosphere and troposphere with the internal variability of the climate models and the model response to a given external forcing, multiple linear regression analyses [31] were performed for the all-forcing historical runs using the GHG-only experiment, the natural-only forcing run, and the pre-industrial control run as regressors. Multiple linear regression models the relationship between two or more explanatory variables and a response variable by fitting a linear equation.
The regression model is

Y = b_0 + b_1 x_1 + b_2 x_2 + … + b_p x_p + ε,

where Y is the dependent variable, (x_1, x_2, …, x_p) is a set of p explanatory variables, b_0 is the constant term, and b_1 to b_p are the coefficients relating the p explanatory variables. A regression coefficient that is significantly greater than zero indicates a detectable response to the forcing concerned. Here, we assume that the model output may be written as a linear sum of the simulated responses to the individual forcings (GHG, NAT) and internal variability, each scaled by a regression coefficient, plus residual variability.

The regression coefficients are shown in Figure 9. The multiple linear regression with GHG, NAT, and PI can accurately reproduce the amplitude and phase of the historical runs in the stratosphere (Figure 10). In the troposphere, the multiple linear regression results have their own amplitude and phase variability, which does not match the historical runs in any detail. The results suggest that the stratospheric temperature is dominated by external forcing and the response to it [32]. This differs from the troposphere, where the temperature variability is driven by both internal variability and external forcing, and the model response to external forcing is nonlinear. That is why all selected CMIP5 models showed a much higher correlation with the SSU/MSU observations and higher intermodel consistency for the stratosphere than for the global mean temperature in the troposphere.
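A sketch of this regression setup, with synthetic series standing in for the GHG, NAT, and PI runs and for the historical run (the recovered coefficients correspond to b_0 to b_3 in the linear model above):

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(27, dtype=float)

# Synthetic stand-ins for the single-forcing runs and internal variability.
ghg = -0.09 * years + 0.04 * rng.standard_normal(27)              # GHG-only cooling
nat = 0.3 * np.sin(years / 2.0) + 0.04 * rng.standard_normal(27)  # solar/volcanic
pi = 0.05 * rng.standard_normal(27)                               # control run

# Synthetic "historical" run as a scaled sum of the pieces plus residual.
hist = 1.0 * ghg + 0.8 * nat + 1.0 * pi + 0.03 * rng.standard_normal(27)

# Design matrix [1, GHG, NAT, PI]; least squares recovers b0..b3.
X = np.column_stack([np.ones_like(years), ghg, nat, pi])
coeffs, *_ = np.linalg.lstsq(X, hist, rcond=None)
print(np.round(coeffs, 2))  # a coefficient well above zero flags a detectable response
```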
According to the above analysis, a final important point to emphasize is that, although all selected CMIP5 model simulations showed high correlation with the SSU/MSU observations in the stratosphere compared to the global mean temperature in the troposphere, these models failed to reproduce the latitude-longitude pattern of the temperature trends. Moreover, most of the CMIP5 models lack some of the necessary physical processes. For example, most of the selected CMIP5 models do not include a chemistry model in the stratosphere; only MIROC-ESM-CHEM and CESM1-WACCM include chemistry, and a chemistry model is recognized as a very important component for reproducing the true atmosphere [33,34].

Conclusions

Based on the satellite SSU and MSU temperature observations from 1979 through 2005, the trends and uncertainties in the CMIP5 model simulations from the middle troposphere to the upper stratosphere (5-50 km) have been examined. The results are summarized as follows:

The CMIP5 model simulations reproduced the common feature of cooling in the stratosphere and warming in the troposphere, but the trends exhibit remarkable discrepancies among the selected models. The simulated cooling rate is higher than the SSU3 measurements in the upper stratosphere and less than the SSU measurements in the lower stratosphere.
Regarding the temporal variation of the global mean temperature, the CMIP5 model simulations reproduced the volcanic signal well and were highly correlated with the SSU measurements in the upper stratosphere during the study period. However, these models have lower temporal correlation with observations in the middle-upper troposphere.

Regarding the regional variation of the global temperature trends, the CMIP5 simulations displayed a different latitudinal pattern compared to the SSU/MSU measurements in all six layers from the middle troposphere to the upper stratosphere.

Generally, the high-top models show better agreement with observations than the low-top models, especially in the lower stratosphere. The temperature trends and spread show marked changes with latitude; the greatest cooling is found in the tropics in the upper stratosphere, and the greatest warming appears in the Arctic in the middle troposphere. The CMIP5 simulations underestimated the stratospheric cooling in the tropics compared to the SSU observations and remarkably overestimated the cooling in the Antarctic from the upper troposphere to the lower stratosphere (MSU3-SSU3). The largest trend spread among the CMIP5 simulations is seen in both the Arctic and the Antarctic in the stratosphere and troposphere, and the CMIP5 models retain similar spread values in the tropics in both the troposphere and stratosphere.

Figure 1. The weighting functions for the satellite Microwave Sounding Unit (MSU) and the Stratospheric Sounding Unit (SSU).
Figure 2. Global temperature anomaly time series (K) for the period 1979-2005 at (a) SSU3, (b) SSU2, (c) SSU1 (SSU1-SSU3 denote the SSU observational layers) and (d) MSU4, (e) MSU3, (f) MSU2 (MSU2-MSU4 denote the MSU observational layers). The thick blue line represents the STAR observations; gray and orange lines indicate the low-top and high-top CMIP5 models, respectively; the thick black line represents the multi-model ensemble mean (MME).

Figure 3. Taylor diagram for the time series of observed and simulated global mean temperature. Normalized standard deviation, correlation coefficient, and root-mean-square deviation (RMSD) are presented in one diagram. An ideal model has a standard deviation ratio of 1.0 and a correlation coefficient of 1.0. Triangle symbols represent high-top models (correlations greater than 0.14 are statistically significant at the 99% confidence level).
Figure 4. Vertical profile of global mean temperature trends and spread among the data sets for the CMIP5 model simulations (SSU1-3 and MSU2-4 denote the SSU/MSU observational layers; unit: K/decade). (a) Vertical profile of temperature trends; (b) spread among models.

Figure 6. Spatial distribution of the spread of temperature trends in the CMIP5 model simulations (SSU1-3 and MSU2-4 denote the SSU/MSU observational layers; unit: K/decade).

Figure 7. Global mean temperature anomalies for the SSU3 and MSU2 observational layers in the GHG-only historical experiment, the natural-only forcing run, and the pre-industrial control run.
Figure 9. Regression coefficients derived from the regression of the historical global mean temperature time series against the greenhouse-gas-forced run (GHG), the natural-forced run (NAT), and the pre-industrial control run (PI) (error bars: 95% confidence intervals for the coefficient estimates).

Figure 10. Global mean temperature anomalies simulated by MRI-CGCM3 at SSU1-SSU3 and MSU2-4. The red dashed line represents the historical run (Hist); the blue line indicates the multiple linear regression results (Reg) with the greenhouse-gas-forced run (GHG), the natural-forced run (NAT), and the pre-industrial control run (PI).

Table 1. The CMIP5/IPCC data sets and selected information.

Table 2. Signal-to-noise ratio (S/N) for stratospheric and tropospheric air temperature.
10,446.8
2015-12-24T00:00:00.000
[ "Environmental Science", "Physics" ]
Forbidden vector-valued intersections

We solve a generalised form of a conjecture of Kalai motivated by attempts to improve the bounds for Borsuk's problem. The conjecture can be roughly understood as asking for an analogue of the Frankl-R\"odl forbidden intersection theorem in which set intersections are vector-valued. We discover that the vector world is richer in surprising ways: in particular, Kalai's conjecture is false, but we prove a corrected statement that is essentially best possible, and applies to a considerably more general setting. Our methods include the use of maximum entropy measures, VC-dimension, Dependent Random Choice and a new correlation inequality for product measures.

Introduction

Intersection theorems have been a central topic of Extremal Combinatorics since the seminal paper of Erdős, Ko and Rado [9], and the area has grown into a vast body of research (see [2], [4] or [19] for an overview). The Frankl-Rödl forbidden intersection theorem is a fundamental result of this type, which has had a wide range of applications to different areas of mathematics, including discrete geometry [12], communication complexity [28] and quantum computing [6]. To state their result we introduce the following notation. For k, t ∈ [n] and a family A of k-element subsets of [n], let A ×_t A be the set of all (A, B) ∈ A × A with |A ∩ B| = t.

Although the bounds from Conjecture 1.2 in general do not hold, it is still natural to ask whether we can find any (t, w)-intersection in such 'exponentially dense' subsets A ⊂ [n]^{k,s}. If so, what is the optimal lower bound on |A ×_{(t,w)} A|? This paper investigates these questions; in particular, we give a natural correction to Conjecture 1.2. Our results will apply to the following more general setting of vector-valued set 'sizes': given vectors V = (v_i : i ∈ [n]) in R^D, we define the V-size of A ⊂ [n] by V(A) = Σ_{i∈A} v_i. We note that the Frankl-Rödl theorem concerns V-sizes where D = 1 and all v_i = 1, and the Kalai conjecture concerns V-sizes where D = 2 and v_i = (1, i).

Vector-valued intersections

In order to prove our forbidden V-intersection theorem, we need to work over a general alphabet, where we associate a vector with each possible value of each coordinate, as follows. We also introduce a class of norms on R^D to account for the possibility that different coordinates of vectors in V may operate at different scales. In the following definition we think of R as a scaling; e.g. for the Kalai vectors (1, i), we take R = (1, n). Our V-intersection theorem requires two properties of the set of vectors V. The first property, roughly speaking, says that any vector in Z^D can be efficiently generated by changing the values of coordinates, and that furthermore this holds even if a small set of coordinates is frozen, so that no coordinate is overly significant. To see why such a condition is necessary, suppose that D = 1 and almost all coordinates have only even values: then there are large families where all intersections have a fixed parity.

Definition 1.7. Let V = (v^i_j) be an (n, J)-array in Z^D. We say that V is γ-robustly (R, k)-generating in Z^D if for any v ∈ Z^D with ‖v‖_R ≤ 1 and T ⊂ [n] with |T| ≤ γn there are S ⊂ [n] \ T with |S| ≤ k and j_i, j'_i ∈ J for all i ∈ S such that v = Σ_{i∈S} (v^i_{j_i} − v^i_{j'_i}).

Note that if V = (v_i : i ∈ [n]), considered as an (n, {0, 1})-array, then Definition 1.7 says that for all such v and T there are disjoint S, S' ⊂ [n] \ T with |S| + |S'| ≤ k such that v = Σ_{i∈S} v_i − Σ_{i∈S'} v_i. We also make the following 'general position' assumption for V.
We say that V is γ'-robustly (γ, R)-generic if for any X ⊂ [n] with |X| > γ'n, some I ⊂ X is (γ, R)-generic for V. We are now in a position to state our main theorem. It shows that, under the above assumptions on V, there are only two obstructions to a set X = ({0,1}^n)^V_z satisfying a supersaturation result as in Kalai's conjecture (case i): either (case ii) there is a small set B_full ⊂ X responsible for almost all w-intersections in X, or (case iii) there is a large set B_empty ⊂ X containing no w-intersections. Furthermore, in case ii we obtain optimal supersaturation relative to B_full.

Theorem 1.9. Suppose V is γ'_i-robustly (γ_i, R)-generic and γ_i-robustly (R, k)-generating for i = 1, 2. Let z, w ∈ Z^D with z ≠ w and let X = ({0,1}^n)^V_z. Then one of the following holds: … iii. There is B_empty ⊂ X with |B_empty| ≥ ⌊(1 − ε)^n |X|⌋ satisfying (B_empty × B_empty)^{V∩}_w = ∅. Furthermore, if ii holds and iii does not then any …

Remark 1.10. i. Theorem 1.9 applies to (t, w)-intersections in [n]^{k,s}, as we have shown above that its hypotheses hold for the Kalai vectors. ii. As indicated above, cases ii and iii of Theorem 1.9 may hold simultaneously (see counterexample 1 of Section 5). iii. The assumption that V is γ_1-robustly (R, k)-generating is redundant, as it is implied by γ_2-robustly (R, k)-generating, but the assumptions of γ'_i-robustly (γ_i, R)-generic for i = 1, 2 are incomparable, and our proof seems to require this 'multiscale general position'.

We have highlighted Theorem 1.9 as our main result for the sake of giving a clean combinatorial statement. However, we will in fact obtain considerably more general results in two directions, whose precise statements are postponed until later in the paper.

• Our most general result, Theorem 6.2, implies cross-intersection theorems for two or more families and applies to families of vectors over any finite alphabet.
• Theorem 1.9 leaves open the question of how many w-intersections are guaranteed in large subsets of X when case (ii) holds; this is answered by Theorem 11.1.

It is natural to ask under which conditions the alternate cases of Theorem 1.9 hold. These conditions are best understood in relation to our proof framework, so we postpone this discussion to Section 1.4, after we have introduced the two principal components of the proof.

A probabilistic forbidden intersection theorem

A key paradigm of our approach is that V-intersection theorems often have equivalent formulations in terms of certain product measures (the maximum entropy measures described in the next subsection), and that the necessary condition for these theorems appears naturally as a condition on the product measures. (A similar idea arose in the new proof of the density Hales-Jewett theorem developed by the first Polymath project [26], although in this case the natural 'equal slices' distribution was not a product measure.) To illustrate this point, we recast the Frankl-Rödl theorem in such terms. Again we identify subsets of [n] with their characteristic vectors in {0,1}^n, on which we introduce the product measure µ_q on ({0,1} × {0,1})^n, where q_{1,1} = t/n, q_{0,1} = q_{1,0} = (k − t)/n and q_{0,0} = (n − 2k + t)/n. It follows from our general large deviation principle in the next subsection (or is easy to see directly in this case) that the hypothesis of Theorem 1.1 is essentially equivalent to µ_p(A) > (1 − δ)^n and the conclusion to µ_q(A ×_t A) > (1 − ε)^n. Furthermore, the assumption on t can be rephrased as q_{j,j'} ≥ ε for all j, j' ∈ {0, 1}, and this indicates the condition that we need in general.
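The recast measure µ_q can be checked mechanically; the following toy sketch (with arbitrary choices of n, k, t) verifies that q is a probability distribution whose marginals give the p-measure of a uniformly random k-set:

```python
from fractions import Fraction

# The coupling measure on {0,1} x {0,1} used to recast the Frankl-Rodl
# theorem: q[(a, b)] is the probability that a coordinate lies in A only,
# in B only, in both, or in neither, for |A| = |B| = k and |A ∩ B| = t.
n, k, t = 12, 4, 2
q = {
    (1, 1): Fraction(t, n),
    (0, 1): Fraction(k - t, n),
    (1, 0): Fraction(k - t, n),
    (0, 0): Fraction(n - 2 * k + t, n),
}

assert sum(q.values()) == 1
# Each marginal is the p-measure of a uniformly random k-set: P(1) = k/n.
assert q[(1, 1)] + q[(1, 0)] == Fraction(k, n)
assert q[(1, 1)] + q[(0, 1)] == Fraction(k, n)
# The condition q_{j,j'} >= eps asks all four entries to be bounded away
# from zero, i.e. t and n - 2k + t must both be of order n.
print({key: str(val) for key, val in q.items()})
```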
Let us formalise the above discussion of product measures in a general context. Although we have only considered the cases where the 'alphabet' J is {0,1} or {0,1} × {0,1}, we remark that it is essential for our arguments to work with general alphabets, as the proofs of our results even in the binary case rely on reductions that increase the alphabet size.

Definition 1.11. Suppose p = (p^i_j : i ∈ [n], j ∈ J) with all p^i_j ∈ [0, 1] and Σ_{j∈J} p^i_j = 1 for all i ∈ [n]. The product measure µ_p on J^n is given, for a ∈ J^n, by µ_p(a) = Π_{i∈[n]} p^i_{a_i}. Given an (n, J)-array V and a measure µ on J^n, we write V(µ) = E_{a∼µ} V(a). Suppose µ_q is a product measure on (Π_{s∈S} J_s)^n, with q = (q^i_{j_1,…,j_S} : i ∈ [n], j_1 ∈ J_1, …, j_S ∈ J_S). For s ∈ [S] the s-marginal of µ_q is the product measure µ_{p_s} on J^n_s with (p_s)^i_j = Σ q^i_{j_1,…,j_S} for all i ∈ [n], j ∈ J_s, where the sum is over all (j_1, …, j_S) with j_s = j. We say that µ_q has marginals (µ_{p_s} : s ∈ S). We say that µ_q is κ-bounded if all q^i_{j,j'} ∈ [κ, 1 − κ]. Note that if µ_q is κ-bounded then so are its marginals.

A rough statement of our probabilistic forbidden intersection theorem (Theorem 1.14 below) is that if A has 'large measure' then the set of w-intersections in A has 'large measure'. We will combine this with an equivalence of measures discussed in the next subsection to deduce our main theorem. First we will highlight two special cases of Theorem 1.14 that have independent interest. The first is the following result, which ignores the intersection conditions and is concerned only with the relationship between the measures of A and A × A; it is a new correlation inequality (see Theorem 7.1 for a more general statement that applies to several families defined over general alphabets).

Theorem 1.12. Let 0 < n^{−1}, δ ≪ κ, ε < 1 and let µ_q be a κ-bounded product measure on …

Next we consider the problem of finding V-intersections that are close to w, which is also natural, and somewhat easier than finding V-intersections that are (exactly) w. We require some notation. Theorem 1.13 naturally fits into the wide literature on forbidden L-intersections in extremal set theory (see [2], [4] or [19]). Here one aims to understand how large certain families of sets can be if all intersections between elements of A are restricted to lie in some set L. For example, the Erdős-Ko-Rado theorem [9] can be viewed as an L_0-intersection theorem for families A of k-element subsets of [n], where L_0 = {l ∈ N : 1 ≤ l ≤ k}. Similarly, Katona's t-intersection theorem [22] can be viewed as an L_{≥t}-intersection theorem for families A ⊂ P[n], where L_{≥t} = {l ∈ N : l ≥ t}. Now we state our probabilistic forbidden intersection theorem: if V is robustly generating then Theorem 1.13 can be upgraded to find fixed V-intersections.

Maximum entropy and large deviations

Next we discuss an equivalence of measures that will later combine with Theorem 1.14 to yield Theorem 1.9. Here we are guided by the maximum entropy principle (proposed by Jaynes [18] in the context of Statistical Mechanics), which suggests considering the distribution with maximum entropy subject to the constraints of our problem, as defined in the following lemma (the proof is easy, and will be given in Section 2).
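Before the formal development, a one-dimensional toy illustration of the maximum entropy principle may help: among product measures on {0,1}^n with a prescribed expected V-size (here D = 1 and v_i = i, one coordinate of the Kalai vectors), the entropy maximiser is an exponential tilt, and the tilting parameter can be found by bisection. This is only a sketch of the principle, not the construction used in the paper:

```python
import math

# Maximum entropy product measure on {0,1}^n with prescribed expected
# V-size, for D = 1 and v_i = i. Lagrange duality gives the tilted
# coordinate probabilities p_i = 1 / (1 + exp(-lam * v_i)); we solve for
# the tilt lam by bisection, since the expectation is increasing in lam.
n = 20
v = list(range(1, n + 1))
target = 60.0  # desired expected sum of the elements of a random set

def expected_size(lam):
    return sum(vi / (1.0 + math.exp(-lam * vi)) for vi in v)

lo, hi = -5.0, 5.0
for _ in range(80):
    mid = (lo + hi) / 2.0
    if expected_size(mid) < target:
        lo = mid
    else:
        hi = mid

lam = (lo + hi) / 2.0
print(f"lambda = {lam:.4f}, E[V] = {expected_size(lam):.2f}")
```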
Let µ = (µ_n)_{n∈N} and µ′ = (µ′_n)_{n∈N}, where µ_n and µ′_n are probability measures on a finite set Ω_n for all n ∈ N. Let F = (F_n)_{n∈N}, where each F_n is a set of subsets of Ω_n. We say that µ′ exponentially dominates µ relative to F, and write µ ⪯_F µ′, if events that are exponentially small under µ′ are exponentially small under µ: for every sequence (F_n) with F_n ∈ F_n, if lim sup_{n→∞} n^{-1} log µ′_n(F_n) < 0 then lim sup_{n→∞} n^{-1} log µ_n(F_n) < 0. We say that µ and µ′ are exponentially contiguous relative to F, and write µ ≈_F µ′, if µ ⪯_F µ′ and µ′ ⪯_F µ. If ∆ = (∆_n)_{n∈N} with each ∆_n ⊂ Ω_n then we write µ ⪯_∆ µ′ if µ ⪯_F µ′, where F_n is the set of all subsets of ∆_n; we define µ ≈_∆ µ′ similarly. Note that ⪯_F is a partial order and ≈_F is an equivalence relation.

The following result establishes the required equivalence of measures under the same hypotheses as in the previous subsection. It can be regarded as a large deviation principle for conditioning x ∈ J^n on the event V(x) = w (see [8] for an overview of this area). To apply Theorem 1.17 under combinatorial conditions, we will use the following lemma, which shows that µ_{p^V_w} is κ-bounded under our general position condition on V. (See also Section 4 for a more general result based on VC-dimension that applies to larger alphabets.)

Alexander Barvinok remarked (personal communication) that results similar to Theorem 1.17 and Lemma 1.18 were obtained by Barvinok and Hartigan in [3]. Theorem 3 of [3] gives stronger bounds on |({0,1}^n)^V_w| where applicable, but their assumptions are very different from ours (they assume bounds for quadratic forms of certain inertia tensors), and they also require that the vectors all operate at the 'same scale', so their results do not apply to the Kalai vectors. Although our bounds are weaker, our proofs are considerably shorter; furthermore, stronger bounds here would not give any improvements elsewhere in our paper, as they account for a term subexponential in n, while our working tolerance is up to a term exponential in n.

Supersaturation

We now give a brief overview of the strategy for combining the results of the previous two subsections to prove supersaturation, and also indicate the conditions that determine which case of Theorem 1.9 holds. Under the setup of Theorem 1.9, a telegraphic summary of the argument is: ... where µ_q is chosen to optimise the lower bound on |(A × A)^{V∩}_w| implied by the final inequality. The best possible supersaturation bound (case i of Theorem 1.9) arises when Theorem 1.14 is applicable with µ_q equal to the maximum entropy measure that represents (X × X)^{V∩}_w: this case holds when µ_q is κ-bounded and has marginals µ_p close to µ_p^* := µ_{p^V_z}. Case ii of Theorem 1.9 holds if µ_q is κ-bounded but µ_p is not close to µ_p^*: then µ_p is concentrated on a small subset B_full of X, which is responsible for almost all w-intersections in X. Lastly, case iii of Theorem 1.9 holds if µ_q is not κ-bounded. The key to understanding this case is the well-known [31] Vapnik–Chervonenkis dimension, defined as follows.

Definition 1.19. We say that A ⊂ J^n shatters X ⊂ [n] if for any (j_x : x ∈ X) ∈ J^X there is a ∈ A with a_x = j_x for all x ∈ X. The VC-dimension dim_{VC}(A) of A is the largest size of a subset of [n] shattered by A.

To see why it is natural to consider the VC-dimension, consider the problem of finding an intersection of size n/3 among subsets of [n] of size 2n/3. The conditions of the Frankl–Rödl theorem are not satisfied, and indeed the conclusion is not true: taking [n]_{2n/3} ×_{n/3} [n]_{2n/3} as a subset of ({0,1} × {0,1})^n, we see that no coordinate can take the value (0,0), so there is not even a shattered set of size 1!
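The coordinate count behind this last observation is a one-line inclusion–exclusion (our unpacking of the example, assuming |A| = |B| = 2n/3 and |A ∩ B| = n/3):

\[
|\{\, i : x_i = y_i = 0 \,\}| \;=\; n - |A \cup B|
\;=\; n - |A| - |B| + |A \cap B|
\;=\; n - \tfrac{2n}{3} - \tfrac{2n}{3} + \tfrac{n}{3} \;=\; 0 ,
\]

so every coordinate of such a pair lies in {(0,1), (1,0), (1,1)}, and in particular no singleton {i} is shattered.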
Modifying this example in the obvious way, we see that it is natural to assume a bound that is linear in n. We also note that this example shows that the 'Frankl–Rödl analogue' of Conjecture 1.2 is not true, and hints towards a counterexample to Kalai's conjecture. More generally, we will prove that κ-boundedness of µ_q is roughly equivalent to (X × X)^{V∩}_w having large VC-dimension as a subset of ({0,1} × {0,1})^n (see Lemma 4.8). Case iii of Theorem 1.9 will apply when (X × X)^{V∩}_w has low VC-dimension.

The above outline also gives some indication of how the values in Theorem 1.3 arise. As described above, the supersaturation conclusion desired by Conjecture 1.2 (case i of Theorem 1.9) requires µ_q to have marginals µ_p close to µ_p^* := µ_{p^V_z}. We can describe µ_q and µ_p explicitly using Lagrange multipliers: they are Boltzmann distributions (see Lemma 10.1). In general, it is not possible for one Boltzmann distribution to be a marginal of another, which explains why Conjecture 1.2 is generally false. An analysis of the special conditions under which it is possible gives rise to the characterisation of Γ in Theorem 1.3.

The outline also suggests a possible characterisation of the optimal level of supersaturation in all cases (i.e. including those for which Kalai's conjecture fails). Any choice of µ_q satisfying the hypotheses of Theorem 1.14 with marginal distributions µ^V_z gives a lower bound on |(A × A)^{V∩}_w|, and the optimal such lower bound is obtained by taking such a measure with maximum entropy. Is this essentially tight? We will give a positive answer to this question by proving a matching upper bound in Section 11.

Finally, we remark that our method allows the vectors defining the sizes of intersections to differ from those defining the sizes of sets in the family, i.e. V′-intersections in ({0,1}^n)^V_z; in Section 6.3 we use this to give a new proof of a theorem of Frankl and Rödl [11, Theorem 1.15] on intersection patterns in sequence spaces.

Organisation of the paper

In the next section we collect some probabilistic methods that will be used throughout the paper. We prove the large deviation principle (Theorem 1.17) in Section 3. In Section 4 we establish the connection between VC-dimension and boundedness of maximum entropy measures. Section 5 is expository: we give two concrete counterexamples to Kalai's Conjecture 1.2. Next we introduce a more general setting in Section 6, state our most general result (Theorem 6.2), and show that it implies our probabilistic intersection theorem (Theorem 1.14). In Section 7 we prove a correlation inequality needed for the proof of Theorem 6.2; as far as we are aware, the inequality is quite unlike other such inequalities in the literature. We prove Theorem 6.2 in Section 8, and then deduce our main theorem (Theorem 1.9) in Section 9. Our corrected form of Kalai's conjecture (Theorem 1.3) is proved in Section 10; we also show there, in much greater generality, that supersaturation of the form conjectured by Kalai is rare. In Section 11 we give a complete characterisation of the optimal level of supersaturation in terms of a certain optimisation problem for measures. Lastly, in Section 12 we recast our results in terms of 'exponential continuity', a notion that arises naturally when comparing distributions according to exponential contiguity and that may be interpreted in terms of robust statistics for social choice; this point and several potential directions for future research are addressed in the concluding remarks.
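Before fixing notation, we record in symbols the Lagrange-multiplier computation invoked in the supersaturation outline above (a minimal sketch of ours; Lemma 10.1 is the authoritative statement, and our normalisation may differ). The maximum entropy measure subject to a mean constraint solves

\[
\max_{\mathbf p}\; H(\mu_{\mathbf p}) = \sum_{i \in [n]} H(p^i)
\quad\text{subject to}\quad \sum_{i \in [n]} \sum_{j \in J} p^i_j v^i_j = \mathbf z,
\qquad \sum_{j \in J} p^i_j = 1 \;\;(i \in [n]),
\]

and stationarity of the Lagrangian forces \(\log p^i_j\) to be affine in \(\boldsymbol\lambda \cdot v^i_j\), i.e. the Boltzmann form

\[
p^i_j \;=\; \frac{e^{\boldsymbol\lambda \cdot v^i_j}}{\sum_{j' \in J} e^{\boldsymbol\lambda \cdot v^i_{j'}}}
\qquad (\boldsymbol\lambda \in \mathbb{R}^D),
\]

with \(\boldsymbol\lambda\) determined by the constraint \(V(\mu_{\mathbf p}) = \mathbf z\). Since a marginal of one Boltzmann distribution is generally not itself of Boltzmann form, the two descriptions are typically incompatible, which is the source of the failure of Conjecture 1.2 noted above.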
Notation

We identify subsets of a set with their characteristic vectors: A ⊂ X corresponds to a ∈ {0,1}^X, where a_i = 1 ⇔ i ∈ A. The Hamming distance between vectors a and a′ in a product space J^n is d(a, a′) = |{i ∈ [n] : a_i ≠ a′_i}|. Given a set X, we write X_k = {A ⊂ X : |A| = k}. We write δ ≪ ε to mean: for any ε > 0 there exists δ_0 > 0 such that for any δ ≤ δ_0 the following statement holds. Statements with more constants are defined similarly. We write a = b ± c to mean b − c ≤ a ≤ b + c. Throughout the paper we omit floor and ceiling symbols where they do not affect the argument. All vectors appear in boldface.

Probabilistic methods

In this section we gather several probabilistic methods that will be used throughout the paper: concentration inequalities, entropy, an application of Dependent Random Choice to the independence number of product graphs, and an alternative characterisation of exponential contiguity.

Concentration inequalities

We start with the well-known Chernoff bound (see e.g. [1, Appendix A]). An easy consequence is the following concentration inequality for random sums of vectors. ..., so the lemma follows from a union bound. We will also use the following consequence of Azuma's martingale concentration inequality (see e.g. [25]). We say that f : J^n → R is b-Lipschitz if for any a, a′ ∈ J^n differing only in a single coordinate we have |f(a) − f(a′)| ≤ b.

Entropy

In this subsection we record some basic properties of entropy (see [7] for an introduction to information theory). The entropy of a probability distribution p = (p_1, ..., p_n) is H(p) = −Σ_{i∈[n]} p_i log_2 p_i. The entropy of a random variable X taking values in a finite set S is H(X) = H(p), where p = (p_s : s ∈ S) is the law of X, i.e. p_s = P(X = s). When p = (p, 1 − p) takes only two values we write H(p) = −p log_2 p − (1 − p) log_2(1 − p). Entropy is subadditive: if X = (X_1, ..., X_n) then H(X) ≤ Σ_{i=1}^n H(X_i), with equality if and only if the X_i are independent. An equivalent reformulation is the following lemma. It is easy to deduce Lemma 1.15 from Lemma 2.4. Indeed, consider µ ∈ M^V_w with maximum entropy. Let ..., with equality if and only if µ = µ_p. As M^V_w is convex, uniqueness follows from strict concavity of the entropy function, which we will now explain. It is often convenient to use the notation L(p) = −p log_2 p; then L″(p) = −(1/p) log_2 e < 0, so L is strictly concave. The following lemma is immediate from these formulae and the mean value form of Taylor's theorem. We deduce the following 'stability version' of the uniqueness of the maximum entropy measure, which quantifies the decrease in entropy in terms of distance from the maximiser. We conclude this subsection with a perturbation lemma.

Dependent Random Choice

We will use the following version of Dependent Random Choice (see [23, Lemma 11] for a proof and [10] for a comprehensive survey of the method). We write N_G(u, u′) for the set of common neighbours of u and u′ in a graph G. The following is an immediate consequence of Lemma 2.8, applied with t = ⌈2/cε⌉. We say that S ⊂ V(G) is independent if it contains no edges of G. The independence number α(G) of G is the maximum size of an independent set in G. Given graphs G_1, ..., G_k, we write ... By repeated application of the previous lemma, we obtain the following corollary.

Exponential Contiguity

We conclude this section with an alternative characterisation of exponential contiguity.

Large deviations of fixed sums

In this section we prove Theorem 1.17.
Our first lemma will be used to show that the maximum entropy measure is exponentially dominated by the uniform measure. ... As these random variables are independent and EX = −H(µ_p), the bound on µ_p(B) follows from Chernoff's inequality. Our next lemma gives a lower bound for point probabilities of maximum entropy measures, which implies an upper bound on the number of solutions of V(x) = w. Our final lemma will give an approximate formula for the number of solutions of V(x) = w (as mentioned in the introduction, [3, Theorem 3] gives stronger bounds under different hypotheses). First we require a small set that efficiently generates Z^D, as described by the following definition and associated lemma, which shows that such a set exists under the mild assumption of polynomial growth for the coordinate scale vector R (this will also be used later in Theorem 6.2).

Proof. We first note that the final statement of the lemma follows from the first: the latter gives the lower bound, as EV(x) = w when x ∼ µ_{p^V_w}, and the upper bound follows from Lemma 3.2. It remains to prove the first statement of the lemma. Let F be the set of x ∈ J^n such that there is ... We claim that µ_p(F) > 1/2. First we assume the claim and deduce the lower bound, by double-counting pairs (...). Note that log_2 µ_p(F′) ≤ log_2 |F′| − H(µ_p) + δ^2 n, and by Lemma 3.1 and the claim we have µ_p(...) ... To prove the claim, we consider x ∼ µ_p and show that with probability at least 1/2 there is ... Let B_2 be the event that for some m we have |{t : ...}| ... This completes the proof of the claim, and so of the lemma.

We deduce Theorem 1.17, which states that under the hypotheses of the above lemmas we have ...

Boundedness, feasibility and universal VC-dimension

In this section we will give several combinatorial characterisations of the boundedness condition on maximum entropy measures required in our probabilistic intersection theorem. The characterisations hold under the following 'multiscale general position' assumption, which extends Definition 1.8 to all finite alphabets (by 'multiscale' we mean that the parameter γ can be arbitrary, which is true of the Kalai vectors). We say that a sequence (V_n, R_n) of (n, J)-arrays and scalings is robustly generic if V_n is γ′-robustly (γ, R_n)-generic whenever n^{-1} ≪ γ ≪ γ′. It will also be convenient to use the following sequence formulation of Definition 1.7.

Definition 4.2. We say that (V_n, R_n) is robustly generating if there are γ > 0 and k, n_0 ∈ N such that V_n is γ-robustly (R_n, k)-generating for all n > n_0.

Next we will define the combinatorial conditions that appear in our characterisation. We recall the definition of VC-dimension and also define a universal variant that will be important in the proof of Theorem 1.9 in Section 9. Next we give a feasibility condition, which can be informally understood as saying that we can solve any small perturbation of the equation V_n(x) = z_n.

Definition 4.4. Let (V_n, R_n, z_n) be a sequence of (n, J)-arrays, scalings and vectors in Z^D. We say (V_n, R_n, z_n) is λ-feasible if there is n_0 such that for any n > n_0, any z′_n ∈ Z^D with ‖z′_n − z_n‖_{R_n} ≤ λn, and any (n′, J)-array V′_{n′} obtained from V_n by deleting at most λn coordinates, we have (J^{n′})^{V′_{n′}}_{z′_n} ≠ ∅.

Our final property appears to be a substantial weakening of our κ-boundedness condition, so it is quite surprising that it also gives a characterisation.

Definition 4.5. Suppose µ_p is a product measure on J^n.
We say that µ_p is κ-dense if there are at least κn coordinates i ∈ [n] such that p^i_j ≥ κ for all j ∈ J. Now we can state the main theorem of this section. The sense of the equivalences in the statement is that the implied constants are bounded away from zero together. For example, the implication ii ⇒ i means that for any δ > 0 there is ε > 0 such that if µ^{V_n}_{z_n} is δ-dense then µ^{V_n}_{z_n} is ε-bounded.

Theorem 4.6. Let (V_n, R_n) be a robustly generic and robustly generating sequence of (n, J)-arrays and scalings in Z^D, and let (z_n) be a sequence of vectors in Z^D. The following are equivalent: ...

The main step in the proof of Theorem 4.6 is Lemma 4.8, which provides the implication iii ⇒ i. It also implies Lemma 1.18, as for binary vectors the following coarse version of the Sauer–Shelah theorem shows that linear VC-dimension is equivalent to exponential growth. The proof of Lemma 4.8 is immediate from the next two lemmas, which give the implications iii ⇒ ii and ii ⇒ i of Theorem 4.6. Consider a product measure µ_{p′} where for some t > 0 we have p′^i ...

Proof of Theorem 4.6. It remains to prove the implications i ⇒ v and v ⇒ iv (note that iv ⇒ iii is trivial). For i ⇒ v, let n^{-1} ≪ λ ≪ κ ≪ γ, k^{-1}, suppose V_n is γ-robustly (R_n, k)-generating, µ^{V_n}_{z_n} is κ-bounded, z′_n ∈ Z^D with ‖z′_n − z_n‖_{R_n} ≤ λn, and V′ is obtained from V_n by deleting S ⊂ [n] with |S| ≤ λn. Then V′ is R_n-bounded and (γ/2)-robustly (R_n, k)-generating. Also, the restriction ... For v ⇒ iv, let n^{-1} ≪ κ, and suppose (V_n, R_n, z_n) is κ-feasible. Fix S ⊂ [n] with |S| = κn and y ∈ J^S. We need to show that there is x ∈ (J^n)^{V_n}_{z_n} with x|_S = y. Let V′, V_0 be obtained from V_n by respectively deleting and retaining the coordinates of S. Let z′_n = z_n − V_0(y). Then ‖z′_n − z_n‖_{R_n} ≤ κn, so by the definition of κ-feasibility we can find ...

We conclude this section by noting the following lemma, which is immediate from the preceding proof and Lemma 4.8. Let ... The first example also shows that cases ii and iii can hold simultaneously.

The general setting

In this section we state our most general result, Theorem 6.2; we defer the proof to Section 8. This is in fact the main result of the paper in some sense, as we will show in this section that it implies Theorem 1.14 (in a more general cross-intersection form). However, the hypothesis of 'transfers' in Theorem 6.2 appears quite strong at first sight, and it will take some work to show that it follows from the hypotheses of Theorem 1.14 (it is here that the idea of enlarging the alphabet comes into play). We state our result in the next subsection and then deduce Theorem 1.14 in the following subsection. A second application of Theorem 6.2 is given in Subsection 6.3, where we use it to give a short proof of a theorem of Frankl and Rödl on forbidden intersection patterns.

Statement of the general theorem

Before stating our theorem, we require the following definition, which describes a situation in which, for any vector u in some specific set (given by the following definition), there are many ways of choosing a coordinate and two particular alterations of its value: one does not change the associated vector, and the other changes it by u. ... We say that V has γ-robust transfers for U if it has transfers for (P, U) for some P such that |P_m| ≥ γn for all m ∈ [M]. Note that an (n, Π_{s∈S} J_s)-array in Z^D has transfers for (P, U) if it has them as an (n, J × L)-array, where J = Π_{s∈S′} J_s and L = Π_{s∈S\S′} J_s for some S′ ⊂ S.
We can now state our general theorem. (Recall that U exists by Lemma 3.4.)

Proof of Theorem 1.14

Now we assume Theorem 6.2 and prove Theorem 1.14; in fact we prove the more general cross-intersection theorem. The strategy is to fuse together suitable coordinates and enlarge the alphabet. We let N = ⌊n/k⌋ and partition [n] into sets T_1, ..., T_N, each of size k, and a remainder set R with 0 ≤ |R| ≤ k − 1, such that each S_{mj} ∪ S′_{mj} is contained in some T_i. We let P = (P_m : m ∈ [M]), where each P_m is the set of i ∈ [N] such that T_i contains some S_{mj} ∪ S′_{mj}. We start by reducing to the case R = ∅ and k | n. For R_s ⊂ R we let A^{R_s}_s = {A ∈ A_s : A ∩ R = R_s} for s = 1, 2. By the pigeonhole principle we can fix (R_1, R_2) so that µ_p(...) ..., according to some fixed bijection of T_i with [k]. We will apply Theorem 6.2 with N in place of n, with S = {1, 2} and J_1 = J_2 = {0,1}^k, and A′_s (naturally identified) in place of A_s. We let W = (w^i_{J_1,J_2}) be the (N, {0,1}^k × {0,1}^k)-array in Z^D defined by w^i_{J_1,J_2} = Σ_{j∈J_1∩J_2} v_j for J_1, J_2 ⊂ T_i. Note that V^∩(x, y) = W(x, y) for all x, y in {0,1}^n (naturally identified). We also note that W has transfers for (P, U). To see this, consider i ∈ P_m with S_{mj} ∪ S′_{mj} ⊂ T_i. ... To summarise, after the above reductions we have ..., so the theorem follows from Theorem 6.2.

Application to a theorem of Frankl and Rödl

In this subsection we give another application of Theorem 6.2, which illustrates an additional flexibility of our method, namely that it allows the vectors defining the sizes of intersections to differ from those defining the sizes of sets in the family. We will give a new proof of a theorem of Frankl and Rödl [11, Theorem 1.15] on intersection patterns in sequence spaces. (To align with notation from the rest of the paper, our notation differs from that of [11].) Given non-negative integers l_1, ..., l_s with Σ_i l_i = n, let [n]_{l_1,...,l_s} denote the corresponding sequence space. Given x ∈ [n]_{l_1,...,l_s} and y ∈ [n]_{k_1,...,k_t}, the intersection pattern of x and y is given by an s × t matrix M, with M_{j_1,j_2} = |{i ∈ [n] : ...}|. Given A_1 ⊂ [n]_{l_1,...,l_s} and A_2 ⊂ [n]_{k_1,...,k_t}, we let A_1 ×_M A_2 denote the set of pairs (x, y) ∈ A_1 × A_2 with intersection pattern M. ...

We wish to emphasise two aspects of the above proof. Firstly, it is crucial that the arrays V_1, V_2 and V can differ. Secondly, the arrays V_i are not |J_i|^{-1}-robustly (γ, R)-generic for any γ > 0 for i = 1, 2, so we cannot apply Lemma 4.8; but we were able to verify directly that µ_{p_1} and µ_{p_2} are κ-bounded. Thus Theorem 6.2 has useful consequences even for arrays that are not robustly generic.

Correlation on product sets

In this section we will prove the following correlation inequality, which will be used in the proof of Theorem 6.2; it can also be interpreted as an exponential contiguity result for product measures (see Theorem 7.2). ... Let M = E_{µ_q}(f). We claim that M ≤ (2δ + α)n ≤ 2αn. To see this, we apply a well-known concentration argument. For I ⊂ R, let ... This completes the proof in this case. Now we deduce the general case by induction on |S|. Suppose the theorem is known for |S| = k − 1 and we wish to prove it for |S| = k. Fix s ∈ S and let S′ = S \ {s}. We view (Π_{s∈S} J_s)^n as (J_s × J′)^n, where J′ = Π_{s′∈S′} J_{s′}. Let µ_{p′} be the product measure on J′^n defined by µ_{p′}(x′) = µ_q(J_s^n × {x′}). Then µ_{p′} is κ-bounded and has marginals (µ_{p_{s′}} : s′ ∈ S′), so by the induction hypothesis ... Also, we can view µ_q as a product measure on (J_s × J′)^n, with marginals µ_{p_s} and µ_{p′}.
Since ..., from the |S| = 2 case of the theorem we obtain µ_q(Π_{s∈S} A_s) ≥ (1 − ε)^n, as required.

Next we will apply Theorem 7.1 to show exponential contiguity of µ_q and Π_{s∈S} µ_{p_s}, defined by (Π_{s∈S} µ_{p_s})(x_s : s ∈ S) = Π_{s∈S} µ_{p_s}(x_s). Here the subscript Π indicates exponential contiguity relative to product sets, i.e. we apply Definition 1.16 in the case Ω_n = (Π_{s∈S} J_s)^n and F = Π = (Π_n)_{n∈N}, where Π_n = {Π_{s∈S} A_{n,s} : A_{n,s} ⊂ J_s^n for all s ∈ S}.

Theorem 7.2. Let 0 < n^{-1} ≪ κ ≪ 1 and let µ_q be a κ-bounded product measure on (Π_{s∈S} J_s)^n with marginals (µ_{p_s} : s ∈ S). Then µ_q ≈_Π Π_{s∈S} µ_{p_s}.

Proof. As in the proof of Theorem 7.1, it suffices to consider the case S = [2]. By Theorem 7.1 we have µ_{p_1} × µ_{p_2} ⪯_Π µ_q. Conversely, consider A_s ⊂ J_s^n for s ∈ [2]. By the Cauchy–Schwarz inequality, writing the expectations over (x_1, x_2) ∼ µ_q with x_1 ∈ J_1^n, x_2 ∈ J_2^n, we have µ_q(A_1 × A_2)^2 = (E[1_{x_1∈A_1} 1_{x_2∈A_2}])^2 ≤ Π_{s∈[2]} E[1_{x_s∈A_s}] = Π_{s∈[2]} µ_{p_s}(A_s), which gives µ_q ⪯_Π µ_{p_1} × µ_{p_2}.

We conclude this section by giving the easy deduction of Theorem 1.13 from Theorem 7.1.

Proof of the general theorem

In this section we prove Theorem 6.2. We start by reducing to the case |S| = 2.

Proof. First note that if V has γ-robust transfers for U then it has them as an (n, L_1 × L_2)-array, where each L_j = Π_{s∈S_j} J_s for some partition (S_1, S_2) of S. Now let µ_{p_{S_1}} denote the product measure on L_1^n defined by ...

Now we will prove a succession of special cases of Theorem 6.2, where the proof of each case builds on the previous cases, culminating in the proof of the general case. We assume without further comment that S = {1, 2}. We claim that we can fix K with K_m = (2α ± κ/4)|P_m| for all m ∈ [M] such that µ_p(A ∩ B_K) > (1 − δ)^n. Indeed, by assumption we have µ_p(A) > (1 − δ)^{n/2}. Also, for a ∼ µ_p and X_m = Σ_{i∈P_m} a_i we have EX_m = 2α|P_m|, so by Chernoff's inequality P(|X_m − EX_m| > κ|P_m|/4) ≤ 2e^{−(κ|P_m|/4)^2 / 2|P_m|} ≤ 2e^{−κ^2 γ n / 32}. There are at most n^M choices of K, so by a union bound and the pigeonhole principle there is some K with all ... We note that for any a and a′ in B_{K,z}, V(a, a′) is determined by the values t_m = Σ_{i∈P_m} a_i a′_i. Indeed, for each i ∈ P_m, as u_m is an i-transfer, we may suppose that v^i ... It remains to show that we can find such a and a′. We consider the graph ... Write F_{K_1,K_2} for the set of all a ∈ {0,1}^n such that Σ_{i∈B_j} a_i = K_j for j = 1, 2. As in the proof of Lemma 8.2, we write ... Consider the bipartite graph G with parts ... We will now find F ⊂ B × B with µ_q(F) > (1 − ε/2)^n (also writing µ_q for its restriction to ...). This will suffice to prove the lemma, as then ... Then µ_q(E) < 2De^{−ζ^2 n/2} by Lemma 2.2. We choose F = (B × B) \ E. By Theorem 7.1 we have µ_q(B × B) ... It remains to show, for fixed ..., that ... To see this, it suffices to verify the hypotheses of Lemma 8.2, applied with N_G(b_1, b′_1) in place of A, restricting µ_q to ({0,1} × {0,1})^{B_2}, and with V_2 in place of V. We note that V_2 has transfers for the same (P, U), and P = (P_m : ...); replacing ζ by 2ζ we see that all hypotheses hold, so the proof of the lemma is complete.

Note that r_1, ..., r_n are independent, so r defines a product measure µ_{q″} on (...). For fixed h := (S, r′) and j = 1, 2 let ... Since q″ = q, we have µ_{p_j}(A_j) = E_h(ν(F^h_j)) for j = 1, 2 and µ_q((A × A)^V_w) = E_h(ν(F^h_w)). In the remainder of the proof we will show that P_h(ν(F^h_w) > (1 − ε/2)^n) > (1 − δ′)^n, where δ ≪ δ′ ≪ ζ.
This will imply the theorem, as then µ_q(...) ... To achieve this, we will show that for 'good' h we can apply Lemma 8.4 to F^h_1 and F^h_2, with the uniform product measure and the array X^h := (v^i_{j,j′} : i ∈ S, j, j′ ∈ {0,1}). ... First we define some bad events for h and show that they are unlikely. The last bad event is that we do not have robust transfers. Let P^h = (P^h_m : m ∈ [M]), where P^h_m is the set of i ∈ P_m such that u_m is an i-transfer in X^h. Recalling that u_m is an i-transfer in V via (0,1) and (0,1) for all i ∈ P_m, we have i ∈ P^h_m whenever i ∈ S, so E|P^h_m| ≥ κγn. By Chernoff's inequality, the bad event B_3 that some |P^h_m| < κγn/2 satisfies P(B_3) < 2Me^{−κ^2 γ^2 n/8}. Now let G be the good event for h that ν(F^h_1)ν(F^h_2) > (1 − δ′)^n. By Cauchy–Schwarz and Theorem 7.1 we have ..., as required to prove the theorem.

Proof of Theorem 1.9

In this section we will prove Theorem 1.9. Let X = ({0,1}^n)^V_z, as in the statement of Theorem 1.9. The proof splits naturally into two pieces according to the VC-dimension of (X × X)^{V∩}_w. The next subsection shows that for high VC-dimension case i or ii of Theorem 1.9 holds; the following subsection shows that case iii holds in the case of small VC-dimension.

Large VC-dimension

Here we implement the strategy discussed in Subsection 1.4: we consider the maximum entropy measure µ_q that represents (X × X)^{V∩}_w, and distinguish cases i and ii of Theorem 1.9 according to whether its marginals µ_p are close to µ_p^* := µ_{p^V_z}. Throughout this subsection we use the following notation. ..., where 0 denotes the zero vector in Z^D. Let z, w ∈ Z^D and X = ({0,1}^n)^V_z. We identify (X × X)^{V∩}_w with (J^n)^V_x, where x := (z, z, w). We define ... We denote the marginals of µ_q by µ_p (both marginals are equal). Next we show κ-boundedness of the above measures under our usual assumptions on V (and justify the final statement of the above definition). We conclude with the main result of this subsection: that there is a large subset of X with no w-intersection.

Proof. We may assume |X| ≥ (1 + ε)^n, as otherwise we can take B_empty = ∅. Take α and ξ such that ... Then |X_0 ∪ X_1| ≥ |X|/2 by Lemma 9.4. The remainder of the proof splits into two similar cases according to which X_j is large; we give full details for the case j = 1 and then indicate the necessary modifications for j = 0. Next we define B_empty. We randomly select S ⊂ [n] with |S| = ξn, and let C = {x ∈ X″ : S ⊂ S_1(x)}. We say that x ∈ C is isolated if there is no x′ ∈ C with V^∩(x, x′) = w. We let B_empty be the set of isolated x ∈ C. Then by definition we have (B_empty × B_empty)^{V∩}_w = ∅. Now we show that E|B_empty| ≥ (1 − ε)^n |X|. As E(|C|) = (t choose ξn)(n choose ξn)^{-1} |X″|, |X″| ≥ (1 − ξ^{1/2})^n |X| and ξ ≪ ε, it suffices to show P(x is isolated | x ∈ C) ≥ 1/2 for all x ∈ X′. To see this, we condition on x ∈ C and note that S is equally likely to be any subset of S_1(x) of size ξn. Consider any y ∈ N^1_w(x): we have |S_1(x) \ S_1(y)| ≥ ξn by the definition of X″, and x′ ∈ C ⇔ S ⊂ S_1(y). For fixed y we have P(S ⊂ S_1(y)) ≤ (t − ξn choose ξn)(t choose ξn)^{-1} ≤ (1 − ξ)^{ξn}. By the definition of X_1, a union bound over at most (1 + α)^n choices of y ∈ N^1_w(x) applies, so as α ≪ ξ the probability that x is not isolated given x ∈ C is o(1), and in particular at most 1/2, as required. Similarly, if |X_0| ≥ |X|/4, we define X′ and X″ in the same way for X_0, and let C = {x ∈ X″ : S ⊂ S_0(x)}.
We use the same definition of B_empty as before, and bound the probability that x is not isolated given x ∈ C by taking a union bound over at most (1 + α)^n choices of y := x′|_{S_0(x)} ∈ N^0_{z−w}(x). The remaining details of this case are the same, so we omit them.

Solution of Kalai's Conjecture

In this section we prove Theorem 1.3, which is our solution to Kalai's Conjecture 1.2. We give the proof in the first subsection, then generalise it in the following subsection to show that supersaturation of the type conjectured by Kalai is quite rare.

Proof of Theorem 1.3

As described in Subsection 1.4, the supersaturation conclusion desired by Conjecture 1.2 (case i of Theorem 1.9) requires the maximum entropy measure µ_q that represents (X × X)^{V∩}_w to have marginals µ_p close to µ_p^* := µ_{p^V_z}. Recall that in Definition 9.1 we constructed µ_q as µ^V_x, where V is a certain (n, {0,1} × {0,1})-array in Z^{3D} and x := (z, z, w). In this subsection we work with the Kalai vectors V = (v_i)_{i∈[n]} with v_i = (1, i), so D = 2. In the notation of Conjecture 1.2 we have z = (k, s) and w = (t, w). Sometimes we will indicate the dependence on n as a subscript in our notation, e.g. writing z_n = (k_n, s_n) = (⌊α_1 n⌋, α_2 n^2). Our proof will use the following concrete description of the maximum entropy measures as Boltzmann distributions.

Proof. By the theory of Lagrange multipliers, p is a stationary point of ..., which gives the stated formula.

Proof. We suppose that case i does not hold and prove that case ii holds. We can fix κ > 0 and a sequence n_m → ∞ such that each µ_{q^{(n_m)}} is κ-bounded. By Lemma 10.1, we have π_{n_m} ∈ R^4 such that q^{(n_m)} = q^{(n_m)}_{π_{n_m}}. By κ-boundedness, each π_{n_m} ∈ [−C, C]^4 for some C = C(κ) ∈ R. By compactness of [−C, C]^4, we can pass to a convergent subsequence, so by relabelling we can assume π_{n_m} → π ∈ R^4. ... The first equivalence is immediate from the above proof, and the second from Theorem 4.8. We conclude this subsection with the solution to Kalai's conjecture.

Uniqueness in higher dimensions

In this subsection we illustrate how the method used to prove Theorem 1.3 can be applied in a broader context. Throughout this subsection we work with the following setting.
• Write X_n = ({0,1}^n)^{V_n}_{z_n} and suppose that |X_n| > (1 + η)^n, where η = η(α) > 0 is fixed.
• The arrays V_n have a 'scaling limit': there is a positive measurable function p : [0,1]^D → R with ∫_{[0,1]^D} p(x) dx = 1 such that for any measurable set B ⊂ [0,1]^D we have ...

The assumption that (V_n, z_n) is robustly generic is in fact redundant, as it can be shown to follow from the scaling-limit assumption, but for the sake of brevity we omit this deduction. We say that (α, β) is (n, δ, ε)-good if the corresponding V_n-intersection problem exhibits 'full supersaturation' analogous to that in Conjecture 1.2, i.e. any A ⊂ X_n with |A| ≥ (1 − δ)^n |X_n| satisfies ... We will outline the proof of the following analogue of Theorem 1.3, which shows that if we exclude the case of 'uniformly random sets' (i.e. α = (1/2)_{d∈[D]}) then 'full supersaturation' occurs for only one specific value of β.

Theorem 10.5. In the above setting, if α ≠ (1/2)_{d∈[D]} then there is β* = β*(α) ∈ (0,1)^D such that for ...

Similarly to the previous subsection, we wish to determine when µ_q = µ^V_x (with V and x as in Definition 9.1) has marginals close to µ_p = µ^V_z (here we omit the subscript n from our notation).
If these measures are κ-bounded, Lemma 10.1 gives λ ∈ R^D such that ..., and π_1, π_2 ∈ R^D such that q^i_{j,j′} = (q^{(n)})^i_{j,j′} ..., with Z_{π_1,π_2}(x) = 1 + 2e^{π_1·x} + e^{π_2·x}. Again we study the marginal problem for µ_q and µ_p via the limit marginal problem of characterising λ and π such that f_λ = g_{π_1,π_2} ... The limit versions are h(λ) = α and h*(π_1, π_2) = β, where ... Our next lemma is analogous to Lemma 10.2.

Lemma 10.6. i. h is a homeomorphism between R^D and h(R^D). ii. For large n we have µ^{V_n}_{z_n} = µ_p for some p = p^{(n)}_{λ^{(n)}}, where λ^{(n)} → λ = h^{-1}(α).

We omit the proof of Lemma 10.6, as it is the same as that of Lemma 10.2, except in one detail which we now check, namely that the principal minors of the Jacobian of h are positive. To see this, note that the Jacobian J has entries ... For any y ∈ R^D we have y^T J y = ∫_{[0,1]^D} |⟨x, y⟩|^2 f_λ(x)(1 + e^{λ·x})^{-1} p(x) dx. As f_λ and p are positive, we have y^T J y > 0 whenever y ≠ 0, as required. We also have the following analogue of Lemma 10.3; again, we omit the similar proof. The uniqueness in Theorem 10.5 is explained by the following lemma, which solves the limit marginal problem. ...

We fix a partition of [n] into sets S_1, ..., S_M so that ... Note that |B| ≥ |X|/n^M. The following lemma shows that µ_{p′} is a good approximation to the maximum entropy measure µ_p. We use a similar construction of an empirical measure that represents w-intersections. Let G be the graph with V(G) = B, where AB ∈ E(G) if |A ∩ B|_V = w. We define the type of AB ∈ E(G) ... Note that each µ_{q(t)} has both marginals µ_{p′}, and ‖V^∩(µ_{q(t)}) − w‖_R ≤ ‖p − p′‖_1 ≤ κn. We may assume that H(µ_q) < log_2 |E(G)| − εn/2 for all q ∈ Q, otherwise the proof is complete. We fix a type t̄ occurring at least e(G)/n^{2M} times and set q̄ = q(t̄). Then H(µ_{q̄}) ≥ log_2(e(G)/n^{2M}), so q̄ ∉ Q by (3). The following lemma will show that all empirical measures associated to edges of G are close to q̄; we will then use this and q̄ ∉ Q in Lemma 11.5 to find a large independent set in G, which will complete the proof of Theorem 11.1. For the proof we require the following bound, analogous to (3), for a wider class of measures.

Proof. We will obtain the required bound by applying (3) to a measure in Q close to µ_{q′}. Recall that p′ is κ′-bounded and ‖p − p′‖_1 ≤ κn. Consider q″ minimising ‖q″ − q′‖_1 subject to µ_{q″} being κ-bounded and having marginals µ_p. For each i we can construct {(q″)^i_{j,j′}} from {(q′)^i_{j,j′}} by moving probability mass |p^i − p′^i| to create the correct marginals, and moving a further mass of at most 2κ while maintaining the same marginals to ensure κ-boundedness. Therefore ‖q″ − q′‖_1 ≤ 6κn. Now we perturb q″ to obtain q ∈ Q, i.e. we maintain κ-boundedness and the same marginals µ_p, and obtain V^∩(µ_q) = w. ...

The following lemma completes the proof of Theorem 11.1. ... Therefore, for any U ⊂ T_1 of size u = ⌈4λ^{1/2} n⌉, the family A_U := {B ∈ B* : U ⊂ B} forms an independent set in G*. Consider a uniformly random choice of such U. For any B ∈ B*, as |B ∩ T_1| ≥ κ′|T_1|/2 we have P(B ∈ A_U) ≥ (κ′/4)^u ≥ (1 − λ^{1/3})^n, as λ ≪ κ′. Therefore E_U|A_U| = Σ_{B∈B*} P(B ∈ A_U) ≥ (1 − ε)^n |X|. Thus for some U we obtain an independent set A_U of at least this size, which completes the proof of Case 1.
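As a concrete numerical complement to the Boltzmann description used in Section 10 (a hedged sketch of ours, not from the paper), one can fit the maximum entropy measure for the Kalai vectors v_i = (1, i) by gradient descent on the convex dual of the entropy maximisation: the measure has p_i = σ(λ_1 + λ_2·i/n) for a logistic σ, and (λ_1, λ_2) is tuned so that the two moment constraints Σ_i p_i = k and Σ_i i·p_i = s hold.

```python
import math

def fit_boltzmann(n, k, s, steps=50000, lr=0.5):
    # Gradient descent on the (convex) dual of the entropy maximisation:
    # p_i = sigma(l1 + l2 * i/n); the gradient components are exactly the
    # mismatches in the two moment constraints (normalised by n).
    l1 = l2 = 0.0
    for _ in range(steps):
        p = [1.0 / (1.0 + math.exp(-(l1 + l2 * i / n))) for i in range(1, n + 1)]
        g1 = (sum(p) - k) / n
        g2 = (sum(i / n * pi for i, pi in enumerate(p, start=1)) - s / n) / n
        l1 -= lr * g1
        l2 -= lr * g2
    p = [1.0 / (1.0 + math.exp(-(l1 + l2 * i / n))) for i in range(1, n + 1)]
    return l1, l2, p

l1, l2, p = fit_boltzmann(n=30, k=10, s=200)
print(round(sum(p), 3), round(sum(i * pi for i, pi in enumerate(p, 1)), 3))
# -> approximately 10.0 and 200.0, i.e. E|A| = k and E sum(A) = s.
```

The parameter values n, k, s here are placeholders chosen only so that (k, s) is feasible; the logistic form of p_i is the D = 2 instance of the general Boltzmann formula.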
Concluding remarks

There are several natural directions in which to explore potential generalisations of our results: instead of associating vectors in Z^D to each coordinate we may consider values in another (abelian) group G, and we may consider more general functions of the coordinate values, e.g. a (low-degree) polynomial (such as a quadratic, for application to the Borsuk conjecture) rather than a linear function (is there a 'local' version of Kim–Vu [24] polynomial concentration?). Even for linear functions in one dimension, our setting seems somewhat related to some open problems in Additive Combinatorics, such as the independence number of Paley graphs, but here our assumptions seem too restrictive (one cannot use transfers). We may also ask when better bounds hold; e.g. for G = Z/6Z we recall an open problem of Grolmusz [15]: is there a subexponential bound for set systems where the size of each set is divisible by 6 but each pairwise intersection is not divisible by 6?

Our results may be interpreted as giving robust statistics in the theory of social choice. Suppose that we represent a voter by an opinion vector x ∈ J^n, where each x_i represents an opinion on the ith issue; for example, when |J| = 2 each issue could be a question with a yes/no answer. Then we can represent a population of voters by a probability measure µ on J^n, where µ(x) is the proportion of voters with opinion x. Now suppose that we want to compare two (or more) voters. One natural measure of comparison is to assign a score to each opinion and calculate the total score on the opinions where they agree. If this is too simplistic, then we could assign score vectors in some R^D, where D is small enough to give a genuine compression of the data but large enough to capture the varied nature of the issues: we compare x and x′ according to V^∩(x, x′). Taking the perspective of robust statistics (see [16]), it is natural to ask whether this statistic is sensitive to our uncertainty in the probability measure that represents the population as a whole: Theorem 12.2 (with the remark following it) gives one possible answer.
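To make the comparison statistic concrete, here is a minimal sketch (our illustration; the score vectors below are made-up placeholders). In the binary set case, V^∩(x, x′) adds up the score vectors v_i over the issues on which both voters answer 'yes'.

```python
def v_intersection(x, y, v):
    # Sum the score vectors v[i] (tuples in R^D) over coordinates where
    # both opinion vectors equal 1: the binary V-intersection statistic.
    total = [0] * len(v[0])
    for xi, yi, vi in zip(x, y, v):
        if xi == 1 and yi == 1:
            total = [t + c for t, c in zip(total, vi)]
    return total

v = [(1, i) for i in range(1, 6)]  # Kalai-style scores (1, i), so D = 2
print(v_intersection([1, 1, 0, 1, 0], [1, 0, 0, 1, 1], v))  # -> [2, 5]
```

Here the first output coordinate counts the shared 'yes' answers and the second weights them by issue index; robustness of the statistic to uncertainty in µ is then the question answered by Theorem 12.2.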
Constellation research and sociology of philosophy

Sociology of philosophy is the name of a theory-based, empirical sociological subdivision, whose basic approach is praxeological: doing philosophy involves a variety of socially situated practices that in some cases result in philosophical arguments and doctrines fixed in text. Constellation research is an approach developed by the German philosopher Dieter Henrich (1927–2022) for investigations into the history of philosophy. It combines a historical and a systematic intent and interest. Constellation research has primarily been tried out on the rapid development of post-Kantian idealism in Germany in the 1790s. The approach has some obvious similarities with different approaches within the sociology of philosophy. The article first gives a condensed presentation of constellation research as a research program. In a second step, it tries to sort out similarities and dissimilarities with key concepts, topics and approaches within the sociology of philosophy. The argument put forward is that constellation research can provide the sociology of philosophy with some novel ideas and that the sociology of philosophy may prove to be a useful resource to constellation research. In short, there is a potential for cross-fertilization between the two approaches.

Introduction

Sociology of philosophy is the name of a theory-based, empirical sociological subdivision. It is related to fields such as the sociology of knowledge, the sociology of science, science studies and 'the new sociology of ideas' (Camic and Gross, 2004), which study the social dimensions of both human knowledge and belief in general and scientific knowledge and ideas in particular. The sociology of philosophy has a narrower focus: it studies philosophical activity and knowledge production. The point of departure for the sociology of philosophy is not a bloodless knowing subject, but rather the whole human being endowed with 'the manifold powers of a being that wills, feels and thinks' (Dilthey, 1989: 50). An initial definition can be formulated as follows: the sociology of philosophy is the study of philosophical practice as a socially organized activity rooted in various historical and social contexts, an activity that involves the production of philosophical knowledge, that is, propositions and arguments that make claims to validity. The degree of social organization can vary from modern university departments and joint research projects to informal face-to-face groups of like-minded people and geographically dispersed networks that communicate mainly through correspondence. Sociology of philosophy is about philosophy in the making just as much as about philosophy as a finished product of thought. The basic approach is praxeological: doing philosophy involves a variety of socially situated practices that in some cases result in philosophical arguments and doctrines fixed in text. Important work in the field of sociology of philosophy has been done, for example, by Pierre Bourdieu, Randall Collins, Martin Kusch, Neil Gross and Patrick Baert. Especially the work of Bourdieu has inspired several studies, for example by Jean-Louis Fabiani, Anna Boschetti and Toril Moi.
Constellation research is an approach developed by the German philosopher Dieter Henrich (1927–2022) for investigations into the history of philosophy. It combines a historical and a systematic intent and interest. Constellation research has primarily been tried out on the rapid development of post-Kantian idealism in Germany in the 1790s. This was a decade of astonishing philosophical creativity. In many cases very young people came up with new bids on how to take philosophy a step further and give it a solid foundation. This rapid development has been the object of study for a research group mainly situated in Munich under the leadership of Henrich. The approach called constellation research has some obvious similarities with different approaches within the sociology of philosophy. In the following, I will first give a condensed presentation of constellation research as a research program. In a second step, I will try to sort out important similarities and dissimilarities with some key concepts, topics and approaches within the sociology of philosophy. Thus far, constellation research has to my knowledge not been much received and discussed within the sociology of philosophy; in fact it is probably unknown to most sociologists, whereas the sociology of philosophy has gained some attention from the side of constellation research (Mulsow, 2005a). I will also make a plea for cooperative research projects in which both philosophers and sociologists participate.

The sociology of philosophy is a relatively young subdivision of sociology, and all inputs that can improve its theory and practice are welcome. It is my intention to stage a first encounter between constellation research and the sociology of philosophy. Thus, a first aim of this article is to familiarize sociologists with constellation research. My main aim, however, is to contribute to the research question: what can constellation research and the sociology of philosophy learn from each other? How can they enrich one another? The argument I will put forward is that constellation research can provide the sociology of philosophy with some novel ideas and that the sociology of philosophy may prove to be a useful resource to constellation research. In short, I see here a potential for cross-fertilization. Such a cross-fertilization will contribute to a better sociology of philosophy as well as a better constellation research.
Constellation research

The basic move of constellation research is to shift focus from individual authors and texts to various forms of constellations. The story of German idealism is not a story about four major philosophers (Kant, Fichte, Schelling and Hegel) and their major works. More than that, the story cannot be told in this way. Instead, focus must be placed, to begin with, on constellations of persons/authors, of both the first and second rank. Persons/authors who are today more or less forgotten may very well have played an important role in a certain intellectual context, for example as an originator or transmitter of ideas or personal contacts. Furthermore, focus must be on constellations of texts: published as well as unpublished, for example in the form of correspondence and unfinished manuscripts. In a particular constellation, the various texts tend to relate to one another as in an oral dialogue (question and answer, challenge and response, thesis and antithesis) and cannot be understood in isolation. Finally, focus should be directed at constellations of problems, ideas, theories and concepts. In this case too, it is often difficult to understand what problem is under discussion or what a certain theory really says without the broader context of problems and theories. In short, a constellation can be described as a constellation of constellations.

A prerequisite for being able to undertake constellation research is a certain density of the available material, primarily in the form of texts but also in terms of biographical and historical knowledge (Mulsow, 2005a: 94). For this reason, the whole of Greek philosophy and major parts of medieval philosophy are out of reach for the approach in question. Furthermore, a constellation often has one or more geographical centres, thereby indicating the importance of co-presence and face-to-face interaction. In the case of post-Kantian philosophy in the 1790s, three such centres stand out: Tübingen and Jena, two small university towns, and Homburg vor der Höhe, a small town just north of Frankfurt am Main.

A second important concept, besides that of constellation, is space of thought (Henrich, 1991: 220f.; Stamm, 2005: 35ff.). Sometimes the expression space of resonance is used (Stamm, 2005: 57). A space of thought is characterized by certain available resources in terms of potential positions, theoretical options and argumentative moves. Besides the theoretical positions actually taken, there are other possible positions inherent in a certain space of thought, and besides the argumentative moves actually made, there are other possible arguments that might have been brought forward. A central task of constellation research is to reconstruct a particular space of thought. This task includes its coming into being, i.e. the reconstruction of initial motivations, knowledge interests and problems. At the temporal beginning of a space of thought there is often found a proto-theory that is subsequently developed, refined, revised or abandoned.
Constellation research is very much about philosophy in the making. A space of thought is a constellation in motion. Thus, a constellation is not something static, like a constellation of stars, but rather something highly dynamic. This means that a constellation in its dynamic development has something corresponding to a plot structure: it is a success story, a story with a tragic outcome, a story about finding and losing one another, and so on (Mulsow, 2005a: 76ff.). Thus, the story told about a constellation in many ways has the form of a dramatic narrative. Furthermore, there are at least two basic developmental patterns (Stamm, 2005: 44ff.). On the one hand, we have the internal differentiation of a space of thought, for example in the form of introducing new distinctions or as internal critique, i.e. using the resources immanent in the space of thought for self-critical purposes. This is called an analytical constellation. This developmental path involves conceptual refinement and adds internal complexity. On the other hand, we have the going beyond a given space of thought by way of extension and incorporation. This is called a synthetic constellation. This kind of developmental path is often found at the margins of a given constellation and is initiated by contacts between different constellations. In the latter case, it can be said that a kind of cultural transfer takes place (Mulsow, 2005a: 78).

A specific type of constellation is called antagonistic (Stamm, 2005: 42f.). This is a constellation characterized by harbouring within itself tensions that emanate from conflicting influences. The post-Kantian philosophy of the 1790s in Germany was, on the one hand, convinced of the importance of the Kantian notion of human freedom (as opposed to the realm of nature) and, on the other, inspired by a type of theory that involved a monistic position (Spinozism) in which everything takes place according to necessity. These two inputs were equally strong and, at first sight, difficult to reconcile with one another. Furthermore, this double input was arguably a key to the rapid philosophical development that took place in this decade. Thus, there is reason to believe that antagonistic constellations tend to release philosophical creativity.

Furthermore, a distinction is made between the surface process and the depth dynamics of a constellation. The existence or discovery of erratic documents, i.e. texts that do not fit into the established picture (for example, from Kant to Hegel via Fichte and Schelling), indicates that there is something we do not know, and points to the existence of missing links, hidden motives or deep-seated convictions situated below the explicit motives and aims.
An important aspect of constellation research is the analysis of arguments. This is what makes it a philosophical undertaking. The analysis of arguments involves an investigation into the possibilities inherent in a certain space of thought. And this may very well mean going beyond the arguments that were actually put forward, by uncovering unrealized possibilities and options. In this way constellation research may indirectly contribute to ongoing contemporary discussions. Thus, constellation research is motivated not only by a historical but also by a systematic interest. It combines a participant and an observer perspective: reconstructing a past philosophical development and participating in an ongoing discussion. The systematic intent allows constellation research to gain a certain distance from the self-interpretations of the authors under study.

What are the limits of a particular constellation (Stamm, 2005: 50f.)? Who and what belongs to it? What is inside and what is outside it? One answer goes: part of a constellation are those who are making moves within one and the same space of thought (analytical constellation). Another answer is that the borders of a constellation are in general porous and fleeting (synthetic constellation). There tends to be a centre and a periphery, the latter, for example, in the form of persons only loosely attached to the constellation who may also be partaking in other constellations. Nevertheless, some preliminary delimitations must be made in order to know what the object under study is, together with some preliminary decision about who and what does and does not belong to a particular constellation. This decision may of course be revised at a later stage in the research process.

Sociology of philosophy

Whereas the sociology of knowledge is at least a hundred years old, the sociology of philosophy is of more recent origin. In a sense one can say that the sociology of knowledge consists in continuous refinements of Marx's proposition that it is not consciousness that determines social being but the other way around (Marx, 1974: 9). The two front figures in the more recent sociology of philosophy are Pierre Bourdieu, whose study L'ontologie politique de Martin Heidegger was first published in 1975 (expanded version in 1988, English translation in 1991), and Randall Collins, who published his monumental The Sociology of Philosophies in 1998. In the following I will confront constellation research with some key concepts and topics stemming from the tradition of the sociology of philosophy, in order to find out similarities and dissimilarities and to deliver some arguments about why both approaches may profit from taking each other into consideration.

Constellation research and the sociology of philosophy share a focus on interaction, networks and communication, including cooperation, rivalry and conflict. The formula 'competition among friends' describes quite well what goes on in many constellations; thereby competition may escalate into conflict and the parting of ways.
With Collins, constellation research furthermore shares an interest in creative milieus, highlighting the interactive and dialogical dimensions of creativity. Common to both is also the primacy of relations. Relationalism means that an entity of any kind has its position only in relation to other entities occupying a place in the same space or on the same field. Not authors, texts, problems and ideas in isolation, but in a network of relations, are the object of study. Constellation research strongly underlines the relationalism of abstract entities such as problems, ideas and theories. In this sense constellation research is through and through relational. This goes beyond, for example, Norbert Elias' notion of social configurations, which relates to individuals, groups or class fractions and their various forms of mutual interdependencies. A configuration, spanning from harmonious and friendly to unfriendly and hostile relations, is ultimately always made up of individuals (Elias, 2002: 51 and 244f.; cf. Mulsow, 2005a: 81ff.).

Another commonality is a certain preference for spatial metaphors (Füssel, 2005: 197), be it space of thought, field of forces (Henrich, Stamm), attention space (Collins) or social space and philosophical field (Bourdieu). Thus, doing philosophy involves making moves in a certain space or on a certain field. For Bourdieu the question of the delimitation of a field must be answered empirically: 'The limits of the field are situated at the point where the effects of the field cease' (Bourdieu and Wacquant, 1992: 100). Furthermore, he stresses that a field has dynamic borders which often are 'the stake of struggles within the field itself' (Bourdieu and Wacquant, 1992: 104). These are points of view that may be incorporated into constellation research. One may, for example, speak of constellation effects: when a certain argumentative move in a constellation does not provoke any positive or negative reaction, the limits of the constellation in question have been transcended; instead of resonance there is silence.

More difficult is finding an equivalent to Bourdieu's multi-faceted notion of capital in constellation research. When the latter talks about resources, it has primarily intellectual resources in mind (theories and arguments), but an extension could easily be made to include also social network contacts (social capital). The intellectual resources inherent in a constellation make up an opportunity structure in terms of structural possibilities and possible moves: what theoretical positions are within reach? What argumentative roads might be taken? Especially Collins stresses the importance of being at the right place at the right time in order to be able to take the next step in philosophy. Being part of a vibrant constellation is a good starting point if you want to become, in Collins' expression, a 'major' philosopher. Furthermore, taking part in such a vibrant intellectual constellation is certainly a case of philosophical timing, of being at the right place at the right time. Also in Patrick Baert's so-called positioning theory the issue of timing is of great importance (Baert, 2015).
A difference between Baert's approach and constellation research is that the former wants to blend out the motivations behind a certain intellectual intervention and instead focus on the more tangible effects and consequences of an intervention. Baert writes about a motivational bias, 'the elusive search for what went on in people's mind', and suggests 'abandoning a vocabulary of intentions for a vocabulary of effects', provided that 'the empirical evidence for claims regarding intentionality is wanting' (Baert, 2015: 181–182). The starting point for constellation research is, on the contrary, the initial motivations, knowledge interests and proto-theories that go into a space of thought and which must be reconstructed. In this respect constellation research is closer to Neil Gross' notion of 'intellectual self-concept', i.e. the kind of philosopher someone wants to be (for example, engaged in public affairs, or a distanced scientist). A philosopher's self-concept is about the original motivations that brought him or her into philosophy (Gross, 2008).

In his later work, Erving Goffman introduced the notions of frame and framing. They relate to the organization of experience, and 'frame analysis is to be understood to apply to experience of any kind, including the merely cerebral' (Goffman, 1986: 316). The basic idea is that a given social situation is not simply wide open for various definitions, but is normally already framed in a certain way, thereby indicating answers to questions like 'What kind of situation is this?' and 'What is it that's going on here?'. In a similar way an intellectual constellation can be said to be framed in a certain way (Mulsow, 2005a: 80). To enter it we must have at least preliminary answers to questions like 'What is the problem under discussion?', 'Why is this an important issue?' and 'What meta-theoretical motivations are presupposed?'. These questions indicate what we must know in order to enter and take part in a particular space of thought. A space of thought is always framed in a certain way. A sufficient understanding of this frame is a precondition for understanding what it is that's going on in the given space of thought. The phenomenon of productive misunderstandings may, at least in some cases, have its origin in 'misframings'. In the extreme case, a constellation may approach being a sub-culture whose philosophical slang is understandable only to insiders, but this is normally counteracted by the conviction of having something important to tell the intellectual world, i.e. a wish to communicate and transmit a message.

The notions of thought style and thought collective, as developed by Karl Mannheim and Ludwik Fleck in the 1920s and 1930s, may be activated within constellation research (Mannheim, 1986; Fleck, 1979). The output of a given intellectual constellation is not simply the achievement of individuals but has rather the character of a joint effort, and may also be an example of a particular thought style. Fleck's stress on the role of proto-ideas and of collective moods, including emotional components, is very much in line with constellation research. However, I see a certain risk that talking in terms of thought style and thought collective may be a bit too homogenizing, with the effect of making a constellation less complex and dynamic than it is. A constellation of individuals is not a collective, and the output of a constellation is not the result of thinking in the same way.
But if the two notions are explicated in a way that allows for a plurality of different voices and strivings, they may very well be introduced into constellation research.
Another point of contact between constellation research and the sociology of philosophy is the concept of generation (Henrich, 2005: 24f.). Dieter Henrich has in a different context introduced the concept of 'motivational history' (Henrich, 1996) as referring to the meta-theoretical convictions and aims that characterize major representatives of a philosophical generation, in his own case the generation of young Germans that took up higher studies in the years after 1945. These give a generation a common philosophical profile. In the sociology of knowledge/philosophy, the concepts of generation and generational units, the latter being constituted by a similar reaction to common significant events and formative experiences, have played an important role ever since Karl Mannheim's classical study from 1928 (Mannheim, 1972).
It seems rather obvious that Randall Collins' key concepts, such as networks, interaction ritual chains, cultural capital, emotional energy, transmission of emotional energy and energy star, are close to the approach of constellation research (Mulsow, 2005a: 84ff.). A constellation of persons/authors is a concentration of emotional energy generated mainly in direct contact with one another, often by way of face-to-face interaction. In such situations it is also natural that one or more individuals temporarily develop into an energy star, becoming a centre which transmits creative energy to a whole milieu. For example, Fichte was the undisputed energy star in the Jena constellation of the mid-1790s. Furthermore, each intellectual constellation probably tends to develop its own interaction rituals. Even Collins' ambitious idea of developing a sociology of thinking, which (so the claim goes) to a certain extent allows us to make predictions about what a particular intellectual will think in his or her next move, is not quite foreign to constellation research. 'Social structure is everywhere, down to the micro level. In principle, who will say what to whom is determined by social processes. And this means there is not only a sociology of conversation but a sociology of thinking' (Collins, 1998: 46). In Collins' work, however, this is hardly more than a vague promise. Constellation research, being interested in the oral dimension of philosophy, has the ambition to reconstruct conversations and discussions that must have taken place although they are not documented in the available material. Given what we know about a certain space of thought, it is deemed possible to fill in not only the missing argumentative moves that with high probability must have taken place but also the moves that might have taken place. To network contacts, interaction rituals, and transmitted emotional energy must be added the argumentative moves that were, or are, possible given the structure of a particular space of thought. This also allows for continuing lines of argument, beyond the ones that can be historically reconstructed, into contemporary debates. Moving in this direction would contribute to making the sociology of philosophy more attractive to philosophers.
Next, I would like to point to some important dissimilarities. The sociology of philosophy tends to remain on a rather high level of abstraction, i.e.
it often does not reach down to the micro-processes in which philosophical conceptions and theories take shape. This can be seen in Collins' magisterial book The Sociology of Philosophies. His key concepts, and his ambition to contribute to a sociology of thinking, point in the direction of micro-sociological investigations. What we find, however, is a program for a micro-sociology that is poor in its execution. Looking at things from a high altitude, it displays no method of discovery; it can confirm but not revise the history of philosophy. We find, for example, no suggestions on how to improve or revise our understanding of the development of post-Kantian philosophy in the 1790s. Martin Mulsow has suggested that Collins' method may be useful for 'the comparative analysis of large-scale types of networks' (Mulsow, 2005a: 89), but not for detailed studies of specific constellations. Collins himself seems to be fully aware of the fact that he does not reach down to 'the flow of micro-situations', because 'as we examine the history of intellectual networks, we generally find that intimate materials on the micro-level of the sociology of thinking are not available … What we glimpse, at best, are the long-term contours of interactional chains and their products …' (Collins, 1998: 53). Here, constellation research may lend the sociology of philosophy a helping hand.
Furthermore, the sociology of philosophy often has a disclosing intention (Gross, 2008: 237f.). It excels in the uncovering of unspoken strategies and hidden interests at work. Its primary target tends to be what Bourdieu calls scholastic reason: pure philosophy as uncontaminated by the world, i.e. the illusion that finding out the truth is the sole interest of those involved in philosophy (Bourdieu, 2000). This research orientation may be an invitation not to take the philosophical arguments brought forward seriously enough (cf. Kusch, 1995: 30). Constellation research, on the other hand, remains a philosophical undertaking through its focus on the analysis of arguments and its ambition to eventually contribute to on-going philosophical debates.
The most conspicuous absence in constellation research is in my eyes the lack of a concept of power, which on the other hand is never far away in the sociology of philosophy (Füssel, 2005: 197 and 205f.). When constellation research talks in terms of resources, it has in mind primarily argumentative resources, not the different kinds of social resources (various forms of capital) that establish positions of power. This dimension may be added to constellation research with the help of the sociology of philosophy, thereby making the former more attractive to sociologists. Reflections on power both in interpersonal relations and at the meso- and macro-levels should be of relevance to constellation research. A dominant figure in a constellation (an energy star) has power in terms of decision-making, in terms of setting the agenda and, to some extent, also in terms of forming the wishes and strivings of those who participate in the constellation.
A space of thought is an open and dynamic but not unlimited space for various moves. Furthermore, there are positions of power tied to the institutional and organizational setting upon which a certain constellation relies. Fichte was, for example, dismissed from the University of Jena for what were probably political reasons disguised as an accusation of atheism. Taking relations of power into account does not necessarily imply that the arguments of philosophers are not taken seriously and that they are treated as nothing but smokescreens for power struggles.
Finally, I want to touch on the possibility of transferring constellation research from philosophy to the social sciences. A salient difference between philosophy and the social sciences is that the latter often involves an input by way of systematic empirical investigations. This means that constellations in the social sciences tend to have the character of synthetic constellations; something new is incorporated that affects the argumentative moves that are possible and the positions that may be taken. Furthermore, there is reason to believe that constellation research can be conducted in the social sciences especially if the theoretical component is salient or marked in the constellation under study. A good candidate for being the object of constellation research is, for example, the Columbia milieu around Robert K. Merton and Paul Lazarsfeld, two scholars with very different backgrounds, from the 1940s onwards. Another example is the Edinburgh school and the genesis and development of the strong program in the sociology of knowledge.
Conclusion
I take it as a fact that there exists a rather far-reaching estrangement between philosophers and sociologists, with a few exceptions, in the contemporary intellectual world. This is a bad situation, especially when sociologists take an interest in intellectual history, the history of philosophy and philosophical issues in general, and when philosophers are open-minded about the sociological dimensions of doing philosophy. We cannot, again with a few exceptions, expect philosophers to become sociologists or sociologists to become philosophers. However, constellation research and the sociology of philosophy may lend each other a helping hand.
Constellation research keeps the door open not only to philosophy (it has been developed by philosophers) but also to sociology. The idea of spaces of thought equipped with various argumentative resources may be integrated into the sociology of philosophy, thereby making the latter more interesting and relevant to philosophers. As long as philosophers feel that the arguments of philosophers are not taken seriously enough, there is little chance of convincing them of the relevance and validity of sociological explanations. Constellation research also stresses and works out the relationalism of abstract entities such as problems, concepts, ideas and theories. This is another potential contribution to the sociology of philosophy: a thorough relationalism with both social and intellectual components. In this way constellation research may help realize the micro-sociological research program that, for example, Collins has worked out but not executed. Thus, constellation research may help the sociology of philosophy come closer to the ambitious goal of reaching down to the flow of micro-situations and providing a sociology of thinking.
Constellation research may in turn incorporate ideas from the sociology of philosophy to strengthen its already existing sociological leanings and components. Especially with the help of the concept of power, the sociological profile of constellation research may be strengthened. Constellations are not free from power, neither when it comes to interpersonal relations nor in the way macro- and meso-level factors affect them, for example, in terms of the availability of secure positions within the university system. Another contribution is the sociology of philosophy's interest in the meso-level, i.e. the institutional and organizational setting and context of philosophical activity. Furthermore, a Bourdieu-inspired field concept may help constellation research in thinking about the limits of a constellation: a constellation or a space of thought ends where a certain argumentative move does not provoke any positive or negative response.
Thus, I take a cross-fertilization between constellation research and the sociology of philosophy to be not only a real possibility but also an attractive one, for philosophers as well as sociologists. In practice this may take the form of research projects into the history of philosophy and contemporary philosophy in which both philosophers and sociologists take part. Such cooperative projects are, to my knowledge, not at all common today, and they would involve a combination of competencies that is seldom found in an individual researcher. I also think that there will always remain a healthy tension between philosophical and sociological approaches to philosophical issues. But cooperation and rivalry may very well go together in one and the same research project.
Broadband Continuous-wave Multi-harmonic Optical Comb Based on a Frequency Division-by-three Optical Parametric Oscillator
We report a multi-watt broadband continuous-wave multi-harmonic optical comb based on a frequency division-by-three singly-resonant optical parametric oscillator. This cw optical comb is frequency-stabilized with the help of a beat signal derived from the signal and frequency-doubled idler waves. The measured frequency fluctuation in one standard deviation is ~437 kHz. This is comparable to the linewidth of the pump laser, which is a master-oscillator seeded Yb-doped fiber amplifier at ~1064 nm. The measured powers of the fundamental wave and the harmonic waves up to the 6th harmonic wave are 1.64, 0.77, 3.9, 0.78, 0.17, and 0.11 W, respectively. The total spectral width covered by this multi-harmonic comb is ~470 THz. When properly phased, this multi-harmonic optical comb can be expected to produce by Fourier synthesis a light source consisting of periodic optical field waveforms that have an envelope full-width at half-maximum of 1.59 fs in each period.
Introduction
The development of ultrafast physics has made tremendous progress in the past decade. This includes the development of broadband coherent optical sources, which are desirable in ultrafast source development as well as in a wide range of research areas including laser spectroscopy and quantum optics [1]. While it is possible to produce ultrashort pulses via a Fourier waveform synthesizer at optical frequencies that leads to a periodic train of single-cycle attosecond pulses [2][3][4], more attention has been given to isolated pulse generation in the femtosecond and attosecond time domain. This is because an isolated pulse can be employed readily for experiments probing the electronic dynamics in atoms and molecules [5]. The common approaches that have been utilized for generating broadband multi-octave coherent optical sources include self-phase modulation/cross-phase modulation of femtosecond laser pulses in gases and solids [6,7], driving Raman resonances to produce sidebands by four-wave mixing with nanosecond or femtosecond lasers [8][9][10], synchronously pumped femtosecond optical parametric oscillators [11], and supercontinuum-seeded optical parametric amplifiers [12]. A common feature of these approaches is that they use a high-power pulsed laser as the pump source. Although there are impressive demonstrations of using these sources as femtosecond waveform synthesizers [13,14], the stringent requirement on phase stability dictates that the synthesized waveform easily varies with time unless elaborate feedback control is installed in the system [13].
The opposite extreme to isolated pulses is a continuous train of sub-cycle optical pulses. Since cw lasers are intrinsically more stable and better managed than pulsed lasers, a train of sub-cycle pulses at ~100 THz can offer unprecedented precision and accuracy, to as much as one part in 10^14, in ultrafast event timing, ranging, quantum control, and metrological applications. A Fourier waveform synthesizer using a phase-locked cw laser has a main attribute that pulsed lasers do not have: it can maintain highly precise phase and amplitude stability for each comb component to yield waveforms that are nearly free from waveform variation [3]. A cw periodic waveform can therefore be used to produce ultra-stable, novel shaped field potentials for trapping neutral and charged particles and studying their short-range dynamical behavior.
There are several new developments toward a broadband coherent optical source for the purpose of making a cw Fourier waveform synthesizer. A cavity-enhanced molecular modulation scheme has been demonstrated [15]: the spectral components of the molecular modulator were extended to span from 0.8 μm to 3.2 μm via four-wave mixing in a gas medium. A 4-THz cw frequency comb at 1.56 μm based on cascading quadratic nonlinearities has also been realized [16]. Yet the powers of the frequency bands produced in these developments are at the mW level, so their potential applications are limited. Here we describe an approach that utilizes reliable high-power cw fiber lasers. The approach employs a master-oscillator fiber-laser power amplifier pumped frequency division-by-three parametric oscillator [17,18] to provide the first three comb components, and then adopts quasi-phase-matched (QPM) nonlinear mixing to generate the next three higher harmonics, giving a highly stable, commensurate, six-component cw comb at the watt level. The approach described here combines the advantages of QPM-based parametric oscillators and frequency converters, which are compact and have inherently high conversion efficiency.
Experimental Section
The schematic of our multi-harmonic optical comb is shown in Figure 1. The watt-level frequency division-by-three singly resonant optical parametric oscillator (SRO) cavity is a bow-tie ring cavity consisting of two curved mirrors (M1 and M2) with a 100 mm radius of curvature, a flat mirror (M3), and an output coupler (M4). The OPO crystal is a 5 mol % MgO-doped PPLN crystal (from HC Photonics) with a length of 40 mm. The total cavity length is 500 mm. The pump, signal, and idler waves are designed to correspond to the 3rd harmonic wave, the 2nd harmonic wave, and the fundamental wave of the multi-harmonic optical comb. M1 and M2 have reflectances of <2%, >99.9%, and <5% at the pump, signal, and idler wavelengths, respectively. M3 has a reflectance of >99.9% at the signal wavelength. Output coupler mirror M4 has a 0.6% output coupling at the signal wavelength.
The two end faces of the MgO:PPLN crystal are optically polished and antireflection-coated with <1%, <0.2%, and <5% reflectance at the pump, signal, and idler wavelengths. The OPO crystal has a period of 30.8 µm and generates radiation at 1596 nm (signal) and 3192 nm (idler) at 92.8 °C when pumped at 1064 nm. The pump laser is a cw, linearly polarized, single-frequency Yb-doped fiber laser amplifier from IPG Photonics, seeded by a single-mode, mW-level DFB diode laser with <0.1 MHz linewidth at 1064 nm. The pump laser is capable of producing ~15 W of power in a TEM00 mode with M^2 < 1.05. The cavity is designed to resonate the signal wave with a focusing parameter ξ = L/b of 1 (L is the crystal length and b is the confocal parameter). The pump beam is focused to a beam waist radius of 60 µm to mode-match the cavity mode of the signal beam. The pump and idler exit the cavity from M2 and the signal output is at M4. The pump and idler are separated by a dichroic mirror for their power measurements. The idler wave is immediately focused into a 25 mm long MgO:PPLN crystal of period 34.25 µm for SHG of the idler, to produce a beat signal with the signal wave. The purpose of the beat signal is to lock the length of the SRO cavity. This beat signal, detected by a fast photodiode (EOT ET-3010), is sent to a cavity stabilization feedback circuit, shown enclosed in the dashed box in Figure 1. Locking the cavity length in turn fixes the wavelength of the signal to one of the cavity modes. A 250-µm thick intracavity fused silica etalon is inserted between M2 and M3 to maintain single-longitudinal-mode operation at high pump power.
The 4th to 6th harmonic waves can be generated in either intracavity or external-cavity configurations. Intracavity frequency conversion could provide higher conversion efficiency, but the dynamics of the SRO cavity become more complicated and the cavity mirrors' coatings are more susceptible to damage. So only the 4th harmonic comb component is generated by intracavity SHG from the 2nd harmonic wave (SRO signal wave). The nonlinear crystal for 4th harmonic wave generation is a 20.58 µm period MgO:PPLN crystal with a 10 mm length. The 5th and 6th harmonic waves are generated by single-pass sum-frequency generation (SFG) of the 2nd harmonic wave and the 3rd harmonic wave (SRO signal and pump), and SHG of the 3rd harmonic wave (SRO pump), respectively. The two nonlinear crystals are a 12.05 µm period MgO:PPLN crystal 25 mm in length and a 7.97 µm period MgO:PPSLT crystal 30 mm in length, respectively.
Frequency Control of the Division-by-Three OPO
When the optical parametric oscillator is operating, the frequencies of the signal and idler are 2ω + Δω and ω − Δω, where Δω is the deviation of the fundamental (idler) frequency from exact division-by-three of the SRO pump frequency at 3ω.
Ideally, Δω equals zero. For the SRO, zeroing Δω is done by adjusting the temperature of the gain crystal, the intracavity etalon, and the length of the cavity. The primary source of non-zero Δω is thermal and mechanical fluctuation of the SRO cavity length. It is therefore necessary to track and stabilize the cavity length. Here a cavity stabilization feedback circuit is introduced to control the free spectral range of the SRO cavity and thereby eliminate or minimize the frequency deviation. In our experiment, the frequency deviation is deduced from the interference beat signal between the frequency-doubled idler wave and the signal wave. This beat signal equals (2ω + Δω) − 2(ω − Δω) = 3Δω. Following standard practice in cavity stabilization, we give this beat signal an offset frequency Ω_AOM of +54 MHz by passing the SRO signal beam through an acousto-optic modulator before mixing with the idler's second harmonic. With this offset it is then possible to readily track the deviation Δω on both sides of Δω = 0. The main ICs employed in our phase detection circuit are two ultrafast comparators (Analog Devices AD96685), one phase discriminator (Analog Devices AD9901) and one differential receiver amplifier (Analog Devices AD8130). The error signal from our phase detection circuit is sent into the high-speed proportional integrating controller. Finally, the PZT driver adjusts a PZT (mounted on M3) in accordance with the output from the controller.
The quality of the stabilization is monitored by recording the resulting beat signal with a radio frequency (RF) spectrum analyzer (Rohde & Schwarz FSL3) at 100 kHz resolution bandwidth. The frequency of the recorded beat signal is 3Δω + Ω_AOM. Histograms of the recorded frequency deviation over a 30 min period, with and without cavity stabilization, are shown in Figure 2; the temperature of the OPO crystal was set to 92.8 °C to achieve phase matching in the frequency division-by-three cw SRO. Figure 2a shows the frequency deviation, Δν = Δω/2π, without cavity stabilization. The recorded deviation centers at 131.8 MHz with a ±14.7-MHz random drift (one standard deviation). With the stabilization circuit turned on, the frequency deviation is reduced, centering at −76 kHz with a ±437-kHz random drift. The solid red lines in Figure 2a,b are Gaussian fits, indicating that the drifts are randomly distributed. The drift width of 437 kHz in Figure 2b is comparable to the frequency stability of the seed DFB diode laser of the fiber amplifier, implying that the seed laser's drift may be determining the width of the stabilized source. This result shows that, by locking the seed laser to a stabilized reference-frequency comb linked to a primary frequency standard, Δω can be reduced to less than 1 Hz and eventually phase-stabilized for waveform synthesis.
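The arithmetic of the lock monitor is simple enough to verify directly. The following is a minimal sketch in Python (not the authors' instrumentation code; the function names are illustrative) of how the recorded beat frequency maps back to the frequency deviation:

```python
# Minimal sketch of the lock-monitor arithmetic; names are illustrative.
OMEGA_AOM = 54e6  # Hz, AOM offset applied to the signal beam (from the text)

def observed_beat(dnu_hz):
    """Beat between the AOM-shifted signal (2w + dw) and the idler second
    harmonic 2(w - dw): the carrier w cancels, leaving 3*dw plus the offset."""
    return 3.0 * dnu_hz + OMEGA_AOM

def deviation_from_beat(beat_hz):
    """Invert an RF-analyser reading back to the frequency deviation dw."""
    return (beat_hz - OMEGA_AOM) / 3.0

# Example: the stabilised comb centres at -76 kHz with 437 kHz scatter.
for dnu in (-76e3, 437e3):
    print(f"deviation {dnu/1e3:+7.0f} kHz -> beat {observed_beat(dnu)/1e6:.3f} MHz")
```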
Power of the Harmonic Comb Components
For a pump wavelength of 1064 nm, the generated harmonics are at 3192 nm, 1596 nm, 1064 nm, 798 nm, 638.4 nm, and 532 nm. The output power from the SRO and the subsequently produced harmonics are measured with a broadband thermopile detector (Coherent P10). The idler power at 3192 nm is determined after its transmission through the SHG crystal and a dichroic mirror used to block the second harmonic and any residual pump power. The third harmonic (residual pump) is combined with the second, fourth, fifth and sixth harmonics, which are collinear after their generation. A Pellin Broca prism is used to disperse these harmonics before sending each into the power meter. The harmonic powers are measured as a function of the input pump power; the results are shown in Figure 3.
Figure 3. The measured power of the multi-harmonic optical comb after the Pellin Broca prism and the dichroic mirror following the idler SHG in Figure 1. (a) The power of the fundamental wave (idler), the 2nd harmonic wave (signal), the 3rd harmonic wave (residual pump power), and the 4th harmonic wave, generated by the frequency division-by-three SRO and by intracavity second harmonic generation for the 4th harmonic wave. (b) The power of the 5th and 6th harmonic waves, generated by single-pass sum frequency generation and second harmonic generation, respectively.
Figure 3a shows the power of the fundamental wave (idler), the 2nd harmonic wave (signal), the 3rd harmonic wave (residual pump power), and the 4th harmonic wave of the multi-harmonic optical comb. The SRO's lasing threshold is 6 W. This relatively high threshold arises because the SRO nonlinear crystal adopted in this experiment is shorter than those used in previous experiments, and because of the insertion losses of the etalon and the intracavity SHG crystal. Without these extra optics, the SRO threshold drops to about 2 W. The maximum fundamental wave (idler) and 2nd harmonic wave (signal) powers are 1.64 W and 0.77 W, respectively, at an input pump power of 14 W. The residual pump power (the 3rd harmonic wave) at this input is 3.9 W. For an output coupling of 0.6%, the estimated circulating signal power in the cavity is as much as 128 W. This accounts for a respectable cw SHG conversion of the signal to the 4th harmonic, which is measured to be 0.78 W without correcting for losses. The 5th and 6th harmonics are obtained by single-pass wavelength mixing. Figure 3b indicates that the maximum powers obtained for the 5th and 6th harmonic waves are 166 mW and 114 mW, respectively.
The power achieved for the frequency division-by-three SRO is at least ten times higher than previously reported in the literature [17]. This is the first time a cw harmonic comb of up to six harmonics has been reported. The total harmonic comb power of 7.4 W is unprecedented. High cw power is needed for effective phase and amplitude management of the comb components in waveform synthesis [19] and its subsequent applications. The multiwatt comb power that has been achieved is expected to be sufficient for this purpose.
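As a quick consistency check on the circulating-power figure quoted above, the intracavity signal power follows directly from the measured output power and the output-coupler transmission:

$$P_{\mathrm{circ}} \approx \frac{P_{\mathrm{signal}}}{T_{\mathrm{OC}}} = \frac{0.77\ \mathrm{W}}{0.006} \approx 128\ \mathrm{W}.$$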
Simulated Waveform Synthesis with the cw Harmonic Comb
The harmonic comb reported here has a fundamental wavelength of 3192 nm. The relative phase relationship of the harmonic comb is critical during waveform synthesis. The fixed phase relationship in a phase-matched three-wave mixing process is ϕ_a = ϕ_b + ϕ_c − π/2, where the subscripts a, b, and c denote the three optical waves [20]. Since all components of the cw harmonic comb in this work are generated by three-wave mixing processes, the deduced phase of the nth harmonic ω_n is ϕ_n = nϕ_1 − (n − 1)π/2, where ϕ_1 = (ϕ_3 + π)/3. In this harmonic comb, the phase of the pump, ϕ_3, is the only unchangeable parameter in the system; the phases of the other components follow the phase of the pump according to the deduced relation. By managing the phase and amplitude of each comb component, periodic field waveforms of arbitrary shape could be synthesized. The calculated repetition rate of the pulse train synthesized with this comb is ~94 THz, with an equivalent period spacing of 10.6 fs. The shortest pulse synthesized within each period is a transform-limited sub-cycle cosine pulse that has an electric field FWHM of 942 attoseconds; the FWHM of the intensity envelope of this cosine pulse is 1.59 fs. The simulated waveforms synthesized with the comb produced in this experiment, with the spectral phases adjusted to be equal, are plotted as the red curve in Figure 4. The narrowest pulses obtainable with this comb, found by additionally adjusting the amplitudes of the comb components to be equal, are plotted in blue in the same figure. For a comb produced with a perfect division-by-three cw SRO this pulse train would be continuous. In reality the SRO is not perfect and there is a frequency deviation Δω, as described in Section 3.1 above. The time duration over which the pulses in the train remain intact depends on this frequency deviation Δω of the SRO. With Δω at one standard deviation of the present case, the pattern of synthesized pulses repeats with a period of ~2288 ns, corresponding to the inverse of the 437 kHz frequency deviation. Figure 5 shows the evolution of the synthesized waveform at this frequency deviation over a time span of 3000 ns, clearly indicating that the pattern repeats itself every 2288 ns. Taking a 10% drop in the maximum field strength as the criterion, the numerical simulations show that the sub-cycle cosine electric field maintains its strength for over 376 ns in every cycle. Since this is for one standard deviation of the frequency deviation, 86% of the time this waveform holds its shape for 376 ns or longer. We pointed out in Section 3.1 that the residual frequency deviation is due to the drift of the seed laser. Hence we believe that this duration can be extended to over 1000 ns by actively stabilizing the seed laser of the pump in this experiment.
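A minimal numerical sketch of this synthesis (ours, not the authors' simulation code) is given below: it builds the equal-amplitude, equal-phase six-harmonic field on a 3192 nm fundamental and estimates the field FWHM of the central lobe, which comes out near the 942 as quoted above. Replacing the equal amplitudes with the square roots of the measured powers would instead reproduce the red curve of Figure 4.

```python
import numpy as np

# Sketch of Fourier synthesis with six harmonics of a 3192 nm fundamental.
# Equal amplitudes and zero spectral phases give the transform-limited
# sub-cycle cosine pulse discussed in the text (the blue curve of Figure 4).
C = 2.99792458e8                  # speed of light, m/s
F1 = C / 3192e-9                  # fundamental frequency, ~93.9 THz

t = np.linspace(-5.3e-15, 5.3e-15, 200001)   # one ~10.6 fs period
field = sum(np.cos(2 * np.pi * n * F1 * t) for n in range(1, 7))

# FWHM of the central field lobe, walking outwards from the peak.
half = 0.5 * field.max()
i = j = int(np.argmax(field))
while field[i] >= half:
    i -= 1
while field[j] >= half:
    j += 1
print(f"field FWHM of central lobe ~ {(t[j] - t[i]) * 1e18:.0f} as")
```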
Conclusions
We have demonstrated a broadband cw multi-harmonic optical comb based on a frequency division-by-three optical parametric oscillator. The frequency deviation is centered at −76 kHz with a fluctuation of 437 kHz in one standard deviation. According to our numerical simulations, a stable sub-cycle cosine electric field pulse can last for more than 376 ns at 437-kHz frequency deviation, which is two orders of magnitude longer than in any synthesizing scheme reported to date. The output powers of the spectral components of this cw optical comb are 1.64 W, 0.77 W, 3.9 W, 0.78 W, 0.17 W, and 0.11 W for the fundamental through the 6th harmonic waves. The bandwidth of this multi-harmonic comb is ~470 THz. The results show that this cw multi-harmonic optical comb can be a useful light source for a stable optical waveform function generator.
Figure 1. Multi-harmonic optical comb based on a frequency division-by-three singly resonant optical parametric oscillator. The optical parametric oscillator generates the fundamental wave (idler), the 2nd harmonic wave (signal), and the 3rd harmonic wave (pump) of the multi-harmonic optical comb. The 4th to 6th harmonics are generated by intracavity second harmonic generation (SHG), external-cavity single-pass sum frequency generation (SFG), and external-cavity single-pass SHG, respectively. The SHG of the idler (the fundamental wave) and a small portion of the 2nd harmonic wave are combined to produce an error signal for frequency control by adjusting the cavity length, based on a home-made phase detection circuit, a commercial high-speed proportional integrating (PI) controller (New Focus LB1005) and a high-voltage PZT driver (Physik Instrumente E-501.621), shown inside the dashed box.
Figure 2. Histograms of the frequency deviation from a 30-min beat-signal recording on a radio frequency (RF) spectrum analyzer. The solid red lines are Gaussian fits to the distributions. (a) Without cavity stabilization: the frequency deviation centers at 131.8 MHz with a 14.7-MHz standard deviation. (b) With the cavity stabilization feedback circuit functioning properly: the center is at −76 kHz with a 437-kHz standard deviation.
Figure 4. Synthesized waveforms calculated for the six-component comb obtained in this experiment (red curve) and for the case where the components are reduced to equal amplitude (blue curve), for the zero-detuning case. This would produce a continuous train of pulses of the calculated shape shown here.
Figure 5. Numerical simulations of a sub-cycle cosine pulse electric field generated by this multi-harmonic optical comb with a frequency deviation of 437 kHz. The vertical scale is the time evolution of the pulse train from 0 to 3000 ns. The horizontal scale is the local time of the waveform, displayed for about three cycles. The field pattern shifts by a phase of 2π/3 every 1/3 of a period and replicates itself approximately every 2288 ns, which corresponds to the inverse of the frequency deviation used in the simulation. The normalized electric field strength is color-coded to assist in recognizing the evolution of the field pattern: dark red regions represent values from 0.8 to 1.0; orange regions from 0.5 to 0.8; green regions from 0.253 to 0.5; and blue regions from 0.0 to 0.253.
Measurement of the W boson helicity fractions in the decays of top quark pairs to lepton + jets final states produced in pp collisions at sqrt(s) = 8 TeV
The W boson helicity fractions from top quark decays in ttbar events are measured using data from proton-proton collisions at a centre-of-mass energy of 8 TeV. The data were collected in 2012 with the CMS detector at the LHC, corresponding to an integrated luminosity of 19.8 inverse femtobarns. Events are reconstructed with either one muon or one electron, along with four jets in the final state, with two of the jets being identified as originating from b quarks. The measured helicity fractions from both channels are combined, yielding F[0] = 0.681 +/- 0.012 (stat) +/- 0.023 (syst), F[L] = 0.323 +/- 0.008 (stat) +/- 0.014 (syst), and F[R] = -0.004 +/- 0.005 (stat) +/- 0.014 (syst) for the longitudinal, left-, and right-handed components of the helicity, respectively. These measurements of the W boson helicity fractions are the most accurate to date and they agree with the predictions from the standard model.
Introduction
The data from proton-proton (pp) collisions produced at the CERN LHC provide an excellent environment to investigate properties of the top quark, in the context of its production and decay, with unprecedented precision. Such measurements enable rigorous tests of the standard model (SM), and deviations from the SM predictions would indicate signs of possible new physics [1][2][3][4]. In particular, the W boson helicity fractions in top quark decays are very sensitive to the Wtb vertex structure. The W boson helicity fractions are defined as the partial decay rate for a given helicity state divided by the total decay rate: F_{L,R,0} ≡ Γ_{L,R,0}/Γ, where F_L, F_R, and F_0 are the left-handed, right-handed, and longitudinal helicity fractions, respectively. The helicity fractions are expected to be F_0 = 0.687 ± 0.005, F_L = 0.311 ± 0.005, and F_R = 0.0017 ± 0.0001 at next-to-next-to-leading order (NNLO) in the SM, including electroweak effects, for a top quark mass m_t = 172.8 ± 1.3 GeV [5]. Anomalous Wtb couplings, i.e. those that do not arise in the SM, would alter these values.
Experimentally, the W boson helicity can be measured through the study of angular distributions of the top quark decay products. The helicity angle θ* is defined as the angle between the direction of either the down-type quark or the charged lepton arising from the W boson decay and the reversed direction of the top quark, both in the rest frame of the W boson. The distribution of the cosine of the helicity angle depends on the helicity fractions in the following way:
(1/Γ) dΓ/d cos θ* = (3/8)(1 − cos θ*)² F_L + (3/4)(1 − cos²θ*) F_0 + (3/8)(1 + cos θ*)² F_R.    (1)
This dependence is shown in Fig. 1 for each contribution separately, normalised to unity, and for the SM expectation. Charged leptons (or down-type quarks) from left-handed W bosons are preferentially emitted in the direction opposite to the W boson, and thus tend to have lower momentum and be closer to the b jet from the top quark decay, as compared to charged leptons (or down-type quarks) from longitudinal or right-handed W bosons. The measurement of the W boson helicity is sensitive to the presence of non-SM couplings between the W boson, the top quark, and the bottom quark.
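For illustration, the angular distribution of Eq. (1) is easy to evaluate numerically; the following minimal sketch (ours, not CMS code) checks that it is normalised, using the NNLO SM fractions quoted above as defaults:

```python
import numpy as np

def dgamma_dcos(c, f0=0.687, fl=0.311, fr=0.0017):
    """(1/Gamma) dGamma/dcos(theta*) for helicity fractions (f0, fl, fr)."""
    return (3/8 * (1 - c) ** 2 * fl
            + 3/4 * (1 - c ** 2) * f0
            + 3/8 * (1 + c) ** 2 * fr)

c = np.linspace(-1.0, 1.0, 2001)
print(f"integral over cos(theta*): {np.trapz(dgamma_dcos(c), c):.4f}")  # ~1.00
```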
A general parametrisation of the Wtb vertex can be expressed as [1,6]
L_Wtb = −(g/√2) b̄ γ^μ (V_L P_L + V_R P_R) t W⁻_μ − (g/√2) b̄ (iσ^{μν} q_ν / M_W)(g_L P_L + g_R P_R) t W⁻_μ + h.c.,    (2)
where V_L, V_R, g_L, g_R are vector and tensor couplings (complex constants), q = p_t − p_b, p_t (p_b) is the four-momentum of the top quark (b quark), P_L (P_R) is the left (right) projection operator, and h.c. denotes the Hermitian conjugate. Hermiticity conditions on the possible dimension-six Lagrangian terms also impose Im(V_L) = 0 [7]. In the SM and at tree level, V_L = V_tb, where V_tb ≈ 1 is the Cabibbo-Kobayashi-Maskawa matrix element connecting the top and bottom quarks, and V_R = g_L = g_R = 0. The relationships between the W boson helicity fractions and the anomalous couplings, including dependences on the b quark mass, are given in Ref. [8]. Constraints on the anomalous couplings have been derived from previous W boson helicity measurements, and from single top quark differential cross section production measurements [15].
This Letter describes a measurement of the W boson helicity fractions in tt events involving one lepton and multiple jets, tt → bℓν b̄qq̄′, and its charge conjugate, where ℓ is an electron or a muon, including those from leptonic decays of a tau lepton. Final states corresponding to such processes are referred to as lepton+jets. The measurement relies on the analysis strategy described in Ref. [13]. The measurement is performed using pp collisions at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 19.8 fb−1, collected during 2012 by the CMS detector.
The CMS detector
The CMS detector is a multipurpose apparatus of cylindrical design with respect to the proton beams. The main features of the detector relevant for this analysis are briefly described here. Charged particle trajectories are measured by a silicon pixel and strip tracker, covering the pseudorapidity range |η| < 2.5. The inner tracker is immersed in a 3.8 T magnetic field provided by a superconducting solenoid of 6 m in diameter that also encompasses several calorimeters. A lead tungstate crystal electromagnetic calorimeter (ECAL) and a brass and scintillator hadronic calorimeter surround the tracking volume and cover the region |η| < 3. Quartz fibre and steel hadron forward calorimeters extend the coverage to |η| ≤ 5. Muons are identified in gas ionisation detectors embedded in the steel return yoke of the magnet. The data for this analysis are recorded using a two-level trigger system. A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [16].
Data and simulated samples
Signal events corresponding to top quark pairs that decay to lepton+jets final states are expected to contain one isolated lepton (electron or muon) together with at least four jets, two of which originate from b quark fragmentation. Such events are referred to separately as e+jets or µ+jets, or, when combined, as ℓ+jets. Background events containing a single isolated lepton and four reconstructed jets arise mainly from processes that produce events containing a single top quark, processes that produce multijet events in association with a W boson that decays leptonically (W+jets), or Drell-Yan processes accompanied by multiple jets (DY+jets) when one of the leptons is misidentified as a jet or goes undetected. Multijet processes can also mimic lepton+jets final states, if a jet is reconstructed as an electromagnetic shower or, less likely, if a nonprompt muon from a hadron decay in flight fulfils all identification criteria of a prompt muon.
Simulated Monte Carlo (MC) samples, interfaced with GEANT4 [17], are used to account for detector resolution and acceptance effects, as well as to estimate the contribution from background processes that have characteristics similar to lepton+jets final states in tt decays. A signal tt sample, which also provides a reference for the SM (see Eq. (5)), is simulated using MADGRAPH v5.1.3.30 [18] with matrix elements having up to three extra partons in the final state. The parton distribution function (PDF) set CTEQ6L1 [19] is used when simulating this reference tt sample. The MADGRAPH generator is interfaced with PYTHIA v6.426 [20], tune Z2* [21], to simulate hadronisation and parton fragmentation, and also with TAUOLA v27.121.5 [22] to simulate τ lepton decays. This SM reference tt sample is simulated assuming m_t = 172.5 GeV, which determines the leading-order (LO) W boson helicity fractions F_L^SM, F_0^SM, and F_R^SM for that sample (Eq. (3)). Single top quark events in the s, t, and tW channels are generated using POWHEG v1.0 [23] and PYTHIA interfaced with TAUOLA, with the PDF set CTEQ6M [19]. Background W+jets and DY+jets processes are simulated using MADGRAPH with the PDF set CTEQ6L1, followed by PYTHIA for fragmentation and hadronisation. Finally, background multijet processes are simulated using the PYTHIA event generator. Corrections are applied to the simulated samples so that the resolutions, energy scales, and efficiencies as functions of p_T and η of jets [24] and leptons [25] measured in data are well described. The effect of multiple pp collisions occurring in the same bunch crossing (pileup) is also taken into account in the simulation.
The data samples selected for this measurement were recorded using inclusive single-lepton triggers, which require at least one isolated electron (muon) with p_T > 27 (24) GeV, used to define the e+jets (µ+jets) data sample. The decay products of candidate top quarks are reconstructed using the CMS particle-flow (PF) algorithm, described in detail elsewhere [26,27]. Individual charged particles identified as coming from pileup interactions are removed from the event. Effects of neutral particles from pileup interactions are mitigated by applying corrections based on event properties. Leptons are required to originate from the primary vertex of the event [28]. A lepton is determined to be isolated using a variable computed as the total transverse momentum of all particles (except the lepton itself) contained within a cone of radius 0.4, centred on the lepton direction, relative to the transverse momentum of the lepton. Electrons are identified using a multivariate analysis (MVA) [29] based on information from the inner tracker and the ECAL. Events are selected for the e+jets data sample if the identified electron has an MVA discriminant value greater than 0.9, is determined to be isolated, has p_T > 30 GeV, and |η| < 2.5. Muons are identified by matching information from the inner tracker and the muon spectrometer [30]. Events are selected for the µ+jets data sample if they contain an isolated muon with p_T > 26 GeV and |η| < 2.1. Events with at least one additional isolated electron or muon are vetoed to reject backgrounds from dileptonic decay modes of tt and from DY+jets processes. Jets are reconstructed [24] using the anti-k_T clustering algorithm [31], with a distance parameter of 0.5. The selected or vetoed leptons described above are not allowed to be clustered into jets, to avoid ambiguities.
The event selection requires at least four reconstructed jets having |η| < 2.4, of which the four most energetic jets are required to have p_T higher than 55, 45, 35, and 20 GeV. Events with additional jets are not vetoed. The transverse momentum imbalance of the event, p_T^miss, is determined as the negative vector sum of the transverse momenta of all reconstructed particles, excluding those charged particles not associated with the primary vertex. The transverse mass of the W boson is defined as
M_T = sqrt( 2 p_T^ℓ p_T^miss (1 − cos Δφ) ),
where p_T^ℓ is the transverse momentum of the lepton, p_T^miss is the magnitude of the p_T^miss vector, and Δφ is the angle in the (x, y) plane between the direction of the lepton and p_T^miss. To reduce the multijet background and suppress dilepton events from tt processes, events are required to have 30 < M_T < 200 GeV. All backgrounds are further suppressed by requiring that at least two jets be identified as originating from b quarks. All jets with p_T > 20 GeV are considered as b quark candidates, including those that are not among the four most energetic. The combined secondary vertex algorithm [32,33] tags b quark jets with an efficiency of about 70% and mistags jets originating from gluons or u, d, or s quarks with a probability of about 1%, for the typical p_T range (30-100 GeV) probed in tt events. Charm jets have a probability of ≈20% of being tagged as b quark jets.
The residual multijet backgrounds, already strongly suppressed by the b tagging requirement described above, are estimated by normalising simulated event samples to yields in control data samples. The control samples are defined by selection criteria similar to those for the signal, but with no b tagging requirement, and with M_T < 30 GeV for the µ+jets channel or an electron MVA discriminant value smaller than 0.5 for the e+jets channel. The estimated amount of multijet events is ≈2% of the e+jets sample, and less than 1% of the µ+jets sample. The contributions of all other residual backgrounds are determined using simulation.
Reconstruction of the tt system and reweighting method
The reconstruction of the tt system, described in detail in Ref. [13], relies on testing the selected lepton, the measured p_T^miss, and all selected jets for their compatibility with the top quark decay products from the leptonic (t → bW → bℓν) and hadronic (t → bW → bqq̄′) branches. The unmeasured component of the neutrino momentum, p_z^ν, is determined by requiring the tt system to be consistent with the invariant masses of two top quarks and two W bosons. With these constraints, b jets are correctly assigned to the leptonic (hadronic) branch in about 74% (71%) of signal events. After the assignment, a kinematic fit is performed, in which the momenta of the measured jets and lepton are allowed to vary within their resolutions to better comply with the mass constraints, leading to an improved determination of p_z^ν and a more accurate reconstruction of the tt system. In about 5-7% of the selected events, the fit fails to find a solution that is compatible with the constraints, and such events are discarded. The number of data events passing all selection criteria, including the fit convergence, is 71 458 in the e+jets sample and 70 986 in the µ+jets sample.
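The transverse-mass definition reconstructed above is straightforward to compute; a minimal sketch with illustrative values (not CMS software):

```python
import math

def transverse_mass(pt_lep, pt_miss, dphi):
    """W boson transverse mass, M_T = sqrt(2 pT(lep) pT(miss) (1 - cos dphi))."""
    return math.sqrt(2.0 * pt_lep * pt_miss * (1.0 - math.cos(dphi)))

# A typical leptonic-W topology passes the 30 < M_T < 200 GeV requirement:
print(f"M_T = {transverse_mass(40.0, 35.0, 2.8):.1f} GeV")  # ~73.7 GeV
```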
A study using simulations normalised to the most precise theoretical cross sections available to date [34][35][36][37] indicates that the final sample composition is largely dominated by tt events, with about 82% of events from the ℓ+jets decay mode, ≈10% from other tt decay modes (including τ leptons), and ≈3.5% of the events from single top quark processes. The remaining events come from backgrounds not containing top quarks in the final state.
The method [13] employed to measure the W boson helicity fractions (F_L, F_0, F_R) ≡ F consists of maximising a binned Poisson likelihood function constructed from the number of observed events in data, N_data(i), and the number of expected events from MC simulation, N_MC(i; F), in each bin i of the reconstructed cos θ*_rec distribution. While the charged lepton is easily identified in the leptonic branch of tt decays, the down-type quark jet arising from the W boson decay in the hadronic branch of tt decays cannot be experimentally distinguished from the up-type quark jet. Due to this ambiguity, only the absolute value |cos θ*_rec| can be reconstructed for the hadronic branch. Hence, only the leptonic branch measurement of cos θ*_rec is used in this analysis. The expected numbers of events from background processes, N_W+jets(i), N_DY+jets(i), and N_multijet(i), represent W boson production in association with multiple jets, Drell-Yan production in association with multiple jets, and production of multiple jets, and do not depend on the W boson helicity fractions. For the processes containing top quarks, the number of expected events in a given bin i is modified by reweighting each event in that bin by a factor w, defined for each decaying branch as
w(cos θ*_gen; F) = ρ(cos θ*_gen; F) / ρ(cos θ*_gen; F^SM),
where ρ(cos θ*; F) denotes the angular distribution of Eq. (1) evaluated with fractions F, θ*_gen is the helicity angle (specified at matrix element level) of a particular decay branch, and F_L^SM, F_0^SM, F_R^SM are given in Eq. (3). Therefore, the number of expected events, as a function of the helicity fractions to be measured, is
N_MC(i; F) = N_tt(i; F) + N_single-t(i; F) + N_W+jets(i) + N_DY+jets(i) + N_multijet(i),
where
N_tt(i; F) = F_tt Σ_{tt events in bin i} w_lep(cos θ*_gen; F) × w_had(cos θ*_gen; F),
N_single-t(i; F) = Σ_{single-t events in bin i} w_single-t(cos θ*_gen; F)
represent the expected numbers of events fulfilling the event selection criteria for processes involving top quark pair and single top quark production, respectively. The normalisation factor F_tt for the tt sample is a single free parameter in the fit across all bins. The expected cross section for the simulated reference tt sample is 252.9 +13.3/−14.5 pb, calculated at NNLO and next-to-next-to-leading-log (NNLL) accuracy [34,35], and describes the data well. The fitted values of F_tt in both the e+jets and µ+jets channels are compatible with 1.00 within 3%. The overall normalisation factor for simulated single top quark events is not modified in the fit, and the uncertainty in the assumed cross section is considered as a source of systematic uncertainty. Finally, the unitarity constraint (F_L + F_0 + F_R = 1) is imposed, so that one of the helicity fractions, namely F_R, is bound by the measurement of the other two. The method was validated using pseudo-experiments, in which the fitting procedure was performed on pseudo-data mimicking altered helicity fractions. Linearity tests show that the fitting procedure correctly retrieves the helicity fractions for altered input values of F_0 ∈ [0.50, 0.85] and F_L ∈ [0.20, 0.50]. Likewise, the corresponding statistical uncertainties were verified using sets of statistically uncorrelated pseudo-data.
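A minimal sketch of the reweighting fit follows (an illustration under stated assumptions, not the CMS implementation: the reference fractions, binning, and event arrays are placeholders). Each simulated decay branch is weighted by the ratio of the angular density at trial fractions F to that at the reference fractions, the reweighted events fill the reconstructed cos θ* template, and a binned Poisson negative log-likelihood compares it with data; in the paper, tt events carry the product w_lep × w_had of both branch weights.

```python
import numpy as np

F_SM = (0.687, 0.311, 0.0017)  # placeholder reference fractions (f0, fl, fr)

def rho(c, f):
    """Angular density of Eq. (1) for fractions f = (f0, fl, fr)."""
    f0, fl, fr = f
    return 3/8*(1 - c)**2*fl + 3/4*(1 - c**2)*f0 + 3/8*(1 + c)**2*fr

def weight(c_gen, f):
    """Per-branch event weight w(cos(theta*)_gen; F) relative to the reference."""
    return rho(c_gen, f) / rho(c_gen, F_SM)

def nll(f, data_counts, mc_crec, mc_cgen, bins):
    """Binned Poisson negative log-likelihood (constant terms dropped).

    mc_crec: reconstructed cos(theta*) of simulated events (fills the bins);
    mc_cgen: matching generator-level cos(theta*) (sets the weights)."""
    expected, _ = np.histogram(mc_crec, bins=bins, weights=weight(mc_cgen, f))
    expected = np.clip(expected, 1e-9, None)
    return np.sum(expected - data_counts * np.log(expected))
```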
Systematic uncertainties
Systematic effects that could potentially bias the measurement of the W boson helicity fractions have been investigated, and their corresponding uncertainties determined, as presented in Table 1. Residual corrections are applied in simulation to the jet energy scale (JES) to account for differences between data and simulation. The momenta of the jets in simulation are also smeared so that the jet energy resolution (JER) in simulation agrees with that in data. These corrections and smearings are propagated into p_T^miss to correct its momentum scale. The uncertainties [24] associated with the JES and JER corrections are also propagated to p_T^miss, and the full analysis, including the tt reconstruction and the resulting measurements of the W boson helicity fractions, is repeated. Scale factors are used to correct the b tagging efficiency in simulation; these corrections are shifted by their estimated uncertainties and the full analysis repeated. Scale factors are also used to correct leptons for their identification, isolation, and trigger efficiencies, which are varied within their uncertainties so as to maximise potential shape variations of the predicted cos θ* distributions.
To account for any possible bias of the W boson helicity measurements due to uncertainties in the normalisation of simulated backgrounds, the assumed cross section for each sample is varied individually [13]. An uncertainty of 30% is used for the normalisation of DY+jets, single top quark, and W boson production in association with light-quark or gluon jets. Since the modelling of the simulated heavy-flavour content of the W+jets sample is known to be inaccurate, an uncertainty of +100%/−50% is assumed for simulated events involving a W boson produced in association with b quark jets. The impact of the DY+jets normalisation uncertainty on the analysis is small, since it corresponds to only a few percent of the sample composition. The normalisation of the multijet background is estimated from control samples and results in an uncertainty of ±50% in the e+jets channel and +40%/−50% in the µ+jets channel. Shape uncertainties in the multijet background templates were investigated by comparing the distributions in several different control regions, both in MC and in data, and were found to be negligible compared to the much larger normalisation uncertainties.
Several uncertainties from possible systematic effects related to theoretical modelling of the signal are estimated by replacing the default tt samples with alternative tt samples and repeating the entire analysis. Specifically, for the MADGRAPH interfaced with PYTHIA event generation, the default m_t value of 172.5 GeV is shifted up and down by 1 GeV; the renormalisation and factorisation scales are varied down (up) by a factor of 0.5 (2); the kinematic scale used to match jets to partons (matching threshold) is varied down (up) by a factor of 0.5 (2); finally, the parton shower and hadronisation modelling is studied in a tt sample simulated with MC@NLO v3.41.
Figure 3: Left: the measured W boson helicity fractions in the (F_0, F_L) plane. The dashed and solid ellipses enclose the allowed two-dimensional 68% and 95% CL regions, for the combined ℓ+jets measurement, taking into account the correlations on the total (including systematic) uncertainties.
The error bars give the one-dimensional 68% CL interval for the separate F_0 and F_L measurements, with the inner-tick (outer-tick) mark representing the statistical (total) uncertainty. Right: the corresponding allowed regions for the real components of the anomalous couplings g_L and g_R at 68% and 95% CL, for V_L = 1 and V_R = 0. A region near Re(g_L) = 0 and Re(g_R) ≠ 0, allowed by the fit but excluded by the CMS single top quark production measurement, is omitted. The SM predictions are shown as stars.
Uncertainties in the helicity fractions arising from the limited size of the simulated tt samples are taken into account, both in the main analysis and in the determination of the modelling uncertainties. In the former case, these effects are added as a separate source of uncertainty. In the latter case, the systematic uncertainties in the W boson helicity are assigned to be the larger of either (i) the statistical precision of the limited sample size or (ii) the systematic shift of the central value with respect to the reference tt sample. The shape of the p_T spectrum of top quarks, as measured by the differential cross section for top quark pairs [25,40], has been found to be softer than the predictions from the MADGRAPH simulation. The effect of this mismodelling is estimated by reweighting the events in the simulated tt sample, so that the top quark p_T at parton level in the MC describes the unfolded data distribution. Further, the systematic effects due to the PDFs used to simulate the signal and background samples are estimated according to the prescriptions described in Refs. [41,42], using the NNPDF21 [43] and MSTW2008lo68cl [44] PDF sets as alternatives to those used at generation. Finally, uncertainties related to the modelling of pileup in simulated events are also taken into account. The total systematic uncertainty is given by the sum in quadrature of all uncertainties described above.
Results
The measurements of the W boson helicity fractions, using cos θ* from the leptonic branch of tt events that decay into e+jets or µ+jets, including the full combination of these two measurements, are shown in Table 2. Within an individual channel, the helicity parameters F_0 and F_L are fit simultaneously, but they are strongly anti-correlated due to the unitarity constraint F_L + F_0 + F_R = 1, as indicated by the statistical correlation coefficient ρ_{0,L} given in the table. The separate helicity measurements from the e+jets and µ+jets channels are combined into a single ℓ+jets measurement using the BLUE method [45,46], taking into account all uncertainties and their possible correlations. Uncertainties related to lepton efficiency, multijet background estimations, and statistical uncertainties are considered uncorrelated between the e+jets and µ+jets analyses, while all other uncertainties are assumed to be fully correlated. The combined ℓ+jets measurement of the helicity fractions is dominated by the µ+jets channel, with weights more than double those of the e+jets channel. The χ² of the combination is 2.13 for 2 degrees of freedom, corresponding to a probability of 34.5%. The measurement uncertainties are dominated by systematic effects that are correlated between the e+jets and µ+jets channels. The combined F_0 and F_L values are anti-correlated with a statistical correlation coefficient ρ_{0,L} = −0.959. The total correlation coefficient, considering both statistical and systematic uncertainties, is found to be −0.870.
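The BLUE combination itself is a short linear-algebra step; the sketch below (with placeholder numbers, not the paper's error breakdown) shows the standard estimator used to merge two correlated measurements:

```python
import numpy as np

def blue_combine(measurements, cov):
    """Best Linear Unbiased Estimate: weights minimise the combined variance."""
    x = np.asarray(measurements, dtype=float)
    cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    ones = np.ones_like(x)
    norm = ones @ cinv @ ones
    weights = cinv @ ones / norm        # weights sum to one by construction
    return weights @ x, 1.0 / norm      # combined value and its variance

# Illustration with two correlated F0-like measurements (placeholder numbers):
m = [0.68, 0.69]
rho = 0.8  # assumed correlation of the dominant systematic uncertainties
s = [0.030, 0.026]
cov = [[s[0]**2, rho*s[0]*s[1]], [rho*s[0]*s[1], s[1]**2]]
val, var = blue_combine(m, cov)
print(f"combined: {val:.3f} +/- {var**0.5:.3f}")
```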
The measured helicity fractions presented in Table 2 are consistent with the SM predictions given at NNLO accuracy [5]. Figure 2 shows, separately for the e+jets and µ+jets channels, the distributions of the cosine of the helicity angle from the leptonic branch, which are used in the helicity measurements, and the distributions of the corresponding absolute values from the hadronic branch, for comparison purposes. The simulated samples involving top quarks used in the figure were produced using the measured values of the W boson helicity fractions, as determined from the combined ℓ+jets fit. Left-handed W bosons tend to populate the region at cos θ* ≈ −1, where the charged lepton overlaps with the b quark. However, the angular separation requirement between leptons and jets removes most of the events near cos θ* = −1. Very few events are expected in the region preferred by right-handed bosons, near cos θ* = +1. However, due to resolution effects, the reconstructed distribution does not fall as rapidly as expected in that region, where the charged lepton and b quark have opposite directions. For these reasons, the shape of the reconstructed cos θ* distribution differs from that expected in the SM (Fig. 1). These features are well reproduced by the simulation and taken into account in the measurement.

Table 2: Measurements of the W boson helicity fractions from lepton+jets final states in tt decays. The helicity fractions F0 and FL are measured simultaneously and are strongly anticorrelated, as indicated by the correlation coefficient ρ0,L, because FR is derived from the unitarity condition.

Using these results, limits on anomalous couplings are obtained by fixing the two vector couplings in Eq. (2) to their SM values, VL = 1 and VR = 0, and choosing the tensor couplings, Re(gL) and Re(gR), as free parameters. The combined ℓ+jets measurement of the W boson helicity fractions F0 and FL is reinterpreted in terms of the tensor couplings, Re(gL) and Re(gR), using the relationships between the W boson helicity fractions given in Ref. [8]. The W boson helicity measurements are displayed in the (F0, FL) plane in Fig. 3 (left), together with their one-dimensional statistical (inner-tick mark) and total (outer-tick mark) error bars. The full two-dimensional confidence level (CL) contours corresponding to 68% (dashed line) and 95% (solid line) probabilities are also displayed for the combined measurement. The SM prediction is shown as a star and lies within the 68% CL contour. The corresponding regions in the (Re(gL), Re(gR)) plane, allowed at 68% (dark contour) and 95% CL (light contour), are shown in Fig. 3 (right), together with the SM value. They are derived from Fig. 3 (left), using the relationships between the W boson helicity fractions and the anomalous couplings given in Ref. [8]. A region near Re(gL) = 0 and Re(gR) ≠ 0, allowed by the fit but excluded by the CMS single top quark production measurement [47], is not shown. If the right-handed component FR is constrained to zero, consistent with the SM within the precision of the current measurement, the combined ℓ+jets measurement amounts to F0 = 0.661 ± 0.006 (stat) ± 0.021 (syst). In this case, FL is obtained via the unitarity constraint and yields FL = 0.339 ± 0.006 (stat) ± 0.021 (syst).
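Because FR is fixed by unitarity, its value and uncertainty follow directly from the measured F0 and FL and their correlation: since FR = 1 − F0 − FL, var(FR) = var(F0) + var(FL) + 2 cov(F0, FL). A minimal sketch using the combined values quoted in the Summary below (total uncertainties formed by adding statistical and systematic components in quadrature) approximately reproduces the quoted FR and its uncertainty.

import numpy as np

# Combined values from the Summary; total uncertainty = stat (+) syst in quadrature.
f0, sig_f0 = 0.681, np.hypot(0.012, 0.023)
fl, sig_fl = 0.323, np.hypot(0.008, 0.014)
rho = -0.87  # total correlation between F0 and FL quoted in the text

fr = 1.0 - f0 - fl  # unitarity: F0 + FL + FR = 1
# FR = 1 - F0 - FL, so var(FR) = var(F0) + var(FL) + 2*rho*sig_F0*sig_FL
sig_fr = np.sqrt(sig_f0**2 + sig_fl**2 + 2*rho*sig_f0*sig_fl)
print(f"FR = {fr:.3f} +/- {sig_fr:.3f}")  # ~ -0.004 +/- 0.014

The strong anti-correlation is what makes the FR uncertainty much smaller than the naive quadrature sum of the F0 and FL uncertainties.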
Summary

A measurement of the W boson helicity fractions in top quark pair events decaying into the e+jets and µ+jets channels has been presented, using proton-proton collision data at √s = 8 TeV corresponding to an integrated luminosity of 19.8 fb−1. The helicity fractions F0 and FL are measured with a precision of better than 5%, yielding the most accurate experimental determination of the W boson helicity fractions in tt processes to date. The measured W boson helicity fractions are F0 = 0.681 ± 0.012 (stat) ± 0.023 (syst), FL = 0.323 ± 0.008 (stat) ± 0.014 (syst), and FR = −0.004 ± 0.005 (stat) ± 0.014 (syst), with a correlation coefficient of −0.87 between F0 and FL, and they are consistent with the expectations of the standard model.

Acknowledgments

We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS (Belgium); and others. Individuals have received support from the Marie-Curie programme, the European Research Council, and EPLANET (European Union); the Leventis Foundation; and the Alfred P. Sloan Foundation, among others.
Migration as a Window into the Coevolution between Language and Behavior

Understanding the causes and consequences of language evolution in relation to social factors is challenging, as we generally lack a clear picture of how languages coevolve with historical social processes. Research analyzing the relation between language and socio-economic factors relies on contemporaneous data. Because of this, such analysis may be plagued by spurious correlation concerns arising from the historical coevolution of language and behavior and the dependence of their relationship on the institutional environment. To solve this problem, we propose migrations to the same country as a microevolutionary step that may uncover constraints on behavior. We detail strategies available to other researchers by applying the epidemiological approach to study the correlation between sex-based gender distinctions and female labor force participation. Our main finding is that language must have evolved partly as a result of cultural change, but also that it may have directly constrained the evolution of norms. We conclude by discussing implications for the coevolution of language and behavior, and by comparing different methodological approaches.

The Methodological Challenge

Disentangling whether language influences the evolution of society, whether social factors impact language evolution, or whether they are independent of each other is a daunting challenge. Indeed, it requires ruling out spurious correlations in cross-cultural linguistic analysis and addressing a fundamental problem of identification. As Roberts and Winters (2013) highlight, it may be inappropriate to simply treat languages as independent data points because they are related by both vertical and horizontal transmission mechanisms. For instance, sharing a common ancestor (language families) or spillovers via contact with neighboring languages in the past (linguistic areas) may generate spurious correlations between language and behavior. More concretely, this hinders our understanding of whether linguistic characteristics reflect changes in socio-economic relations and culture, whether they evolve independently, or even whether they directly constrain and influence behavior. Roberts et al. (2015) demonstrate that cross-cultural correlations involving languages may be spurious once these language dependencies are accounted for, and propose a series of empirical tests to help address these features of language. In this paper, we consider a further methodological complication, which arises when studying relationships between language structure and socioeconomic behavior: the potential for these associations to depend on the surrounding environment. That is, language may coevolve with institutional constraints. An example is illustrative. Consider the correlation between future time reference (FTR) in language and the propensity to save, as examined in Chen (2013) and Roberts et al. (2015). Assume that a correlation between the two exists; that is, speakers of languages that exhibit a stronger FTR have a higher propensity to save. A task such as saving does not occur in a vacuum. Rather, observed saving behaviors depend on the existence of a liquid and stable financial system, regardless of individual preferences. Should such a system not exist (or should it be highly inefficient), a higher propensity to save may translate into higher investment in non-financial assets such as cattle, which may not be observable to the researcher.
An empirical analysis of the relationship between languages' FTR and (financial) savings behavior could then falsely conclude that there is no relationship. Hence, it is possible for the estimated magnitude and significance of observed correlations between linguistic and socioeconomic behaviors to depend on the institutional environment within which individuals operate.

Our Proposal

We propose a new methodology to address this component of the identification problem: the application of the epidemiological approach. This approach originates with epidemiologists, who compare immigrants to natives in order to isolate the contribution of genetic factors from the influence of correlated environmental factors. The idea is to use immigrant populations to study the relationship between linguistic features and non-linguistic choices or individual outcomes that may evolve under a common institutional environment. As an example, we study the labor market decisions of immigrants in the US. These migrants speak languages that exhibit varying levels of grammatical gender distinction. Theory suggests that we should expect women speaking languages that contain genders based on biological sex to participate less intensively in formal labor markets and instead to adopt more traditional gender roles such as work within the home (Hicks et al. 2015). The empirical strategy we propose allows researchers to control for linguistic coevolution, the institutional setup of the host country, and unobservable cultural influences acquired in the origin country. This strategy draws its identification from migrants originating from the same country but speaking languages with varying structure. We empirically test this hypothesis on a sample of 675,000 immigrants in the U.S. from 156 countries, speaking 47 languages. We show that this approach is compatible with that of Roberts et al. (2015), which controls for language relatedness. In particular, it is feasible to allow both the intercept and the slope of the relationship between language structure and behavior to vary. The rest of the paper is organized as follows. Section 2 presents the epidemiological approach. Section 3 presents an application. Section 4 concludes.

The Epidemiological Approach

Epidemiologists rely on the comparison of immigrant and native populations in order to isolate the contribution of genetic factors from the influence of environmental factors. This approach has been extensively applied in economics research (Fernandez, 2007). Fundamentally, this approach implies studying variation across first- and second-generation migrants to investigate the impact of their culture and disentangle its effect from the institutional and political environment of the host country. We propose that extending this approach to study language correlations with cultural and socio-economic outcomes is a fruitful avenue for future research. Studying the behavior of migrants allows the researcher to compare individuals that evolve in a common institutional environment. As a result of the shared environment, incentives regarding their socioeconomic behavior are held constant across individuals. For linguistics specifically, it is possible to undertake a comparison of individuals who share the same country of origin but speak different languages. Exploiting this source of heterogeneity allows researchers to control for a wide range of unobservable factors from both the home and the host country.
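In practice, the identification just described requires origin countries whose emigrants speak languages that differ in the feature of interest. A minimal sketch of such a check is shown below; the data frame and all column names are hypothetical stand-ins for ACS-style microdata, not the actual dataset used here.

import pandas as pd

# Hypothetical ACS-style microdata; values and column names are illustrative.
df = pd.DataFrame({
    "origin_country": ["MX", "MX", "IN", "IN", "IN", "CH"],
    "language":       ["Spanish", "Nahuatl", "Hindi", "Tamil", "Hindi", "German"],
    "sex_based_gender": [1, 0, 1, 0, 1, 1],   # WALS-style feature dummy
})

# Identification comes from origin countries whose migrants speak languages
# that differ in the linguistic feature of interest.
variation = df.groupby("origin_country")["sex_based_gender"].nunique()
identifying = variation[variation > 1].index.tolist()
print("countries providing within-country variation:", identifying)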
We provide an example to illustrate the set of strategies available to researchers when using this methodology. In particular, the next section presents an analysis of female labor participation among immigrants to the U.S. and its correlation with sex-based grammatical distinctions in language. This application also highlights the richness of available census data concerning linguistic diversity both across and within countries of origin.

Data

Our sample comes from the 2007-2011 American Community Survey for the US (ACS, 5% sample) and consists of migrants who report speaking a language other than English in their own home. This provides 675,000 observations on migrants from 156 countries speaking 47 different languages. For each migrant, we have information about their labor market status, country of origin, language spoken in the home, and various other socioeconomic indicators such as income, education, marital status, level of English proficiency, and time since migration. Our outcome variable is a dummy variable equal to 1 if the individual is in the labor force and 0 otherwise. To quantify the presence of gender distinctions in language, we assign a dummy variable equal to 1 if the language has a gender system based on biological sex, and 0 if not. We obtain this information from the World Atlas of Language Structures (Dryer & Haspelmath 2013). While most languages around the world have a sex-based gender system, migrants to the US come from sufficiently diverse countries that the sample offers wide variation in language structure. In particular, the average value of our linguistic dummy is 0.81 with a standard deviation of 0.39.

Empirical Strategies Available in the Epidemiological Approach

A further key advantage of the epidemiological approach is that it allows the researcher to employ fixed effects strategies, which we illustrate in the following example. As a benchmark, we start by assessing the simple correlation between labor participation and sex-based grammatical distinctions in language. Because we are interested in the gap in participation between women speaking languages with different grammatical structures, we include an indicator variable equal to 1 if the individual is a woman, and an interaction between that indicator variable and our language variable. The coefficient of interest is this interaction term: it measures the additional impact on labor participation of being a female migrant speaking a language with a sex-based gender system compared to being a female migrant speaking a language without a sex-based gender system. This effect is in addition to the estimated impact of being female compared to being male (captured by the female coefficient alone), and in addition to the direct impact of speaking a sex-based language alone (regardless of gender). Additionally, we control throughout the analysis for the individual's income and education levels, English proficiency, marital status, and state of residence, as these factors may influence economic participation rates. In this setting, the interaction term compares women who have the same socioeconomic profile and live in the same state, but who speak a language with a different grammatical structure. We use a simple OLS regression model. This simplifies the interpretation of the results, which are virtually the same as with a logit regression model. Column (1) in Table 1 presents these results.
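A minimal sketch of this benchmark specification is given below, using the statsmodels formula interface on simulated data; all variable names, the data-generating numbers, and the reduced control set are illustrative, not the actual ACS fields or the full control list used in Table 1.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "sex_based_gender": rng.integers(0, 2, n),
    "income": rng.normal(50, 10, n),
    "education": rng.integers(8, 20, n),
    "state": rng.choice(["CA", "TX", "NY"], n),
})
# Simulated outcome with a built-in negative female x language interaction.
df["in_labor_force"] = ((0.8 - 0.17 * df["female"]
                         - 0.06 * df["female"] * df["sex_based_gender"]
                         + rng.normal(0, 0.3, n)) > 0.5).astype(int)

# Benchmark (column (1)): 'female * sex_based_gender' expands to both main
# effects plus their interaction, which is the coefficient of interest.
res = smf.ols(
    "in_labor_force ~ female * sex_based_gender + income + education + C(state)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(res.params["female:sex_based_gender"])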
A first strategy when using the epidemiological approach is to use country of origin fixed effects. This allows us to capture the role of norms of behavior, such as gender roles, acquired prior to migration that are specific to an immigrant's country of origin. These fixed effects capture unobservable cultural influences on the migrants' behavior. Such a strategy allows us to effectively compare the labor participation of women with similar socioeconomic backgrounds, living in the same US state, and coming from the same country, but speaking a language with a different grammatical structure. The results are presented in column (2) of Table 1. Second, the epidemiological approach permits the use of a set of fixed effects to address language relatedness. Indeed, languages may be related in two ways: through a common ancestor (vertical dependence) and through language contact (horizontal dependence), as discussed by Roberts et al. (2015). To account for the impact of language relatedness, we include a set of fixed effects for each language's family and linguistic area (Nichols et al. 2013). This allows the correlation between gender in language and labor market participation to have a different intercept across languages that pertain to a different language family and linguistic area. Column (3) of Table 1 includes language family and language area fixed effects. Third, Roberts et al. (2015) argue that the strength of the correlation between a linguistic trait and a non-linguistic variable may itself be dependent on language relatedness. We can control for this dependence by including a set of interactions between each language's family and linguistic area and the linguistic feature of interest itself. This allows the correlation to have a different slope across languages that pertain to a different language family and linguistic area. Column (4) of Table 1 presents the results. A final strategy is to include fixed effects of the country of origin interacted with the subpopulation that the linguistic trait is supposed to affect. This approach depends on the particular nature of such a trait. In our example, the main assumption is that women speaking a language with a sex-based gender system are less likely to participate in the labor market, due to gender roles embedded in and/or caused by the language structure. If so, it should also be the case that these women behave differently from men in the country of origin. Therefore, an even more stringent strategy is to control for country of origin interacted with female fixed effects. With this strategy, we can control for characteristics of the origin country that are specific to women, thereby encapsulating the origin country characteristics that are most relevant to the question at hand. Column (5) of Table 1 presents the results.

Notes: Estimates are survey weighted. Sample includes all immigrants aged 16 and above who report speaking a language other than English in the home. Additional controls include time since immigration, household income, household size, age, age squared, number of children, log wages, and indicators for survey wave, level of English language proficiency, marital status, student status, race and ethnicity, education level, and state of residence. Robust standard errors are in brackets. Source: Results calculated using the 2007-2011 ACS. *** Significant at the 1 percent level. ** Significant at the 5 percent level. * Significant at the 10 percent level.
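The successive specifications in columns (2)-(5) can be expressed as a ladder of formulas extending the benchmark sketch above. The listing below is a hedged illustration: each formula would be fit exactly as in the earlier sketch, smf.ols(spec, data=df).fit(cov_type="HC1"), on a data frame that also carries hypothetical origin_country, lang_family, and lang_area columns.

controls = "income + education + C(state)"  # abbreviated control set
specs = {
    # column (2): origin-country fixed effects (intercept shifts by country)
    "col2_origin_fe":
        f"in_labor_force ~ female * sex_based_gender + C(origin_country) + {controls}",
    # column (3): language family and linguistic area fixed effects
    "col3_family_area_fe":
        f"in_labor_force ~ female * sex_based_gender + C(origin_country)"
        f" + C(lang_family) + C(lang_area) + {controls}",
    # column (4): family/area interacted with the feature (slope shifts)
    "col4_family_area_slopes":
        f"in_labor_force ~ female * sex_based_gender + C(origin_country)"
        f" + C(lang_family) * sex_based_gender + C(lang_area) * sex_based_gender"
        f" + {controls}",
    # column (5): origin-country x female fixed effects
    "col5_origin_x_female":
        f"in_labor_force ~ female * sex_based_gender + C(origin_country) * female"
        f" + {controls}",
}
for name, spec in specs.items():
    print(name, "=>", spec)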
Results

The results in column (1) show that, compared to male migrants speaking a language lacking a sex-based gender system, male migrants speaking a language that has a sex-based gender system are 6.0 percentage points more likely to be in the labor force. In comparison, similar female migrants are 6.3 percentage points less likely to be in the labor force. This discrepancy is in addition to the average gap in labor force participation between male and female migrants of 17.3 percentage points. Controlling for the country of origin (column (2)) and language relatedness (columns (3) and (4)) alters the magnitude slightly but does not remove the significance of the results, suggesting that there is not much heterogeneity in the relationship between labor participation and language across origin countries, linguistic families, and linguistic areas in this context. Finally, controlling for the interaction between country of origin and female reduces the magnitude of the coefficient of interest. Women speaking a language with a sex-based gender system are 2.9 percentage points less likely to be in the labor force than similar women speaking a language without a sex-based gender system. The coefficient on the interaction term is still significant at the 1% level.

Implications for the coevolution of language and behavior

Our application and analysis have centered on presenting a set of simple yet powerful strategies that the epidemiological approach makes possible. Studying migrant populations has several additional advantages that researchers interested in the study of language evolution and its relation to non-linguistic phenomena may find useful. Our example demonstrates that the correlation between gender in language and female labor force participation is robust to controlling for country of origin and for language relatedness. Yet the magnitude of the coefficient is substantially reduced when controlling for female-specific country fixed effects. This implies that language must have evolved partly as a result of cultural change, but also that it may have directly constrained the evolution of norms, even if to a smaller extent.

External versus Internal Validity of Different Approaches

While proposing a series of empirical tests to be applied to cross-cultural data, Roberts et al. (2015) conclude that "experiments or case-studies would be more fruitful avenues for future research on this specific topic, rather than further large-scale cross-cultural correlational studies." We agree that there is much promise in experimental research. At the same time, while laboratory experiments arguably have strong internal validity, they may not perform well in terms of external validity. The non-generalization of results from lab experiments has been the subject of intensive research in economics (e.g., Stoop et al., 2012; Abeler & Nosenzo, 2014). At the other extreme, cross-cultural studies perform well in terms of external validity by nature, but they are more likely to suffer from internal validity problems, as Roberts et al. (2015) make clear. We thus place the epidemiological approach in the middle ground in terms of both external and internal validity. While the environment is not perfectly controlled by the researcher, migrants speaking different languages are observed within the same institutional environment.
On the other hand, while findings are more generalizable than for lab or even framed field or natural experiments, migrants are a selected pool that may differ from native populations. While all approaches have advantages and disadvantages, the epidemiological approach provides researchers with an opportunity that should not be neglected. This is because (1) it provides a middle ground between cross-cultural correlations and experiments in terms of validity, and (2) it provides a rich new setting with which to test the relation between language evolution and non-linguistic phenomena.
ASSESSMENT MODEL OF LEVELS FOR WINTER ROAD MAINTENANCE

The limited funding for the road industry leads to the need to economize in the planning of road network maintenance and to identify the appropriate priorities among the activities with the greatest benefit for society. The level of maintenance is a direct assessment of the road operation and maintenance service provided to road users; it directly affects both road maintenance costs and road user costs: the better the road maintenance, the fewer expenses road users incur, and vice versa. Insufficient road maintenance in the winter time not only increases the danger of traffic accidents but also worsens driving conditions, increases fuel consumption and vehicle depreciation, and makes transportation more expensive. Many studies have shown that the current choice of maintenance levels in the winter time, taking into account only the road category and traffic volume, ensures neither the indicators achieved by the most advanced countries nor the functional purpose of the roads. The principle of minimal expenses for society should be the main criterion in identifying the optimal levels of winter road maintenance. The experience of Lithuania and foreign countries helped in creating the model for the assessment of winter maintenance levels for Lithuanian roads of national significance, which can be applied in other countries as well. This model could be an effective tool for the selection of the optimal maintenance levels, one that would economically substantiate the winter road maintenance strategy that best corresponds to the needs of society.

Introduction

An effective road transport system encourages economic growth; however, it also inevitably affects the environment, both during construction and modernization and during road maintenance. The major part of road maintenance funds is devoted to winter maintenance. Thus, it is of crucial importance to distribute them in the most rational way. To optimize the selection of road maintenance levels in the winter time, it is necessary to consider not only the economic calculations but also the environmental and social benefits and costs. A significant economic effect is achieved as a result of road funding and maintenance planning calculations, which assist in solving the following problems:
− how to organize and distribute funding for road maintenance;
− which road maintenance levels should be applied and on which roads;
− what economic benefit for society stems from winter road maintenance;
− what amounts of funds are necessary currently and will be needed in the future to achieve and continuously maintain the established optimal road network maintenance level in the winter time.
Many studies have shown the great effect of road maintenance on road safety (Andrey et al. 2001; Fu et al. 2006; Hanbali, Kuemmel 1992, 1993; Hermans et al. 2006; Nixon, Qiu 2008; Norrman et al. 2000; Qin et al. 2007; Shankar et al. 1995). On a road with snow, the accident risk is two times higher than on a bare, dry road, but ten to thirty times higher if a bad road condition (snow or ice) occurs unexpectedly, without warning, and where winter conditions seldom occur (Andrey et al. 2001; Eisenberg, Warner 2005; Knapp et al. 2002; Nixon, Qiu 2008; Velavan 2006). A five-year analysis of accident rate data on the roads of national significance established that during the warm season the number of representative road accidents was reduced by 36%.
However, during the cold season of the same period, the situation showed no tendency to improve. Therefore, it is necessary to evaluate the quality of the performed winter road maintenance as well as to optimize the winter road maintenance selection methods. The analysis of the experience of developed countries in the area of winter road maintenance (WRM) has shown that most of them apply at least three WRM levels (I - high level, II - average level, III - low level). Each maintenance level is meant for a different road maintenance type, mostly based on traffic intensity and road category. Lithuanian national road maintenance in the winter time is currently selected without taking into consideration all the factors that determine the needs of society. Applying winter road maintenance levels on the basis of only the road category and traffic volume ensures neither the indicators achieved by the most advanced countries nor the functional purpose of the roads. In Lithuania, the quality of WRM suffers from the limited funding, and the WRM levels are selected without taking into consideration all the factors that determine the needs of society. Thus, it is appropriate to create a WRM quality assessment system, which would assist in establishing the economic feasibility of the need for funds and would optimize the application of the WRM levels. The analysis of WRM practice determined that the selection of economically feasible WRM levels requires the assessment of the functional purpose of the roads. The principle of minimal expenses for society is the main criterion for identifying the optimal levels of WRM. The Swedish National Road and Transport Research Institute (VTI), in collaboration with the Swedish Road Administration, developed the Winter Model, a unique tool allowing the creation of an economically based winter maintenance strategy by selecting the road maintenance levels leading to the most significant economic benefit to society. Figure 1 shows that 60% of the total expenses incurred by society consist of costs related to travel time. Almost 15% is made up of road accident losses, the costs of the remaining social and economic factors do not reach 10%, while the expenses of the WRM represent only up to 0.5%. These results lead to the conclusion that currently the most important social and economic factors are travel time and road accidents, which jointly account for more than 75% of all the expenses incurred by society in Sweden (Berglund 2008; Wallman 2004). In summarizing the practices of road infrastructure asset management and road maintenance in different countries, as well as the possibilities for quality improvement, it is worth noting that despite the differences in details, complexity, and application to a specific country's road network in the systems, models, and procedures applied, the general view of the goal to be achieved is the same, i.e. to ensure road infrastructure functioning that is long-term, economically efficient, and meets social and environmental needs. As the social and ecological requirements for road infrastructure become more evident, while road maintenance funding is extremely limited, it is important to assess the social and economic aspects in selecting the road maintenance levels.
The analysis of the scientific works on WRM indicates that the condition of the surfaced portion of the roadway in winter has the greatest influence on the economic benefit. The technologies and materials used for the maintenance of this particular portion of the road, as well as the set requirements (optimal selection of the maintenance levels in the road network), have the most significant impact on the potential maintenance quality improvement. Since the technologies used in Lithuania are similar to those used abroad, it is more sensible to research the efficiency and appropriateness of the alternative materials under the conditions specific to Lithuania (comparing them with the traditional salts). However, the main task is to create a tool that could assess the quality of the applied maintenance means (surface cleaning and salting) as well as the economic benefit provided to road users. The purpose of this paper is to suggest a model of assessment of winter road maintenance levels that would economically justify the need for funds and could be an effective tool for the selection of the optimal maintenance levels, taking into account the factors having the most significant impact on the direct expenses of road users.

Winter road maintenance experimental research and analysis of the results

The market continually offers different new materials for salting or sanding the roads with seemingly greater effectiveness and lower negative impact on the environment. However, their price in comparison with the traditional chloride salts is dozens of times higher, while their declared effectiveness does not always prove itself in practice. Therefore, it is necessary to carry out experimental research to evaluate the efficiency of the alternative materials, to compare it with the currently used materials, and to offer recommendations for the application of the materials on the national roads in Lithuania. The analysis of the experience of Lithuania and foreign countries led to the suggestion of a research methodology for the efficiency of chemical slipperiness-reducing materials (SRMs), which allows an objective assessment of the chemical materials alternative to traditional chloride salts under the meteorological and traffic conditions in Lithuania (Cuelho, Harwood 2012; Chen et al. 2009; Flintsch et al. 2009; Malmivuo 2011; Nakatasuji et al. 2005). To identify the most efficient SRMs under the road weather conditions in Lithuania, a preliminary research plan was compiled with two stages:
− experimental research in the laboratory (Fig. 2);
− experimental research on the tested sections (Fig. 3).
Based on the developed research methodology for road SRMs, research on five chemical materials was completed, including: sodium chloride (NaCl), calcium chloride (CaCl2), magnesium chloride (MgCl2), a mixture of sodium and calcium modified chlorides under the commercial name of Icemelt (NCMC), and a mixture of sodium acetate and sodium formate under the commercial name of Nordway (NANF) (Fig. 2). Based on the assessment of the laboratory research results, the three most effective SRMs (NaCl, CaCl2, NANF) were selected and tested under real-life weather and traffic conditions on sections of roads of national significance (Fig. 3).
The summarized results of the experimental laboratory and field research are presented in Table 1, in which the materials' efficiency in each experiment is distributed into four categories (from high efficiency, marked by "+", to no efficiency, marked by "−"). Note: + high efficiency, ± average efficiency, ∓ low efficiency, − no efficiency, * has not been studied. In summarizing the experimental research results, similar tendencies were noticed in the laboratory and field SRM efficiency research:
− in the case of surface temperatures ranging from −2 °C to −6 °C, under the effect of NaCl and CaCl2 the friction coefficient varies in the same interval, from 0.3 to 0.8, and reaches the same values after an identical time interval, while in the case of surface temperatures lower than −9 °C all the researched materials are only insignificantly effective;
− when the surface temperature reaches −15 °C and falls below it, the ice can be melted only using CaCl2 and MgCl2 salts; however, even their efficiency is considerably low, while all the other researched materials are not efficient at all.
NaCl efficiency compared to NANF is higher down to −7 °C, both in respect of the ice and snow melting speed and in the value of the friction coefficient. The assessment of the chemical materials for WRM tested in different aspects, the additional evaluation of the effect of these materials on the environment, and the consideration of their price lead us to the conclusion that the most suitable choice for the highest maintenance level on roads is the NaCl and CaCl2 salts currently in use. It is noteworthy that prior to the application of the described and other alternative materials, it is necessary to complete a thorough analysis of the requirements for their supply, transportation, storage, and use (technical capacity) as well as other specific features, since some of them are extremely sensitive to the humidity of the environment and are transported only in sealed vacuum bags, etc. Moreover, taking into account their considerably high price (adding up all the expenses related to their purchase until application), it is feasible to perform a cost-benefit analysis, since the practices of foreign countries show that the scope of use of these materials is very small.

The theoretical model of assessment of winter road maintenance levels

The evaluation of the analysis of literature sources performed in the first chapter leads to the compilation of the theoretical model of assessment of WRM levels (Fig. 4). The theoretical model is based on the main principles of the Winter Model developed by VTI in collaboration with the Swedish Road Administration. It is impossible to fully adopt the Swedish Winter Model, as it is adapted specifically to the Swedish road network. To identify the economically optimal maintenance levels, it is necessary to calculate certain economic indicators: maintenance prices and road users' costs. It was established that the economically optimal maintenance levels could be calculated for only one road element, the surface. The most significant impact on road users' expenses in the winter time is made by the condition of the surfaced portion of the roadway. Therefore, the application of the developed theoretical assessment model of WRM levels is dedicated to this particular element of the road.
For the sake of an economically feasible assessment of WRM levels, the functional purpose of the Lithuanian roads of national significance was taken into consideration. In this case, the quantification of public spending is limited to the calculation of the average values for all the road functional groups, taking into account the different maintenance levels. To identify the economically feasible road maintenance levels and to simplify the economic calculations and assessments, the distribution of the Lithuanian road network of national significance according to functional purpose, presented in the VGTU Road Research Institute scientific research report Functional Analysis of the Road Network Elements and Preparation of the Development Scheme, was used (Fig. 5). For the compiled theoretical model of WRM level quality assessment, the important estimate values were identified, namely those making the most significant impact on the direct expenses of road users related to traffic safety, travel time, vehicle operational costs, and the environment. Based on the minimal public spending principle, to identify the optimal WRM level it is necessary to:
− calculate the maintenance price, or the price of the specific maintenance work type, of the certain road element for different maintenance levels;
− identify the types of road user costs (vehicle operational costs, time, road accident expenses) that are influenced by the road element maintenance or specific maintenance work type;
− calculate the road user costs for different maintenance levels of the certain road element or specific maintenance work type;
− identify the maintenance level for which the aggregate road element maintenance and road user costs would be the lowest; that maintenance level will be the optimal one for that particular element or maintenance work type.
In other words, in the case of maintenance of this particular road element at a certain level, the public spending will be the lowest, whereas in the case of maintenance of the same element at a different (even higher) level, the society will incur losses.
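The selection rule in the list above reduces, for each road element or work type, to choosing the level with the lowest aggregate cost. A minimal sketch of this rule follows; all cost figures are illustrative placeholders, not values from the study.

# Minimal sketch of the minimal-public-spending rule: for one road element,
# pick the WRM level minimizing maintenance cost plus road user costs.
# All figures below are illustrative placeholders (EUR per km per season).
costs = {
    # level: (maintenance cost, accident losses, travel time costs)
    "I":   (9000, 20000, 180000),
    "II":  (6000, 26000, 190000),
    "III": (3000, 40000, 215000),
}

def total_public_cost(level):
    maintenance, accidents, travel_time = costs[level]
    return maintenance + accidents + travel_time

optimal = min(costs, key=total_public_cost)
print("optimal level:", optimal, "total cost:", total_public_cost(optimal))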
To get an economically exact result when identifying the factors influencing societal needs, it is important to know all the factors having an impact on the modelled quantity. However, when identifying the environmental and vehicle operational cost estimates, a problem arose from the lack of reliable statistical data on these factors. Moreover, in Lithuania, the assessment of the mentioned factors would currently require immense material and physical resources. Thus, they are not assessed in this paper.

Application of the assessment model of levels for winter road maintenance on Lithuanian national roads

After the analysis of the experimental research results, the evaluation of the maintenance levels of the roads of national significance in the winter time was made using the developed model. The benefit components of the winter maintenance of the roads of national significance were evaluated, i.e. the road accident loss and travel time savings, which are directly dependent on the number of beneficiaries (road users). The higher the volume of the road traffic, the more beneficiaries there are and, consequently, the higher the benefit provided by the road as well as the benefit provided by road maintenance. It is common practice to compare the activities under analysis, their costs, and consequences with a certain zero-type alternative of the absence of activity. This principle is also applied in calculating the benefit of the WRM: the benefit is calculated as the reduction of vehicle travel time, vehicle operational costs, and road accident losses achieved because the WRM is performed. The higher the maintenance level, the lower are the travel expenses (and losses) and the greater are the savings. However, it would be difficult to imagine roads of national significance where WRM would not be performed at all; such a catastrophic scenario would hardly be possible. While determining the degree of economic effect made by the road maintenance quality, a cost-benefit analysis was performed according to three WRM scenarios (Fig. 6):
− basic: the WRM levels applied during the winter seasons of 2011-2014;
− minimal: provided that all the roads of national significance are maintained by applying the lowest WRM levels;
− optimal: provided that the winter maintenance levels of the roads of national significance are applied with consideration of their functional purpose.
The assessment of the experimental research results in Chapter 2 established that the traditional chloride salts were the most effective for the Lithuanian weather and traffic conditions (Bulevičius et al. 2014; Laurinavicius et al. 2016; Ružinskas et al. 2016). In the cost-benefit analysis for the three compiled WRM scenarios, the WRM technology and chemical material costs applied in Lithuania in 2011-2014 were employed (Table 2). The comparison of the alternative WRM scenarios with the levels applied during the 2011-2014 winter seasons gave the following results (Table 3 and Fig. 7):
− according to the minimal WRM scenario, more than 160 representative road accidents would take place, and the total societal losses would be 122.03 mln EUR higher compared to the basic scenario;
− according to the optimal WRM scenario, 13 fewer representative road accidents would take place, and the total societal losses would be reduced by 13.43 mln EUR compared to the basic scenario.
The cost-benefit analysis results showed that the road maintenance costs in the winter seasons of 2011-2014 made up only approximately 4% of the total societal expenses, compared with the expenses incurred by society through road accidents and travel time, which constitute 9% and 87%, respectively (Fig. 8). The obtained modelling results prove that the offered assessment model of winter road maintenance levels makes it possible to determine the economically feasible road maintenance strategy for Lithuania's roads of national significance.
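The scenario comparison above can be reproduced arithmetically from the reported differences; a small sketch is given below, where the basic-scenario total is a hypothetical placeholder, since the study reports only the deltas against the basic scenario.

# Comparison of WRM scenarios against the basic one. Only the deltas below
# are reported in the study; the basic total is an illustrative placeholder.
basic_total_mln_eur = 1000.0          # hypothetical total societal cost
deltas = {                            # change vs. basic scenario, mln EUR
    "minimal": +122.03,               # reported increase in societal losses
    "optimal": -13.43,                # reported societal savings
}
for scenario, delta in deltas.items():
    print(f"{scenario}: {basic_total_mln_eur + delta:.2f} mln EUR "
          f"({delta:+.2f} vs basic)")

Conclusions

1. Lithuanian national road maintenance in the winter time is currently selected without taking into consideration all the factors that determine the needs of society. Applying winter road maintenance levels on the basis of only the road category and traffic intensity ensures neither the indicators achieved by the most advanced countries nor the functional purpose of the roads.
2. The analysis of the experience of Lithuania and foreign countries led to the suggestion of a research methodology for the efficiency of chemical slipperiness-reducing materials, which allows an objective assessment of the chemical materials alternative to traditional chloride salts under the meteorological and traffic conditions of Lithuania. Field and laboratory research on the efficiency of the materials was performed based on the suggested methodology.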
The research results showed that when the air temperature is:
− down to −6 °C, the most effective material is the traditional sodium chloride;
− lower than −9 °C, all the tested materials are only insignificantly effective.
3. The assessment of the analysed chemical materials for winter road maintenance tested in different aspects, the consideration of the effect of these materials on the environment, and the review of their price lead to the conclusion that the most suitable choice for Lithuanian conditions is the sodium and calcium chloride salts.
4. The calculation of the total public expenses during the winter seasons of 2011-2014 determined that the road maintenance costs made up only up to 4% of the total expenses, while the expenditure related to road accident losses and travel time costs reached 9% and 87%, respectively, of the total calculated public costs. The most economically rational approach is to apply the winter road maintenance levels that minimize these total public costs. Based on the obtained modelling results, if the maintenance levels had been applied according to the optimal scenario during the 2011-2014 period, up to 13 representative road accidents would have been avoided and the total public expenses would have been reduced by more than 13 mln EUR. If the minimal winter road maintenance scenario were implemented, more than 160 representative traffic accidents would take place, and the total societal losses would be 122 mln EUR more compared to the basic scenario.
5. To get an economically exact result when identifying the factors influencing societal needs, it is important to know all the factors having an impact on the modelled quantity. However, when performing the environmental and vehicle operational cost estimations, a problem arose from the lack of reliable statistical data on these factors. Moreover, in Lithuania, the assessment of the mentioned factors would currently require immense material and physical resources. Thus, they are not assessed in this paper. In the future, it would be appropriate to evaluate these factors; this process requires systematic accumulation and processing of the data related to environmental and vehicle operational cost estimates. These factors are important because of the clear tendencies showing that in the future such areas as environmental protection, climate change, and energy saving will take priority positions and their significance for the economy and the entire society will only increase.
6. The suggested model of assessment of winter road maintenance levels could be an effective tool for the application of the optimal maintenance levels, one that would economically substantiate the winter road maintenance strategy that best corresponds with the needs of society.
Bioactivity and Toxicity of Senna cana and Senna pendula Extracts

This work investigated the content of total polyphenolic compounds and flavonoids as well as the toxicity and the larvicidal and acetylcholinesterase inhibitory activities of extracts of two medicinal Senna species (Senna cana and Senna pendula). The antioxidant activities of these extracts were also investigated. The ethanol extract of the leaves of S. cana and the ethanol extract of the branches of S. pendula presented the best performance in the DPPH/FRAP and ABTS/ORAC assays, respectively. For the inhibition of acetylcholinesterase, the hexane extract of the flowers of S. pendula presented the lowest IC50 value, while the ethanol extracts of the leaves of S. cana showed the best performance in some assays. The hexane extract of the leaves of S. pendula and the hexane extract of the branches of S. cana were moderately toxic to Artemia salina Leach. In the quantification of phenols and flavonoids, the ethanol extract of the leaves of S. cana presented the best results. The ethanol extracts of the leaves of S. cana were found to be rich in antioxidants, phenolic compounds, and flavonoids. These results indicate the antioxidant potential of the extracts of Senna species and may account for some of the therapeutic uses of these plants.

Introduction

The genus Senna (Fabaceae) includes about 260 species, 200 of which occur in the Americas. Several of these occur in the Brazilian northeastern semiarid region, such as S. martiana and S. spectabilis var. excelsa, which are used in folk medicine to treat colds, as laxatives, and for antioxidant, cytotoxic, and acetylcholinesterase inhibitory activities [1]. Many of these species are reported in the literature to contain anthraquinone glycosides, responsible for the laxative activity, and polyphenolic metabolites such as flavonoids, which are scientifically recognized as having considerable leishmanicidal and antioxidant activity, among other biological activities [1-3]. With the current spread of diseases caused by arboviruses transmitted through Aedes aegypti mosquitoes, such as dengue, zika, and chikungunya, the search for new sources of natural repellents extracted from plants is very important to control outbreaks [11]. Therefore, the aim of this study was to evaluate the pharmacological potential of S. cana HS Irwin and Barneby and S. pendula HS Irwin and Barneby, species native to northeastern Brazil, through different methods of determining antioxidant activity, toxicity, larvicidal activity, acetylcholinesterase inhibition, and flavonoid content (Supplementary Materials (available here)).

Preparation of Plant Extracts. The leaves and branches of Senna cana were collected in Mucugê, Bahia State (−12°57′7900″ S and −41°19′3600″ W, elevation of 400 m). The leaves, branches, and flowers of S. pendula were collected in Cratéus, Ceará State (−5°17′833″ S and −40°67′75″ W, elevation of 300 m) (Figure 2). Both places are in northeastern Brazil. The botanical material was deposited in the Prisco Bezerra Herbarium of the Federal University of Ceará, with respective identification numbers 50297 and 54075. The plant material was dried, weighed, and macerated thoroughly in n-hexane for seven days at room temperature. Afterwards, the hexane extract was filtered and concentrated in a rotary evaporator, yielding the hexane extracts. This process was repeated with ethanol to obtain the ethanol extracts. The residues were dried and stored at 27 °C. The extracts obtained are shown in Table 1.

Chemical Screening.
The tests were carried out according to the method proposed by Matos [12], using freeze-dried extracts. The main secondary metabolite classes present in the extracts were identified by chemical reactions with specific reagents and the formation of precipitates or color changes. Some of the chemical tests performed are described below.

Test for Detection of Xanthones and Flavonoids. Solutions of the extracts were prepared at a concentration of 5 mg·mL−1 in methanol. Aliquots of 5 mL of this solution were removed and placed in test tubes for chemical testing. Magnesium strips and 4 drops of concentrated HCl were added to the tubes. The presence of flavonoids and xanthones was detected by the appearance and intensification of a red color in the solution.

Test for Detection of Tannins. Solutions of the extracts were prepared in the same way as for the flavonoid test (5 mg·mL−1 in methanol). Aliquots of 5 mL of this solution were removed and placed in test tubes for chemical testing. Distilled water (5 mL) was added, and the solution was filtered to remove any solids, after which 5 drops of FeCl3 were added to attain a concentration of 10%. The formation of a blue color indicates the presence of hydrolyzable tannins, and a green color indicates the presence of condensed tannins.

(Figure 2: occurrence of the species Senna pendula, Crateús, Ceará, Brazil.)

DPPH Radical Scavenging Assay. Solutions of the extracts, or of the positive control, di-tert-butylmethylphenol (BHT), were mixed with the DPPH radical solution, and the absorbance of the reaction was read at 515 nm. The test was performed in triplicate at various concentrations (mg·mL−1) [13]. Absorbance measurements were determined in a Spekol 1100 spectrophotometer.

Folin-Ciocalteu Method. Total phenol content was determined by the spectrophotometric method using the Folin-Ciocalteu reagent and gallic acid as the reference standard [14]. Ethanol extracts (7.5 mg) were dissolved in methanol and transferred to a 25 mL volumetric flask, and the final volume was completed with methanol. An aliquot of 100 µL of the latter solution was shaken with 500 µL of the Folin-Ciocalteu reagent and 6 mL of distilled water for 1 min. After this time, 2 mL of 15% Na2CO3 was added to the mixture and stirred for 30 s. Finally, the solution was diluted to a volume of 10 mL with distilled water. After 2 h of incubation, the absorbance of the samples was measured at 750 nm.

ABTS Radical Scavenging Assay. This method was based on Re et al. [15]. It measures the antioxidant capacity based on the ability of the substances to inactivate the cation radical of the 2,2′-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS•+). Solutions of the extracts were prepared at a concentration of 600 mg·L−1. Aliquots of 10 µL, 20 µL, and 30 µL of these solutions were added to test tubes, and the volume in the first two cases was completed with distilled water to 30 µL (extract and water). In a dark environment, 3 mL of the ABTS•+ radical solution in ethanol P.A., with absorbance preset to 0.70, was added to each test tube. Readings were taken at 734 nm in a spectrophotometer six minutes after addition of the radical. The percentage inhibition of ABTS•+ was determined from the standard curve of trolox, and the results were expressed as TEAC (trolox equivalent antioxidant capacity) µmol·g−1.

Potential Antioxidant FRAP (Ferric-Reducing Antioxidant Power). This method is based on the capacity of metabolites to reduce Fe3+ to Fe2+. When this occurs in the presence of 2,4,6-tripyridyl-S-triazine (TPTZ), the Fe3+/TPTZ complex is reduced to its blue-colored Fe2+ form.
The extracts were prepared in methanol at different concentrations between 0.25 and 1.0 mg/mL. Ten microliters of each extract was first incubated with 30 µL of bidistilled water and 300 µL of FRAP reagent, consisting of 25 mL of acetate buffer (300 mmol·L−1 sodium acetate, pH = 3.6), 2.5 mL of TPTZ solution (10 mmol·L−1 TPTZ in 40 mmol·L−1 HCl), and 2.5 mL of FeCl3, at 37 °C before measurement. A ferrous sulfate calibration curve (0.01-1.0 mmol·L−1) was used, and the results were expressed as mmol of Fe2+ per litre. The reaction was measured at 595 nm in a universal microplate reader (ELx 800, BioTek Instruments, Winooski, Vermont, USA) after a 5 min resting time. Using these regression curves, the EC values were calculated as the concentrations of the antioxidant (expressed as mg·mL−1) giving an absorbance equivalent to a 1 mM Fe(II) solution, according to Pulido et al. [16]. From a stock solution in DMSO, trolox (10 mmol·L−1) was diluted in ORAC buffer solution to a concentration of 20 μmol·L−1. The ORAC buffer contains 75 mmol·L−1 of sodium/potassium hydrogen phosphate at pH = 7.4. The decline of fluorescein was measured at 37 °C at constant intervals of 2 minutes between measurements until completing 122 min, using a CytoFluor 4000 fluorescence microplate reader (excitation wavelength at 530 nm measured every minute for 25 min and emission wavelength at 585 nm measured every minute for 30 min) (Perspective Biosystems, Minnesota, USA). The final ORAC results were calculated using a regression equation between the trolox concentrations and the AUC, expressed as ORAC units, where 1 ORAC unit corresponds to the protection provided by 1 μmol·L−1 of trolox.

Total Flavonoid Contents. The total flavonoid contents of the extracts were determined using the spectrophotometric method described in Quettier et al. [18], where quercetin was used as a standard, in a solution of aluminum chloride. Twenty milligrams of each extract was placed in a test tube, and methanol was added to complete the volume of 50 mL, producing a second extract. Then, 5 mL of this second extract was removed and 0.5 mL of 2% AlCl3 solution was added, after which the volume was completed to 10 mL with 5% acetic acid solution. Following incubation for 30 min, the absorbance of the reaction mixture was measured at λmax = 425 nm with a Femto 700 plus spectrophotometer. The calibration curve was plotted using concentrations of 5, 10, 25, 50, and 75 µg·mL−1 of quercetin.

Evaluation of Larvicidal Activity. This assay was based on Cavalcanti et al. [22]. Solutions of 100, 250, 500, and 1000 µg·mL−1 of all extracts were prepared with water and DMSO. Then, 25 stage-3 Aedes aegypti larvae were added. The solution was left with the larvae for 24 hours; then, the dead larvae were counted and the LC50 of each extract was calculated.

Statistical Analysis. The relation between the phenolic compound and antioxidant variables was determined by linear regression using Excel and Origin 6.0 software.

Chemical Screening. The chemical screening of the hexane extracts LHESC, BHESC, LHESP, BHESP, and FHESP was positive only for anthraquinones and triterpenes. However, for the ethanol extracts LEESC, BEESC, LEESP, BEESP, and FEESP, the presence of anthraquinones, flavonoids, triterpenes, tannins, and xanthones was confirmed. The results of the chemical screening of the ethanol extracts are shown in Table 2. These metabolites are frequently found in Senna species [23].
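Several of the assays above convert raw instrument readings into reference-equivalent values through a calibration curve: trolox for ABTS/ORAC, gallic acid for total phenols, ferrous sulfate for FRAP, and quercetin for flavonoids. A minimal sketch of this conversion via a linear least-squares fit is given below; the standard readings are made-up placeholders, not data from the study.

import numpy as np

# Generic standard-curve conversion (here, the quercetin curve at 425 nm).
# The absorbance values below are illustrative placeholders.
std_conc = np.array([5, 10, 25, 50, 75])              # ug/mL quercetin
std_abs = np.array([0.06, 0.11, 0.27, 0.55, 0.81])    # absorbance at 425 nm

slope, intercept = np.polyfit(std_conc, std_abs, 1)   # linear calibration

def reading_to_equiv(absorbance):
    # Convert a sample absorbance to standard-equivalent concentration.
    return (absorbance - intercept) / slope

print(round(reading_to_equiv(0.42), 1), "ug/mL quercetin equivalents")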
Determination of Antioxidant Activity and Quantification of Flavonoids and Total Phenol Contents. Antioxidant tests can be divided into two types: the direct method (ORAC), which probes the chemical kinetics of a controlled radical chain reaction, and indirect methods (DPPH, ABTS, and FRAP), which are mediated by electron transfer. Each method has its specificity. Some methods involve an acid medium, as in the FRAP assay, others a basic medium, as in the ABTS assay, and others a neutral medium, as in the test of total phenolic content. The pH of the medium can have an important effect on the antioxidant capacity of the compounds [17]. In this work, we report the results for the ethanol extracts because the hexane extracts were negative for flavonoids, which are recognized as having significant antioxidant potential. Additionally, the hexane extracts were not compatible with all the antioxidant screening methods. The results obtained for the antioxidant tests are shown in Table 3. The extract that showed the best performance in the DPPH and FRAP assays was the ethanol extract of the leaves of S. cana (LEESC). According to the literature, direct methods (in this case, ORAC) are more suitable for the evaluation of antioxidant activity, especially those based on the controlled chain reaction model, because in general they are more sensitive. The best practice is always to compare the data obtained by both types of method (direct and indirect) in order to obtain greater analytical confidence [17,24].

Correlation between the Antioxidant Assays. According to the statistical analysis and a comparison of the correlations between the four tests, the strongest correlations (Table 4) among the antioxidant capacity assays were observed between the DPPH and FRAP assays (Figure 3) (r = 0.8908; n = 5) and between the ABTS and ORAC assays (Figure 4) (r = 0.8353; n = 5). In the spectrophotometric quantification of the total flavonoid content (Table 5), the ethanol extract of the leaves of S. cana (LEESC) presented the best result (228.9 mg·g−1), followed by the ethanol extract of the leaves of S. pendula (LEESP), with a value of 221.1 mg·g−1. Regarding the total phenolic content, the leaf extract of S. cana (LEESC) also presented the best result (724.5 mg GAE·g−1 extract, gallic acid equivalents), consistent with its strong antioxidant performance, since phenolic compounds are excellent natural antioxidants. It is noteworthy that the extracts with the best results outperformed the standards used (BHT and rutin), even though rutin is itself a flavonoid and a natural antioxidant. This suggests that synergism among the compounds present in the extracts may have contributed to this excellent result. Phenolic compounds have been reported in the literature to have good antioxidant activity [25,26]. As in the antioxidant tests, the branch extract of S. cana (BEESC) showed the lowest phenolic content and the second lowest total flavonoid content. These results partly explain the weak performance of this extract in the antioxidant trials, given its lower content of phenolic compounds and flavonoids [7,9]. Another factor that may account for the good performance of the leaf extracts of S.
cana, both in the different antioxidant tests and in the total phenolic content, may be the high content of flavonoids present in the extract, considering that the extract was prepared in an organic solvent, which increases the solubility of these flavonoids in the solution. While the ORAC and Folin-Ciocalteu tests are not suitable for measuring liposoluble antioxidants, ABTS can measure the activity of both water-soluble and liposoluble antioxidants, and DPPH in turn is soluble only in organic solvents [27,28]. For comparison with values reported in the literature, the TEAC value found for the ethanol extract of the leaves of Senna alata was 125 µmol trolox·g−1 (31.29 mg trolox·g−1), well below the values obtained for all extracts studied in this work [29]. According to Liczano [7], the aqueous extract of the leaves of S. reticulata presented an ORAC value of 226.6 µmol trolox·g−1 and a TEAC value of 34.04 µmol trolox·g−1; both results are lower than those found for all extracts presented here. Based on these data, the extracts of S. cana and S. pendula performed excellently in the different antioxidant trials and contain high concentrations of phenolic compounds and flavonoids. Also in comparison with literature values, Mak et al. [30] reported that the ethanol extract of the flowers of S. bicapsularis L. contained total flavonoids and phenols of 12.93 mg quercetin·g−1 extract and 262.23 mg GAE·g−1 extract, respectively. In contrast, the extracts of S. cana and S. pendula studied here contained much higher total flavonoids and phenols: the lowest flavonoid content was found for the ethanol extract of the branches of S. pendula (87.04 mg quercetin·g−1 extract), and the lowest total phenol content for the ethanol extract of the branches of S. cana (473.7 mg GAE·g−1 extract).

Correlation between Total Phenolic Content and Antioxidant Capacity. The best (positive) correlation (Table 6) between total phenolic compounds and antioxidant capacity was obtained in the ABTS assay (r = 0.595982; n = 5), followed by the ORAC assay (r = 0.576957; n = 5).

Correlation between Total Flavonoid Content and Antioxidant Capacity. The best correlation (Table 7) between total flavonoid content and antioxidant capacity was obtained in the ABTS assay (r = 0.29654; n = 5), followed by the ORAC assay (r = 0.28349; n = 5).
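The correlation coefficients reported above are Pearson product-moment coefficients computed over the five ethanol extracts (n = 5). A minimal sketch of the computation, with invented assay values standing in for the five extracts:

import numpy as np

dpph = np.array([0.91, 0.42, 0.85, 0.55, 0.63])  # invented values, one per extract
frap = np.array([0.88, 0.40, 0.80, 0.58, 0.60])

def pearson_r(x, y):
    # Pearson product-moment correlation coefficient.
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

print(f"r(DPPH, FRAP) = {pearson_r(dpph, frap):.4f} (n = {len(dpph)})")
# np.corrcoef(dpph, frap)[0, 1] returns the same value.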
Evaluation of Antiacetylcholinesterase Activity. Plants are promising sources of new drugs, including some used to treat Alzheimer's disease (AD), such as galantamine. The drugs available at present are effective only in the early stages of AD, a period that is very short. Therefore, it is important to search for new drugs of natural origin that inhibit the enzyme acetylcholinesterase (AChE), in both the early and the advanced stages of AD [31,32]. Phenols and flavonoids are important natural products that inhibit acetylcholinesterase and thus restore the level of acetylcholine, which is essential for brain function [33]. Since the Senna extracts contained significant concentrations of phenolic compounds and flavonoids, the evaluation of antiacetylcholinesterase activity was carried out to confirm this aspect. To date, no studies have been published evaluating the anticholinesterase activity of extracts of Senna species. There are only two reports, involving alkaloids isolated from Senna multijuga that had this activity [34,35]. The results obtained for the inhibition of cholinesterase are shown in Table 8. Only the ethanol extracts of the branches and leaves of S. cana showed no activity in this qualitative test. The hexane extract of the bark of S. pendula was the only one that presented significant activity. The hexane extracts of S. cana and the hexane and ethanol extracts of S. pendula presented inhibitory activity and can be considered for future studies against Alzheimer's disease, neurodegenerative diseases, and dysfunction of the cholinergic system. Through quantitative analysis of the anticholinesterase activity, IC50 values could be determined and compared with that of the eserine standard. The extract that presented the best IC50 value was FHESP, in agreement with the qualitative test, where this extract also performed best. Another observation relates to the ethanol extracts of S. cana, which gave a negative result in the qualitative test yet presented satisfactory results in the quantitative test. The LEESC extract presented the third best performance in the quantitative test, which may be related to the high content of phenolic compounds and flavonoids found in this extract. Regarding antioxidant activity, this same extract performed best in the DPPH and FRAP assays and second best in the ABTS and ORAC assays. These results corroborate the finding of Penido et al. [33] that the higher the content of phenolic compounds and flavonoids, the better the performance against inhibition of acetylcholinesterase.

Evaluation of Toxicity against Artemia salina. According to Silva et al. [36], the determination of the toxicity of plant samples against Artemia salina L. allows toxicity to be evaluated with a single parameter: life or death. Therefore, this model is considered a preliminary form of testing, with low cost and easy handling, to identify bioactive compounds. The absence of cytotoxicity of the extracts tested against A. salina is an indicator that the plant can be well tolerated by biological systems. In the in vitro evaluation of toxicity against A. salina, only the BHESC and LHESP extracts were able to cause 50% mortality of the A. salina larvae, and even these presented low toxicity, with LC50 values of 790.94 and 746.35 µg·mL−1, respectively. All values are reported in Table 9. These results confirm toxic action for only these extracts, since only samples with an LC50 below 1000 µg·mL−1 are considered toxic according to the classification described by Meyer [21]. These results support the possible use of the other extracts tested, since, as reported by Simões and De Almeida [37], a sample that shows no toxicity to A. salina is expected to be similarly well tolerated in humans. The toxicity results for the BHESC and LHESP extracts are also valuable, since samples that are toxic to A. salina can contain bioactive compounds with antitumor, antimalarial, trypanosomicidal, and insecticidal potential [38], possible antiplasmodial activity [39], and antimicrobial effects [40]. Parra et al. [41], testing the aqueous extract of the leaves of Senna alata, obtained an LC50 of 7.74 µg·mL−1, far below the toxicity threshold; in other words, that aqueous extract showed high toxicity, unlike the ethanol extracts of S. cana and S. pendula studied in this work.
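LC50 values such as those in Tables 9 and 10 are usually obtained by fitting a dose-mortality curve (probit or log-logistic) and reading off the concentration at 50% mortality; the text does not state which fit was used here, so the sketch below assumes a two-parameter log-logistic model and invented mortality counts:

import numpy as np
from scipy.optimize import curve_fit

conc = np.array([100.0, 250.0, 500.0, 1000.0])  # µg/mL, as in the assay design
dead = np.array([2, 6, 13, 21])                 # invented counts out of 25 larvae
frac = dead / 25.0

def log_logistic(logc, log_lc50, slope):
    # Two-parameter log-logistic mortality curve.
    return 1.0 / (1.0 + np.exp(-slope * (logc - log_lc50)))

params, _ = curve_fit(log_logistic, np.log10(conc), frac, p0=[np.log10(500.0), 2.0])
print(f"LC50 ~ {10 ** params[0]:.0f} µg/mL (slope = {params[1]:.2f})")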
Evaluation of Larvicidal Activity. The results of the larvicidal activity were analyzed and interpreted according to LC50 values. Because of the low solubility of the hexane extracts in DMSO, only the ethanol extracts of each part of the two Senna species were used. According to the literature, for results to be considered good, the sample should have an LC50 below 100 ppm. As seen in Table 10, all the results found for S. cana and S. pendula were well above 100 ppm, meaning that these plants are not considered promising sources of substances with relevant larvicidal activity [22]. Edwin et al. [42] investigated the larvicidal activity of the ethanol and aqueous extracts of leaves and stems of Senna alata: for the aqueous extract, the values were 0.840 (% w/v) for the leaves and 0.935 (% w/v) for the stems, while for the ethanol extract, the values were 0.791 (% w/v) for the leaves and 0.923 (% w/v) for the stems. Unlike the extracts of S. cana and S. pendula, those of S. alata [43] and S. occidentalis [23] presented excellent larvicidal activity. Comparison of the chemical composition of these Senna species reveals a distinctive difference that may explain this large difference in toxicity against Aedes aegypti: S. alata was found to contain cassiaindoline, a dimeric indole alkaloid, a compound that has not been identified in S. cana or S. pendula.

Conclusions. The chemical study of the Senna cana and Senna pendula ethanol extracts revealed the presence of anthraquinones, flavonoids, tannins, triterpenes, and xanthones, while the hexane extracts gave positive results for anthraquinones and triterpenes. The ethanol extract of the leaves of S. cana (LEESC) performed best in all the antioxidant tests, surpassing the standard values, meaning that this species is a promising source of antioxidants, possibly because of the presence of polyphenolic compounds, especially flavonoids, anthraquinones, and tannins, as confirmed by the quantitative analyses. The ethanol extract of the branches of Senna cana (BEESC) presented the worst performance in all the antioxidant trials, the lowest phenolic content, and the second lowest total flavonoid content. These results confirm that the lower the content of phenolic compounds and flavonoids, the worse the antioxidant capacity of the extract. This study determined that the hexane extracts presented the best results for acetylcholinesterase inhibition, which can be attributed to the presence of triterpenes and/or anthraquinones, but the ethanol extract of the leaves of S. cana (LEESC) presented satisfactory activity in the quantitative tests, confirming that the higher the content of phenolic compounds and flavonoids, the better the performance regarding inhibition of acetylcholinesterase. Most of the extracts investigated showed no toxicity against Artemia salina; as stated before, such samples are expected to be similarly well tolerated in humans. None of the extracts presented relevant larvicidal activity. The results obtained in this work indicate that these species are sources of substances with promising pharmacological activities, with emphasis on antioxidant activity, mainly in extracts from the leaves of Senna cana.

Conflicts of Interest. The authors declare that they have no conflicts of interest.
5,451.6
2018-04-02T00:00:00.000
[ "Environmental Science", "Medicine", "Biology" ]
The Effect of the pH of Ammonium Nitrate Solution on the Susceptibility of Mild Steel to Stress Corrosion Cracking (SCC) and General Corrosion

This work investigates the relative aggressiveness of nitrate solutions at different pH values towards stress corrosion cracking (SCC) and general corrosion of mild steel. Electrochemical behavior and stress corrosion cracking susceptibility measurements were carried out in 52 wt% ammonium nitrate solutions at 368 K and various pH values ranging from 0.77 to 9.64. Constant load stress corrosion tests at 90% of the yield stress were conducted. Tested specimens were prepared and examined using the scanning electron microscope (SEM). The potentiodynamic polarization curves for different pH values confirmed the validity of the gravimetric measurements, and the mechanism of cracking was attributed to stress-assisted dissolution.

Introduction. Stress corrosion cracking (SCC) can lead to rapid and catastrophic failure in many different metals and alloys. This phenomenon occurs under conditions where a component is exposed to a mildly corrosive environment while under applied and/or residual tensile stress. Hence, metal parts with severe SCC can appear bright and shiny whilst being filled with microscopic cracks [1]. These factors, along with the rapid progress of SCC, make it common for SCC to go undetected prior to failure. Steel/nitrate interaction is an issue in nitrogenous fertilizer plants, waste heat recovery boilers (WHRBs) in power generating plants, and nuclear waste storage [2].

The effect of pH and type of nitrate solution has been investigated [3]; it was concluded that the order of decreasing aggressiveness of nitrate solutions corresponded to the order of increasing (initial) pH for a given chemical strength, i.e., NH4+, Ca2+, K+, Na+. The aggressiveness of ammonium nitrate compared to other nitrates was attributed to its lower pH. Parkins [4] reported that the marked decrease in potency at initial pH values in the region of 4, compared to either slightly higher or lower values, probably depends upon pH changes in the solution during the test. That work was concerned with changes in the pH of the bulk solution and not with the pH at the crack tip region, which is undoubtedly more acidic. Steel has been characterized as being very susceptible to SCC at near-neutral pH [5].

In this work a comprehensive study of the effect of the pH of ammonium nitrate solution on the susceptibility of mild steel to stress corrosion cracking and general corrosion was carried out. The results indicate that at some pH values general and localized corrosion were the cause of failure, while at other pH values stress corrosion cracking was the cause. The severity was confirmed by calculation of the crack growth rate, by the morphology of the fracture surface under SEM, and by polarization work.

Material. The work was carried out on mild steel of the following composition (wt%): C 0.070, Mn 0.300, Si 0.093, S 0.044, P 0.019. The material was supplied in the form of 19 mm diameter rods. The corroding solution was prepared using ammonium nitrate.

Electrochemical Measurement. The steel rods were hot-rolled at 1200 K to 4 mm thick strips. These were reheated to 1200 K in the furnace for 900 s and then allowed to cool to room temperature. Most of the oxide film was removed by pickling in 30% HCl solution, and the surface was finally cleaned for cold-rolling by mechanical abrasion. The strips were reduced to 0.5 mm thickness by cold-rolling.
Samples of 20 mm by 13 mm were prepared. A 3 mm hole was drilled at one end to suspend each sample; the specimens were then degreased with ether and annealed at 1200 K for 3.6 ks. The specimens were attached to the holder, and the whole assembly was coated apart from an area of 100 mm2 on one face.

Stress Corrosion Measurement. The steel rod was hot-rolled at 1200 K and swaged cold to approximately 10 mm diameter. It was then annealed at 1200 K for 900 s, furnace cooled to 850 K, and air cooled to room temperature. The specimens were machined from the rod as shown in Figure 1. They have a gauge length of 15.8 mm and a gauge diameter of 3.2 mm.

General Corrosion Testing. Samples of 40 mm by 15 mm were prepared. A 3 mm hole was drilled at one end to suspend each sample; the specimens were then degreased with ether and annealed at 1200 K for 3.6 ks.

Electrochemical Measurements. For electrochemical measurements on unstressed specimens, a glass cell comprising two compartments was designed. The main compartment contained the working electrode and the platinum counter electrode. The reference compartment contained a saturated calomel electrode. The complete cell is shown in Figure 2. The two compartments were connected by a salt bridge with a Luggin capillary. The glass joints that carried the working and counter electrodes also had a screw-cap joint for the thermometer. There were two other openings in the main compartment, one for a water condenser and the other for gas and solution inlet when working with a de-aerated system. The reference compartment had a thermometer, gas inlet, and liquid inlet together with the saturated calomel electrode in one joint. The cell capacity is 0.4 dm3 of test solution. Only the main compartment of the cell was immersed in an oil bath controlled at the required temperature; the reference compartment was held at room temperature.

Stress Corrosion Measurement. The majority of the work was conducted using the constant load method. The tensile properties of the material were measured in triplicate using an Instron tensile testing machine. In all the constant load tests, the applied load was 90% of the predetermined yield stress. For electrochemical measurements on stressed specimens, a glass cell consisting of two compartments was used; the main compartment contained the stress corrosion specimen and the platinum counter electrode. The reference compartment contained the saturated calomel electrode, similar to the reference compartment described above. The two compartments were connected by a salt bridge with a Luggin capillary. The capacity of the cell is 0.25 dm3, and the details are shown in Figure 3.

General Corrosion Measurement. A flat-bottomed one-liter glass vessel with two necks was used. One neck held the water condenser; the specimens were suspended by a glass hook from the lower end of the condenser. A thermometer was inserted through the second neck.

Results and Discussion. To validate the results, all measurements were conducted at least three times under each specific environment.
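For orientation, the constant load implied by "90% of the yield stress" on the 3.2 mm gauge diameter can be reconstructed from the yield stress quoted in the Results below (206.5 MN m−2); the short sketch that follows is illustrative arithmetic, not a value reported by the paper:

import math

yield_stress = 206.5e6             # Pa (206.5 MN m-2, from the Results)
d = 3.2e-3                         # gauge diameter, m
area = math.pi * d ** 2 / 4        # cross-sectional area, m^2 (~8.04 mm^2)
load = 0.90 * yield_stress * area  # constant applied load, N
print(f"applied load ~ {load:.0f} N (~{load / 1000:.2f} kN)")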
Stress Corrosion Life and Corrosion Potential. All stress corrosion tests were carried out under a constant load of 90% of the yield stress (206.5 MN m−2). A series of stress corrosion tests was carried out to determine the stress corrosion life in 52 wt% NH4NO3 solution at 368 K at various pH values ranging from 0.77 to 9.64. The corrosion potential was also recorded during the tests. Figure 7 shows the whole range of potential change, the initial and final pH of the solution, and the stress corrosion life for some of these tests. For lower pH values, the solution was more basic at the end of the test, while at high pH values (5.7 and above) it became more acidic. While the results indicate that the critical range for cracking (i.e., where the stress corrosion life is minimum) is between pH 3.0 and pH 7.5, cracking occurs even at pH 9.6, though only after a very long period of time. Staehle [6] reported that, with respect to pH, a change of one pH unit changes the solubility of the oxide by three orders of magnitude for trivalent ions such as Fe3+ and by two orders of magnitude for divalent ions such as Fe2+. It has been reported that nuclear wastes are all alkaline, with pH in the range 11-14; even under these highly alkaline conditions, the presence of certain constituents, such as nitrates, can make carbon steel susceptible to SCC [7]. Other researchers indicated that elevated pH values, long considered to inhibit corrosion, did not have a dominant effect on the susceptibility to SCC in the range 10-13.5 [8,9]. Figure 8 summarizes the relation between the pH of the solution and the corrosion potential at different times during the test. The graph shows no straightforward systematic correlation between the pH of the solution and the potential.

Metallographic Examination. Selected specimens were prepared for examination using the scanning electron microscope and the ordinary metallographic microscope. The results are given in Table 1. Figures 9(a) and 9(b) show the fracture surface of a specimen stress corroded in a solution of pH 0.77: the first shows an area where localized attack has occurred, whilst the second shows the ordinary structure. The crack growth rate was very small (3 nm s−1), with a large reduction of the specimen diameter (660 µm), indicating that severe general and intergranular corrosion occurred during the test. Very heavy attack was observed when specimens were broken in a solution of pH 4.2. Figure 10 illustrates part of the fracture surface. It is clear that cracking occurred and that the cracking rate was very high (71 nm s−1), with a very small reduction in diameter (8 µm), which indicates the severity of such an environment. Figure 11 shows a specimen stress corroded in a solution of pH 9.64. In summary, the metallographic results indicate: 1) the presence of stress corrosion cracking at high pH values (pH 9.64; Figures 6 and 11 and Table 1); 2) the high potency of the solution at pH 2.78 compared to lower or higher pH values (Figure 4); 3) in the solution of pH 0.77, SCC associated with a high rate of general attack (Figures 4 and 9 and Table 1). Sridhar et al. [10] reported that intergranular stress corrosion cracking (IGSCC) has generally not been observed at pH greater than 11.0. According to the potential-pH (Pourbaix) diagram [11], the dangerous zone in which SCC is caused by nitrate solutions lies between pH 2.2 and 5.2, with corresponding potentials between 0 and 1000 mV (SCE). Parkins et al. [12] reported that changes in the pH during the test relate more closely to the results than the initial pH values; they showed that the time to failure correlated more significantly with the final pH of the solution than with the initial values.
Relating the results shown in Figures 4, 5 and 6 to the stages of SCC indicates the following:
- There are no sudden jumps to more active potentials before failure at the very low pH value (0.77), and hence no indication of a fast propagation period [13]. This suggests that at this pH value general corrosion, not stress corrosion, is the predominant factor.
- In the pH range 2.0-8.0, oscillations in potential are evident, occurring over a period representing 20-25% of the total life. This suggests that variations in pH above and below the natural pH of 4.2 do not affect the percentage of the total time taken up by the fast propagation period [13].
- At relatively high pH (9.64), the duration of the fast propagation period is relatively unchanged, but it occupies only about 1% of the total life. This can be attributed to the very low rate of attack at such a high pH value.

Corrosion Rate and Corrosion Potential Measurement on Unstressed Specimens. Figure 12 shows the effect of the pH of the solution on both the general corrosion rate and the stress corrosion life. This figure indicates that the increased stress corrosion life at lower pH values was associated with high general corrosion, while at higher pH values (greater than 7.5) it was associated with low general corrosion. The variation of the corrosion potential with time for unstressed specimens was measured over a period of ~70 ks at different pH values ranging from 1.05 to 9.13 (Figures 13 and 14). The potential moved in the more noble direction as the test proceeded over almost the whole range of investigated pH, except at pH 9.13; at this pH the potential was initially less noble, and after approximately 30 ks more noble values were observed. From Figures 12, 13 and 14, the following are observed:
- High general corrosion at lower pH values is accompanied by a more active potential.
- The insensitivity of the corrosion rate to pH between 4.2 and 6.9 is associated with an unchanged corrosion potential.
- The decrease in the corrosion rate with increasing pH in the range 6.0-8.5 is characterized by erratic changes in the corrosion potential; e.g., it is comparatively noble at pH 7 yet more active at higher values. This behavior probably reflects differences in the solubility of the corrosion product in solutions of different pH.

Figure 15 shows the effect of stress on the maximum corrosion potential during testing in 52 wt% NH4NO3 at 368 K at different pH values. The stress appears to shift the corrosion potential to more active values in the range of the critical pH.

Crack Growth Rate. From the microscopic examination and the stress corrosion lives at different pH values reported in Table 1, the crack growth rate was calculated by dividing the maximum measured crack depth by the total time to failure (a worked example of this arithmetic follows the polarization results below). The value obtained is only an estimate of the rate, since it does not take into account the time needed to initiate cracks, and the growth rate of any observed cracking is assumed constant throughout the exposure period. These estimates provide a simple, semi-quantitative diagnostic to classify SCC propensity. The following points regarding the crack growth rate are concluded from Table 1:
- In the solution of pH 0.77 the crack growth rate was only 3.2 nm s−1, but there was a large reduction (18%) in diameter, which clearly indicates that general corrosion was dominant.
- The crack growth rate at pH 2.78 is 17.7 nm s−1 while it is 71 nm s−1 at pH 4.2; the latter environment is thus clearly more prone to SCC.
- In the solution of pH 8.8 the crack growth rate was 2.6 nm s−1, probably because of the longer initiation period.

Potentiodynamic Polarization. Figure 16 shows the potentiodynamic polarization curves for 52 wt% NH4NO3 solution at 368 K at pH values ranging from 2.04 to 8.35. Scanning started approximately 200 mV more negative than the open circuit potential (OCP) and proceeded in the noble direction to more than +1500 mV, using a sweep rate of 0.33 mV s−1. Several distinct characteristics for solutions of different pH are listed below.

For pH 2.04: 1) an active dissolution regime between −500 mV (SCE) and −150 mV (SCE); 2) a first passive plateau at a current density of 2 × 10^3 A m−2 between −150 mV (SCE) and 0 mV (SCE); 3) a broad active-to-passive transition peak starting at 0 mV with a corresponding current density of 0.85 × 10^3 A m−2; 4) a second dissolution regime between +500 mV and 600 mV; 5) a second passive plateau at a current density of 1.3 × 10^3 A m−2 between 600 mV and 1300 mV (SCE); 6) a transpassive regime starting at 1300 mV (SCE).

For pH 4.2: 1) an active dissolution regime between −500 mV (SCE) and −350 mV (SCE); 2) a first active-passive transition starting at −350 mV with a corresponding current density of 1 × 10^3 A m−2; 3) an active dissolution regime between −300 mV (SCE) and −150 mV (SCE); 4) a second active-passive transition starting at −150 mV (SCE) with a corresponding current density of 3 × 10^3 A m−2; 5) a transition peak at 0 mV with a corresponding current density of 0.6 × 10^3 A m−2; 6) a passive plateau at a current density of 0.4 × 10^3 A m−2 between 500 mV (SCE) and 1400 mV (SCE); 7) initiation of the transpassive regime at 1450 mV (SCE).

For pH 6.97: 1) an active dissolution regime between −500 mV (SCE) and −150 mV (SCE); 2) a passive plateau at a current density of 3.5 × 10^3 A m−2 between −100 mV (SCE) and +500 mV (SCE); 3) a sudden decrease in the current density to 0.2 × 10^3 A m−2 at +520 mV (SCE); 4) a passive plateau at a current density of 0.2 × 10^3 A m−2 between 550 mV (SCE) and 1200 mV (SCE); 5) initiation of the transpassive regime at 1200 mV (SCE).

For pH 8.35: 1) slow dissolution from −250 mV (SCE) to 800 mV (SCE) up to a maximum current density of 0.1 × 10^3 A m−2; 2) further dissolution from 800 mV (SCE) to 1200 mV (SCE) up to a maximum current density of 0.9 × 10^3 A m−2; 3) a passive plateau at a current density of 1 × 10^3 A m−2 between 1200 mV (SCE) and 1600 mV (SCE); 4) initiation of the transpassive regime at 1600 mV (SCE).

From the above it is clear that the potentiodynamic polarization curves again confirm the validity of the gravimetric measurements and show the influence of pH on the anodic dissolution characteristics of mild steel. These results are in accord with previous research [14].
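A worked example of the crack growth rate estimate defined above (maximum measured crack depth divided by total time to failure, initiation time ignored); the depth/time pairs below are invented stand-ins, not Table 1 entries:

# pH -> (max crack depth in m, time to failure in s); invented values.
cases = {
    0.77: (55e-6, 17.2e3),
    4.20: (640e-6, 9.0e3),
}
for ph, (depth, t_fail) in cases.items():
    rate_nm_s = depth / t_fail * 1e9
    print(f"pH {ph}: ~{rate_nm_s:.1f} nm/s (lower bound: initiation time ignored)")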
Conclusions. At very low pH values, stress corrosion cracking is associated with a very high rate of general corrosion. In the region of pH 2.0 to 4.2 the stress corrosion life is relatively unchanged and the general corrosion rate decreases with increasing pH. Between pH 4.2 and 6.0, the corrosion rate and stress corrosion life are almost constant. In the region of pH 6.0 to 7.5, the stress corrosion life increases slightly and the corrosion rate decreases. Above pH 7.5, there is a noticeable increase in the stress corrosion life while the general corrosion rate shows a marked decrease. Experimental observation suggests that an oxide film of critical physical properties is formed on immersion. This film suffers localized breakdown at the grain boundaries. These limited grain-boundary micro-fissures will only propagate if aided by stress, which facilitates the continuing action of the corrosion process. Subsequently, a layer of stifling corrosion product is precipitated again, and the cyclic process is repeated until failure. The local dissolution rate at the crack tip accelerates with test time, which can be attributed to the continuous increase in stress concentration. This reflects the interaction of stress and anodic dissolution during the SCC process.

Figures 4, 5 and 6 show the changes in the corrosion potential during the stress corrosion tests at different pH values. In solutions of pH 2.78 and above, jumps in the corrosion potential to more negative values were observed before failure occurred; at pH 0.77, no oscillations were observed.

Figure captions:
Figure 4. Effect of pH on the corrosion potential/time behavior of mild steel during stress corrosion testing in 52 wt% NH4NO3 at 368 K.
Figure 5. Effect of pH on the corrosion potential/time behavior of mild steel during stress corrosion testing in 52 wt% NH4NO3 solutions at 368 K.
Figure 6. Effect of pH on the corrosion potential/time behavior of mild steel during stress corrosion testing in 52 wt% NH4NO3 at 368 K (pH 9.64).
Figure 7. The relationship between stress corrosion life, corrosion potential variations, and solution pH changes for mild steel in 52 wt% NH4NO3 solutions at 368 K.
Figure 8. Effect of pH on the corrosion potential of mild steel under applied stress in 52 wt% NH4NO3 solutions at 368 K.
Figure 9. Fracture surface morphologies of a stress corrosion specimen after failure in NH4NO3 solution of pH 0.77 at 368 K (×950): (a) localized intergranular attack and cracking; (b) ductile failure.
Figure 10. Fracture surface morphology of a specimen after stress corrosion failure in 52 wt% NH4NO3 solution of pH 4.2 at 368 K, showing the high degree of intergranular attack (×250).
Figure 11. Catastrophic crack formation in a stress corrosion specimen after failure in 52 wt% NH4NO3 solution of pH 9.64 at 368 K.
Figure 12. Effect of pH on the corrosion rate and stress corrosion life of mild steel in 52 wt% NH4NO3 solutions at 368 K.
Figures 13 and 14. Effect of pH on the corrosion potential/time behavior of mild steel in 52 wt% NH4NO3 solutions at 368 K.
Figure 15. Effect of pH on the maximum corrosion potential value attained for mild steel in 52 wt% NH4NO3 solutions at 368 K.
Figure 16. Effect of pH on the potentiodynamic anodic polarization behavior of mild steel in 52 wt% NH4NO3 solution at 368 K (sweep rate 0.33 mV s−1).
Table 1. The effect of pH on the stress corrosion life and morphology of attack in 52 wt% NH4NO3 solution at 368 K. A: fine cracks; B: wide cracks; C: cracks visible to the naked eye.
5,105.8
2010-10-29T00:00:00.000
[ "Materials Science" ]
Ketocarotenoid Biosynthesis Outside of Plastids in the Unicellular Green Alga Haematococcus pluvialis

The carotenoid biosynthetic pathway in algae and plants takes place within plastids. In these organelles, carotenoids occur either in a free form or bound to proteins. Under stress, the unicellular green alga Haematococcus pluvialis accumulates secondary carotenoids, mainly astaxanthin esters, in cytoplasmic lipid vesicles at up to 4% of its dry mass. It is therefore one of the favored organisms for the biotechnological production of these antioxidative compounds. We have studied the cellular localization and regulation of the enzyme β-carotene oxygenase in H. pluvialis, which catalyzes the introduction of keto functions at position C-4 of the β-ionone ring of β-carotene and zeaxanthin. Using immunogold labeling of ultrathin sections and Western blot analysis of cell fractions, we discovered that under inductive conditions β-carotene oxygenase was localized both in the chloroplast and in the cytoplasmic lipid vesicles, which are (according to their lipid composition) derived from cytoplasmic membranes. However, β-carotene oxygenase activity was confined to the lipid vesicle compartment. Because an early carotenogenic enzyme in the pathway, phytoene desaturase, was found only in the chloroplast (Grünewald, K., Eckert, M., …)

Carotenoids play major roles in oxygenic photosynthesis, where they function in light harvesting and protect the photosynthetic apparatus from excess light by energy dissipation (1). Carotenoids that fulfill these processes are commonly referred to as primary carotenoids, because they are essential for the basic metabolism of the organism. In contrast, secondary carotenoids (SC) are defined functionally as carotenoids that are not obligatory for photosynthesis and are not localized in the thylakoid membranes of the chloroplast (2). SC function in specific stages of development (e.g. flower, fruit), mainly for coloration, or under extreme environmental conditions. In plants, SC are often accumulated in special structures, for instance in plastoglobuli of chromoplasts. In some green algae, however, SC accumulate outside the plastid in cytoplasmic lipid vesicles. One typical example is the unicellular microalga Haematococcus pluvialis, well known for its massive accumulation of ketocarotenoids, mainly astaxanthin and its acyl esters, in response to various stress conditions, e.g. nutrient deprivation or high irradiation (3). Different functions of SC in H. pluvialis, such as acting as a sunshade (4), protecting from photodynamic damage (5), or minimizing the oxidation of storage lipids (6), have been proposed. There is growing commercial interest in the biotechnological production of astaxanthin because of its antioxidative properties and the increasing amounts needed as a supplement in the aquaculture of salmonids and other seafood (7). H. pluvialis is one of the preferred microorganisms for this purpose because it accumulates SC at up to 4% of its dry mass (3). The pathway of astaxanthin biosynthesis in H.
pluvialis was elucidated by inhibitor studies (8), and most of the genes involved have been cloned (6,9,10). In higher plants and green algae, the carotenoid precursor isopentenyl pyrophosphate (IPP) is derived from the DOXP pathway (synonyms: nonmevalonate or MEP pathway; Ref. 11). For SC synthesis in H. pluvialis this was confirmed by inhibitor studies (12). The first specific steps in carotenogenesis lead to the formation of the tetraterpene phytoene. Following desaturation and β-cyclization, β-carotene is formed. The subsequent steps in the pathway leading to astaxanthin in H. pluvialis are catalyzed by β-carotene hydroxylase (10) and β-carotene oxygenase (CRTO; synonym β-carotene ketolase, BKT; for a recent review see Cunningham and Gantt, Ref. 13, and Fig. 1). Little is known about the regulation of SC synthesis in vivo in response to stress. The gene for CRTO, the enzyme studied in this paper, was cloned from two different strains of H. pluvialis by Lotan and Hirschberg (14) and Kajiwara et al. (15). A series of β-carotene oxygenases (among them one from H. pluvialis) and bacterial β-carotene hydroxylases were characterized in vitro with respect to substrate specificity and cofactor requirements (16,17). Moreover, conversion of β-carotene by cell extracts of H. pluvialis was reported (18). Recently, we studied the regulation and compartmentation of phytoene desaturase (PDS), an early enzyme of the carotenoid biosynthetic pathway (19). The enzyme is up-regulated at the mRNA level during SC synthesis and localized exclusively in the chloroplast. This is consistent with the common hypothesis that in plants, including algae, carotenoids are synthesized exclusively within plastids (11). H. pluvialis is distinguished in that it accumulates large amounts of carotenoids in lipid vesicles outside the plastid (3,20). This has given rise to speculation about the possible existence of a biosynthetic pathway specific for secondary carotenogenesis that is localized in the cytoplasm, as supported by the existence of two different IPP isomerases in H. pluvialis (6). However, no extra pathway specific for SC biosynthesis in the cytosol of H. pluvialis was found at the level of PDS (19). It was therefore hypothesized that carotenoids are transported from the site of biosynthesis (chloroplast) to the site of accumulation (cytoplasmic lipid vesicles). Here, we present a study of the origin of these lipid vesicles as well as the regulation and compartmentation of the SC-specific ketolase CRTO in flagellates of H. pluvialis, using immunolocalization and cell fractionation techniques. Our results indicate that the last oxygenation steps in the astaxanthin biosynthesis pathway take place outside the plastid in the cytoplasmic lipid vesicles; this is discussed relative to the role of this sequestering structure in SC accumulation.

Cell Growth Conditions—H. pluvialis Flotow (No. 192.80, culture collection of the University of Göttingen, Germany; synonym: Haematococcus lacustris (Girod) Rostafinski) was grown autotrophically in a two-step batch cultivation system as described (21). Following precultivation for 5 days at 25 µmol photons m−2 s−1 of white fluorescent light (Osram L36/W25, Berlin, Germany), flagellates in the logarithmic growth phase were exposed to SC-inducing conditions (nitrate-deprived medium and 150 µmol photons m−2 s−1 of continuous white light), leading to accumulation of SC in the flagellated developmental state of H. pluvialis (21).
These flagellates, surrounded by a thin extracellular matrix, are more accessible to biochemical and ultrastructural analysis than the thick-walled and resistant aplanospore state. Photon flux densities were measured using a LI-189 photometer (LI-COR, Lincoln, NE), and cell number was determined using a CASY 1 cell counter (Schärfe Systems, Reutlingen, Germany). At the time points specified, sample aliquots corresponding to a defined cell number were collected by centrifugation at 1,400 × g for 2 min.

Preparation of Cell Fractions—Cell fractions were prepared by gentle filtration rupture, which produced less contamination of the lipid vesicle fraction by light harvesting complexes (LHC) and chlorophylls than sonication. Aliquots of cells were harvested by centrifugation at 1,400 × g for 2 min and resuspended in break buffer consisting of 0.1 M Tris-HCl, pH 6.8, 5 mM MgCl2, 10 mM NaCl, 10 mM KCl, 5 mM Na2EDTA, 0.3 M sorbitol, 1 mM aminobenzamidine, 1 mM aminohexanoic acid, and 0.1 mM phenylmethylsulfonyl fluoride. The hyperosmotically shocked cells were broken by passage through a 10-µm Isopore polycarbonate filter (Millipore, Eschborn, Germany). The filtrate was centrifuged at 10,000 × g for 10 min at 4°C to yield a chloroplast and cell debris pellet. The supernatant was transferred to a fresh tube and centrifuged again at 10,000 × g for 10 min at 4°C. The suspension below the lipid vesicle fraction floating on top was transferred to a fresh tube and centrifuged at 76,000 × g for 2 h at 4°C. The resulting microsome pellet was separated from the supernatant fraction. All fractions were stored at −20°C.

Lipid Analysis—Cell aliquots or lipid vesicle preparations were extracted essentially as described (22). Lipids were then separated on HPTLC plates (Merck, Darmstadt, Germany), developed for two-thirds of the plate in chloroform/methanol/acetic acid/water, 73:25:2:4 (v/v/v/v), to separate the polar lipids and subsequently, in a second development with hexane/diethyl ether/acetic acid, 85:20:1.5 (v/v/v), over the whole plate to separate the neutral lipids from the pigments. Lipids were identified by cochromatography of standard substances and by color reactions with different spray reagents (ninhydrin for the free amines of phosphatidylethanolamine (PE) and phosphatidylserine (PS); α-naphthol for glyco- and sulfolipids; molybdenum blue for phospholipids; Dragendorff's reagent for the quaternary amines of phosphatidylcholine (PC) and diacylglyceryltrimethylhomoserine (DGTS)). Quantification of individual lipids was performed densitometrically after visualization with Godin's spray reagent (23) and calibration with standard substances. For quantification of DGTS and PS, the calibration data of PE and PC were used, respectively.

Antibody Preparation—The 17-amino acid peptide LPHCRRLSGRGLVPALA, corresponding to the C terminus (residues 304-320) in the predicted sequence of BKT (15) and residues 315-329 (with the last three amino acids missing) in the predicted sequence of CRTO (14), was chemically synthesized and purified (Alpha Diagnostics International, San Antonio, TX). The peptide was coupled to thyroglobulin by means of glutaraldehyde and used for immunization of rabbits to raise polyclonal antibodies as described (24). The raw serum was used without further purification.
Protein Analysis—Cell pellets were thawed on ice, suspended in break buffer, and broken by sonication for 1 min on ice (Vibra-Cell 72405 sonication processor, Sonics & Materials, Danbury, CT; pulse mode, 0.75 s on, 1 s off, 60 W output). Break buffer with SDS was added to yield a final concentration of 2% SDS (w/v). Solubilization, especially of hydrophobic proteins like CRTO, was carried out with shaking at 2,000 rpm for 2 h at 20°C. Samples were centrifuged to remove unsolubilized material, and sample loading buffer was added to final concentrations of 50 mM Tris-HCl, pH 6.8, 2% SDS (w/v), 10% glycerol (v/v), and 0.01% bromphenol blue. Cell fractions were thawed on ice and resuspended in break buffer with 2% SDS (w/v), and solubilization was performed as described for total cell extracts. Before loading, cell aliquot samples were boiled for 5 min. Proteins were separated on 12% SDS-polyacrylamide gels essentially as described (25). For Western blot analysis, the gels were electrophoretically transferred semidry onto nitrocellulose membranes (Schleicher & Schuell, Dassel, Germany) and treated with Ponceau S to stain the protein ladder transiently. Membranes were blocked in blocking buffer containing 5% (w/v) nonfat dry milk, 1% Tween 20 (v/v), 150 mM NaCl, and 25 mM Tris-HCl, pH 7.6, at 4°C overnight. The blots were then challenged with anti-CRTO antibodies in blocking buffer at 1:250 dilution for 1 h at 4°C and thereafter with a secondary antibody-alkaline phosphatase conjugate (Bio-Rad, Munich, Germany) used at 1:500 dilution. After the chromogenic reaction with 5-bromo-4-chloro-3-indolyl phosphate (BCIP) and nitro blue tetrazolium chloride (NBT), the labeling was quantified using densitometry (Scanpack 3.0, Biometra, Göttingen, Germany). Total protein content was determined by means of the detergent-compatible protein assay kit (Bio-Rad, Munich, Germany).

Electron Microscopy and Immunolocalization—For ultrastructural examination, algal cells were harvested at 550 × g for 3 min and then fixed with 0.7% glutaraldehyde, 0.8% paraformaldehyde, and 1% OsO4 simultaneously in growth medium for 25 min at 4°C. After several washes in distilled water the specimens were dehydrated in a graded ethanol series. The 70% ethanol step was performed in the presence of 3% uranyl acetate for 10 min. Cells were embedded in LR Gold (London Resin, London) according to the manufacturer's instructions. Before immunogold labeling, ultrathin sections were cut as described (19) and etched to unmask antigenic determinants (26). Etching was done by floating the grids, section side down, on 2% H2O2 for 2 min at room temperature, followed by three washes in distilled water. The grids were exposed to anti-CRTO antibody at 1:100 dilution, and immunogold labeling was performed as described (19). After poststaining with 3% aqueous uranyl acetate (w/v) for 5 min and 1% aqueous lead citrate (w/v) for 20 s, immunogold-labeled sections were examined in a Zeiss EM 900 electron microscope (Carl Zeiss, Oberkochen, Germany) at 80 kV.

In Vitro Incubations—Incubations were carried out in a total volume of 600 µl under conditions essentially as reported (16,17). Cell fraction aliquots corresponding to 10^7 cells were suspended in break buffer (0.1 M Tris-HCl, pH 6.8, 0.3 M sorbitol, 1 mM aminobenzamidine, 1 mM aminohexanoic acid, 1 mM dithiothreitol, 0.1 mM phenylmethylsulfonyl fluoride) in a total volume of 300 µl.
Following the addition of 295 µl of cofactor buffer (5 mM ascorbic acid, 1 mM dithiothreitol, 0.5 mM FeSO4, 0.1% deoxycholate (w/v), 0.5 mM 2-oxoglutarate) and brief mixing, the reaction was initiated by addition of 5 µl of a 1% β-carotene stock solution (w/v) in chloroform. In parallel samples, 100 µM DPA was added to inhibit β-carotene oxygenase. Incubation was performed under continuous stirring for 2 h in the dark at 30°C. Reactions were terminated by freezing the samples in liquid nitrogen.

Pigment Extraction and HPLC Analysis—Cell pellets were extracted quantitatively in 100% acetone at 4°C, and the pigment content was determined spectrophotometrically according to Lichtenthaler (27). Fractions were freeze-dried, and carotenoids were extracted with 200 µl of acetone (the chloroplast fraction was extracted with 500 µl of acetone) at 4°C. In vitro incubations were freeze-dried, and pigments were extracted with acetone at 4°C. Prior to HPLC analysis, samples were filtered and 20% water (v/v) was added. HPLC analysis was performed as described (21).

RESULTS

Origin of the SC-accumulating Lipid Vesicles—Lipid profiles of total extracts from cells drawn after 4 days of exposure to conditions inductive for SC synthesis revealed massive accumulation of TAG during SC synthesis (Fig. 2). Concomitantly, the amount of most membrane lipids, especially of MGDG, decreased, whereas that of DGDG and DGTS increased slightly (Table I). Analysis of the lipid vesicles formed under inductive conditions revealed triglycerides as their predominant lipid class. Membrane lipids accounted for less than 5% (w/w) of the total lipids in this fraction. No MGDG was detectable, and DGDG and DGTS made up half of the membrane lipids in this fraction, besides significant amounts of PC and PE.

Coupled in Vivo Inhibitor Treatments—Application of low concentrations of DPA under conditions inductive for SC synthesis led to accumulation of β-carotene instead of ketocarotenoids in H. pluvialis (8,21,28). Lipid vesicles in the cytoplasm of treated cells appeared yellow instead of red as in control samples, suggesting that β-carotene accumulated in the cytoplasm (8). To substantiate this observation, we determined the pigment composition in different cellular compartments. The results revealed a predominant accumulation of β-carotene inside the lipid vesicles of DPA-treated cells (Fig. 3). To test whether this extraplastidic β-carotene can be converted to astaxanthin, DPA was removed concomitantly with the addition of NF to inhibit carotenoid de novo synthesis at the level of phytoene desaturation. Besides the known bleaching effect of NF, leading to a reduced amount of total carotenoids, a significant decrease in the ratio of β-carotene to ketocarotenoids occurred inside the lipid vesicles (Fig. 3). The pattern of SC in this fraction did not differ significantly from untreated samples, consisting mostly of mono- and diesters of astaxanthin.

Generation of Antibodies against CRTO—Compartmentation studies and regulation analysis of the late steps in SC biosynthesis in H. pluvialis require specific antibodies against the enzymes involved. Attempts to obtain antibodies against the His-tagged C terminus of CRTO, encompassing two-thirds of the polypeptide overexpressed in Escherichia coli, were unsuccessful. Despite poor expression and isolation difficulties caused by the pronounced hydrophobic behavior of the protein, the necessary amounts of the antigen were recovered by Ni2+-affinity chromatography and subsequent purification steps.
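The spectrophotometric pigment determination described above follows Lichtenthaler (ref. 27). The sketch below assumes the commonly quoted coefficients for 100% acetone (µg/mL), which should be verified against ref. 27 before use; the absorbances are invented:

def pigments_acetone(a662, a645, a470):
    # Chlorophylls and total carotenoids in 100% acetone (µg/mL);
    # coefficients as commonly cited for Lichtenthaler-type equations.
    chl_a = 11.24 * a662 - 2.04 * a645
    chl_b = 20.13 * a645 - 4.19 * a662
    carotenoids = (1000 * a470 - 1.90 * chl_a - 63.14 * chl_b) / 214
    return chl_a, chl_b, carotenoids

ca, cb, cx = pigments_acetone(a662=0.65, a645=0.28, a470=0.55)
print(f"Chl a = {ca:.2f}, Chl b = {cb:.2f}, carotenoids = {cx:.2f} µg/mL")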
The generated polyclonal antibodies recognized a series of proteins on Western blots and did not meet the needs of localization experiments, even after parallel immunization experiments and various purification approaches by affinity chromatography. Interestingly, we noticed an increasing oligomerization tendency of the overexpressed antigen, up to the octamer, even under denaturing SDS-polyacrylamide gel electrophoresis conditions, dependent on storage time. Finally, we immunized rabbits with a 17-mer synthetic oligopeptide corresponding to the C-terminal part of the predicted structure of CRTO. Database searches revealed no counterparts of this peptide among plant amino acid sequences.

Abundance of CRTO during SC Accumulation—The ability of the antibodies to recognize less than 30 ng of CRTO in Western blots was verified with the E. coli-overexpressed C-terminal part of the enzyme (data not shown). The abundance of CRTO was examined in total cell extracts of start samples and of samples taken 1, 2, 3, 4, and 7 days after inducing SC synthesis in the flagellates of H. pluvialis by intense illumination and nitrate deprivation (Fig. 4A). No CRTO was observed before the second day after induction. After this time, the amount of a 34-kDa protein increased rapidly in parallel with SC accumulation (Fig. 4B). The apparent molecular mass of the recognized protein was ~3 kDa smaller than predicted from the cDNA sequence of CRTO (14). The preimmune serum did not detect this polypeptide (not shown).

Immunogold Localization of CRTO—The antibodies against the C-terminal 17-mer of CRTO were tested on LR Gold sections of H. pluvialis flagellates, which had previously been shown to present the best combination of structural preservation and maintenance of antigenic structures (19). To prevent the extraction of cellular lipids during the dehydration steps and to ensure full preservation, high-pressure cryofixation in combination with cryodehydration was applied. However, despite a number of modifications of the preparation protocol, the structure of the lipid vesicles could not be improved. Thus, an etching technique was chosen as described (26). During the ethanolic dehydration process before embedding, the lipid vesicles remained intact because of lipid cross-linking by OsO4. Probing the sections with polyclonal antibodies against different photosynthetic proteins (19) did not reveal any signals, because of masking of antigenic determinants by the fixative. To unmask antigenic determinants, sections were exposed to hydrogen peroxide for a defined time span. Accessibility of antigenic determinants after etching was confirmed with anti-LHC and anti-PDS antibodies, which detected the corresponding polypeptides as reported previously (19). After challenging the sections with the polyclonal antibodies raised against the CRTO C-terminal 17-mer, two cell compartments became specifically immunogold-labeled in the course of SC synthesis, namely the chloroplast and, becoming dominant, the lipid vesicles (Fig. 5, Table II; Table II notes: a, after subtraction of the corresponding count found in preimmune-labeled sections; b, S.E. of day 4 and day 0; days 2 and 7 are given for n = 24 and n = 12, respectively). Labeling of the latter compartment was not restricted to the periphery but was scattered throughout the vesicles. The only notable signal in the cytosol was obtained after 2 days of inductive conditions (15%) and was localized in close contact with the Golgi cisternae (not shown). No specific labeling was observed when sections were probed with preimmune serum (not shown).
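The compartment percentages behind Table II amount to simple bookkeeping: gold particles counted per compartment, corrected by the counts obtained with preimmune serum, then normalized. A minimal sketch with invented counts:

# Invented particle counts per compartment (immune vs preimmune sections).
immune = {"chloroplast": 38, "lipid vesicles": 95, "cytosol": 12}
preimmune = {"chloroplast": 5, "lipid vesicles": 7, "cytosol": 4}

net = {c: max(immune[c] - preimmune[c], 0) for c in immune}
total = sum(net.values())
for c, n in net.items():
    print(f"{c}: {100 * n / total:.0f}% of specific label")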
Detection of CRTO in Subcellular Fractions—To corroborate the results of the immunogold localization experiments, four cellular fractions were obtained: (i) a pellet containing mainly the chloroplast, (ii) a supernatant fraction, (iii) microsomes and cytoplasmic membranes, and (iv) the lipid vesicles (19). The polypeptides in each fraction were analyzed by Western blotting using the anti-CRTO antibodies. A 34-kDa polypeptide was observed in the chloroplast membrane fraction (Fig. 6), in the lipid vesicle fraction and, to a small extent, in the microsome fraction. The fractions shown here were derived from flagellates 7 days after the start of SC induction, thus representing the maximum of CRTO protein in total extracts (Fig. 4).

In Vitro Metabolism of β-Carotene in Cell Fractions—To provide additional support for astaxanthin synthesis inside the lipid vesicles, we investigated the ability of various subcellular fractions to metabolize β-carotene in vitro. Cofactors and reaction conditions were essentially as reported recently for recombinant β-carotene oxygenases from different organisms (16,17). Cell fractions were prepared from flagellates exposed for 3 days to SC-inductive conditions, which contained relatively low initial amounts of ketocarotenoids. We observed a conversion of β-carotene to ketocarotenoids in the lipid vesicle fraction, but not in the chloroplast fraction (Fig. 7). The SC product pattern included mainly mono- and diesters of astaxanthin. The pool sizes, i.e. the total of β-carotene and ketocarotenoids, remained constant in all samples. Control experiments with heat-denatured extracts (16) were not feasible because of concomitant degradation of fraction pigments. Therefore, DPA was applied to inhibit β-carotene oxygenase, but it did not completely prevent the conversion of β-carotene in the lipid vesicle fraction. This is consistent with results from in vivo experiments (Fig. 3, Ref. 21).

DISCUSSION

Carotenoid accumulation in plant cells requires specialized accumulation structures (29). Changes in the lipid composition during the period of induction of SC synthesis in H. pluvialis, namely prominent TAG accumulation and a remarkable reduction of the chloroplast-specific lipid MGDG, reflect the microscopically visible formation of lipid vesicles (20,30) and the corresponding changes in the photosynthetic apparatus (30,31), respectively. To elucidate whether the SC-accumulating lipid vesicles of H. pluvialis are derived from the plastid or from cytoplasmic compartments, we analyzed their lipid composition separately. As expected, TAG made up the main part of the lipids in this fraction (95%). Of the membrane lipids, the plastidic MGDG was totally absent, whereas PE, a typical nonplastidial lipid, was found. DGTS, the major membrane-forming lipid in the SC-containing vesicles, which are surrounded by a half-membrane (20), is known to be primarily localized in nonplastidial membranes (32). The second abundant lipid, DGDG, was recently shown to be synthesized in the cytoplasm under nutrient starvation conditions (33). Altogether the results point to a cytoplasmic origin of the lipid vesicles, which present an oleosome-like structure (34).

Low concentrations of DPA inhibit ketocarotenoid biosynthesis by preventing the introduction of oxygen functions; thus, β-carotene accumulates instead of ketocarotenoids (8,21,28).
Furthermore, Harker & Young (8) observed that in cells pretreated with DPA, in the presence of norflurazon, a known inhibitor of phytoene desaturase, SC were formed at the expense of β-carotene. We repeated this experiment with our cultivation scheme, where SC accumulate in the flagellated state of H. pluvialis, thus allowing pigment analysis of the cytoplasmic lipid vesicles after cell fractionation. Surprisingly, the β-carotene that had accumulated inside these lipid vesicles was oxygenated to ketocarotenoids. This implies that the cytoplasmically located lipid vesicles play a role in the synthesis of SC in addition to their function as a storage structure for these compounds. A crucial point in understanding the regulation of secondary carotenogenesis in H. pluvialis is the localization of the enzymes involved. Therefore, immunolocalization using antibodies against SC-specific enzymes was chosen to obtain the corresponding data. The problems that occurred during overexpression of the SC-specific CRTO and its subsequent purification confronted us with its special properties, particularly the very hydrophobic behavior of the enzyme. Probably because of the sequence similarity of CRTO to fatty acid desaturases (13) and the relatively high antigenicity of the conserved di-iron binding regions containing histidine residues (35), the polyclonal antibodies generated against the overexpressed antigen showed cross-reactivity with many other proteins. In contrast, antibodies generated against a 17-mer synthetic oligopeptide representing the C-terminal part of the predicted structure of CRTO, while still recognizing the CRTO polypeptide expressed in E. coli, reacted specifically with a 34-kDa polypeptide in the protein extract of induced H. pluvialis cells. The slight decrease in the apparent molecular mass compared with the predicted size (14,15) might indicate processing of an N-terminal transit signal peptide. From the highly conserved structure of the β-carotene oxygenases from two different strains of H. pluvialis with respect to the oligopeptide used for immunization, and from the polyclonal nature of the antibodies, it can be concluded that isoenzymes of CRTO should have been recognized in our cytoimmunochemical experiments. Additionally, the similar size of the CRTO observed in the chloroplast and in the lipid vesicles does not support the existence of isoenzymes or of different β-carotene oxygenases in H. pluvialis, as was speculated from the two existing sequences (6). More likely, these reflect strain differences. The observed pattern of CRTO induction in parallel to carotenoid accumulation denotes the essential role of the enzyme in SC biosynthesis. β-carotene oxygenase mRNA levels have been shown to exhibit similar kinetics of induction during the first 4 days of our cultivation scheme, but thereafter they declined to 50% of the maximum (19). This behavior was different for the earlier carotenogenic enzyme PDS, which showed parallel changes in the amounts of mRNA and protein (19). Thus, besides regulation at the mRNA level, post-translational mechanisms seem to be involved in CRTO induction. Immunogold labeling of ultrathin sections revealed that, in contrast to PDS, which is localized in the chloroplast only, CRTO is present both in the chloroplast and inside the SC-containing lipid vesicles. 
Interestingly, the signals were not restricted to the vesicle boundary but were distributed throughout the whole lumen of the vesicles, consistent with the observed hydrophobic behavior of CRTO. Because this location was confirmed by Western blot experiments on cell fractions, we conclude that CRTO occurs in both compartments. Colocalization of proteins and carotenoids in sequestering structures has been reported for the carotene globule protein (CGP) in Dunaliella bardawil, a close relative of H. pluvialis (36). This protein is restricted to the periphery of the globules and was suggested to function in stabilizing the β-carotene globule structure within the chloroplast. A similar function, besides the ketolase activity, is unlikely for CRTO because of its low abundance. The discrepancy between the exclusive chloroplast localization of PDS, which is up-regulated during SC biosynthesis (19), and the in vivo and (exclusively) in vitro CRTO activity in the lipid vesicles implies a transport of carotenoid precursors, possibly of β-carotene, across the chloroplast envelope into the cytoplasm, where they are sequestered in the lipid vesicles. However, electron microscopic investigation did not reveal any structure for such a transport, at least not at the level of membrane-enclosed vesicles (30). Two enzymes are involved in the biosynthesis of astaxanthin from β-carotene, the β-carotene C-4 oxygenase (ketolase) and the β-ring hydroxylase. The in vivo and in vitro conversion of β-carotene to astaxanthin in the cytoplasmic lipid vesicles therefore also predicts the occurrence and activity of a β-ring hydroxylase in this compartment. This is of particular interest because a β-carotene hydroxylase exists in the chloroplast too, as is evident from the formation of zeaxanthin. The closest relatives of CRTO, the fatty acid desaturases, are localized in a number of different compartments, among them the chloroplast and the microsomes (13,37). These enzymes act on very hydrophobic substrates, which could be the origin of the ability of CRTO to act in the extraordinary environment of the lipid vesicles. On the other hand, this hydrophobic environment might provide an explanation for the sustained increase of the CRTO protein despite the decreasing mRNA level, by protecting the protein from protease attack. A possible explanation for the mode of action of CRTO in this compartment is that the enzyme, despite its presence throughout the lipid vesicle matrix, is active only at the periphery, where the spatial closeness to ER structures and Golgi vesicles allows access to the needed cofactors. This hypothesis is strengthened by electron microscopic observations of this prominent colocalization (20). Evolutionarily, one could imagine a desaturase engaged in chloroplast fatty acid desaturation that was exported into the cytoplasmic lipid vesicles and subsequently acquired the competence to oxygenate β-carotene. Although a search for transit signal peptides did not yield a clear result, the nuclear-encoded CRTO could first be transported into the chloroplast, as proposed for all plant carotenoid biosynthesis enzymes (11), including the β-carotene hydroxylase (13). The β-carotene oxygenase might then be exported from the chloroplast into the cytoplasm, possibly together with its substrate β-carotene, accumulating in the oleosome-like lipid vesicles. The tendency of the almost complete CRTO antigen to form multimers could play an important role here, causing a changed secondary structure of the complex favorable for transport out of the chloroplast. 
Additionally, the monomer might be the active form of the enzyme, which can be established only in a very hydrophobic environment. This speculation is supported by the fact that the in vitro CRTO activity is increased by the presence of a strong detergent, deoxycholate, in the cofactor buffer (17). Import studies that could test our hypotheses cannot be performed with H. pluvialis due to the reticulate structure of the chloroplast and the difficulties associated with isolating this compartment intact. Further studies will focus on the carotenoid transport from the chloroplast and on the role of the lipid vesicles as storage structures in the regulation of SC accumulation. The latter aspect has recently received attention: Rabbani et al. (38) showed that in the unicellular alga D. bardawil, the secondary β-carotene accumulation in intraplastidic lipid droplets is controlled by the formation of this sequestering structure. Our hypothesis on the origin of CRTO would also imply that the β-carotene accumulation in D. bardawil represents the phylogenetically older type of SC accumulation in plants, conserved in the chromoplasts of higher plants. FIG. 6. Distribution of CRTO in cellular fractions of H. pluvialis flagellates exposed for 7 days to nitrate deprivation and high irradiation. Immunoblots were probed with antibodies raised against the C-terminal domain (1:500). Lane 1 shows total protein extract from 5 × 10⁵ cells; lanes 2-5 represent chloroplast membrane (2), supernatant (3), microsomal (4), and lipid vesicle (5) fraction proteins and were loaded with cell fraction aliquots of 10⁵ cells for the chloroplast membrane fraction and of 5 × 10⁵ cells for the other fractions. FIG. 7. In vitro conversion of β-carotene (β-car) to ketocarotenoids (ketocar) in cellular fractions of H. pluvialis. Cell fractions were prepared from flagellates exposed for 3 days to conditions inductive for SC synthesis. Changes in the portion of β-carotene relative to the sum of β-carotene and ketocarotenoids are shown for the lipid vesicle and chloroplast fractions. Data are related to the start value (100%, white columns) and represent incubation for 2 h without (black columns) and with (hatched columns) 100 µM DPA. The pool size of β-carotene (substrate) and ketocarotenoids (product) did not change during the reaction. Error bars depict the S.E. of 3-6 parallel experiments.
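The quantity plotted in Fig. 7 is simple arithmetic on the carotenoid pool; a minimal Python sketch follows, with invented peak areas standing in for the measured HPLC data.

def beta_fraction(beta, keto):
    """beta-carotene share of the total pool (beta-carotene + ketocarotenoids)."""
    return beta / (beta + keto)

start = beta_fraction(beta=90.0, keto=10.0)   # hypothetical peak areas at t = 0
after = beta_fraction(beta=55.0, keto=45.0)   # hypothetical areas after 2 h incubation
print(f"{100 * after / start:.0f}% of the start value")  # < 100% means conversion occurred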
Fast One-Step Synthesis of Anisotropic Silver Nanoparticles : The shape of metal nanoparticles, along with their size, is critical for most of their applications, as these parameters control their optical properties. Anisotropic metal nanoparticles show superior performance in a number of applications compared to spherical ones. Shape control is usually achieved by a two-step process, where the first step involves the formation of spherical nanoparticles and the second accomplishes the actual shape transformation. In this paper, we report on a fast and facile synthesis of silver nanoplates in a single step, involving laser ablation of a silver target in a liquid medium while this is exposed to light irradiation and a hydrogen peroxide flow. We obtained anisotropic particles with a mixture of shapes, 70-80 nm in size and 10-20 nm in thickness, which showed a plasmon sensitivity greater than 200 nm/RIU. Introduction Metal nanoparticles (NPs) have generated a great deal of interest in a range of fields, including sensing, photonics, biology, and catalysis, owing to the unique interaction of their conduction electrons with electric fields [1][2][3][4]. Such an interaction is commonly known as surface plasmon resonance (SPR) and is strongly related to the size and shape of the NPs, as well as the surrounding medium, which makes them particularly suitable for sensing applications [5]. While spherical NPs are easily obtained by chemical or physical methods, significant effort has been devoted to the synthesis of anisotropic metal NPs, with gold nanorods and silver nanoplates (NPTs) being among the most popular [6,7], due to their ability to better confine light below the diffraction limit, which enhances their sensitivity. Silver NPTs in particular show the highest sensitivity to the refractive index of the surrounding medium among metal NPs [7][8][9][10][11]. Ag NPT synthesis often occurs in two steps: the first step is required to produce spherical NPs, while the second transforms them into flat triangular nanoplates. This is necessary in order to separate the nucleation of new particles from their growth and shape transformation. The most widely used process so far is the so-called seed-mediated growth, where spherical particles are first produced by chemical reduction of Ag ions by NaBH4 and then transformed chemically by hydrazine and citrate [7,12,13]. Other methods have also been proposed in place of the first or second step of the seed-mediated growth, or both. For example, chemically produced NPs can be transformed into NPTs through the use of light irradiation or H2O2 or a combination of both [14][15][16][17][18][19][20][21][22][23]. More recently, Ag NPTs were produced from laser-ablated Ag NPs using light irradiation and hydrogen peroxide (H2O2), fully avoiding the chemistry involved in the seed-mediated growth [10,11,[23][24][25]. A few examples of a single-step synthesis of anisotropic Ag NPs have been reported [26][27][28][29][30]. However, most of the reported processes are very time consuming (from 12 h to 90 h). In this paper, we report on the synthesis of Ag NPTs in a single step, combining the laser ablation process with light irradiation and H2O2, a process that is completed within less than one hour. To our knowledge, this is the first time Ag NPTs have been obtained by a single laser ablation step. We also investigated the plasmon sensitivity of such nanoparticles under a varying refractive index. 
Materials and Methods Ag NPT synthesis took place according to the scheme shown in Figure 1. A quartz, optically transparent reaction vessel was filled with a 1 mM solution of trisodium citrate (TSC). This gave a pH in the region of 8.5-9. A silver target was immersed in the vessel. The beam of a Nd:YAG pulsed laser (Quanta System Spa, Varese, Italy) at 1064 nm (fluence 0.6-1 J/cm2, 5 ns pulse duration, 10 Hz repetition rate) was focused by a lens onto the silver target and produced Ag NPs by laser ablation. During the laser ablation process, a white-light LED (Ekoo IP66, 100 W; see the emission spectrum in Figure S1) illuminated the reaction vessel, and H2O2 flowed into the solution as regulated by a peristaltic pump at a flow of 69 μL/min. The overall setup is schematized in Figure 1. Small aliquots of the solution were taken at regular intervals to monitor the evolution of the reaction by UV-Vis spectroscopy (Agilent Cary 60 spectrometer, Santa Clara, CA, USA). The morphology of the obtained Ag NPTs was characterized by scanning electron microscopy (SEM) using a Zeiss SUPRA 55-VP system (Carl Zeiss Microscopy, Oberkochen, Germany) and atomic force microscopy (AFM) using a Witec Alpha 300 RS system (WITec, Ulm, Germany). For the SEM and AFM analysis, the Ag NPTs were deposited onto a silane-functionalized Si substrate immediately after synthesis. SEM image analysis was performed using the software ImageJ (author: Wayne Rasband, National Institute of Mental Health, Bethesda, MD, USA). Simulations were performed using the commercial COMSOL Multiphysics package (COMSOL Inc., Stockholm, Sweden) in the frequency domain. The simulated spectra were obtained using radiation perpendicular to the flat/larger side of the NPT, with polarization parallel to the same side. For plasmon sensitivity measurements, a solution obtained by the described process (15 mL) was first centrifuged for 20 min at 10,000 rpm. After the supernatant was removed, the deposit was redissolved in 1 mL of water and homogenized by very short ultrasonication (1 min). Then, 100 μL of the solution was added to 3 mL of water for a refractive index of 1.333 and to 3 mL of sucrose solution (22%, 40%, and 50% for refractive indices of 1.367, 1.399, and 1.420, respectively [31]), and the absorption spectrum was then measured. 
Results and Discussion Figure 2 shows the UV-Vis absorption spectra of a solution that was exposed to our process for 30 min at a 69 µL/min H2O2 flow, as well as a solution that was produced by laser ablation only. The latter showed a single sharp feature at 395 nm arising from the surface plasmon resonance of spherical Ag NPs. In contrast, the spectrum from the solution exposed to irradiation and H2O2 flow during the laser ablation process appeared significantly different, as the main feature was red-shifted from that of spherical NPs, and additional features arose, including a low-intensity one at 335 nm. This is strong evidence that the Ag NPs in solution were in this case far from spherical and suggests that nanoplates were formed during the process. The spectrum did not show significant variations when changing the H2O2 flow rate between 23 µL/min and 69 µL/min, while increasing the flow rate above 100 µL/min caused complete oxidation of the formed material, and the corresponding absorption spectrum appeared flat. Similarly, decreasing the TSC concentration to 0.1 mM did not change the absorption spectrum significantly, while increasing it up to 10 mM caused complete oxidation of the formed material (flat absorption spectrum). The main feature around 500 nm can be attributed to the in-plane dipole mode, while the peak at 335 nm can be attributed to the out-of-plane quadrupole mode, which is typical of Ag anisotropic structures [14,32]. The other feature around 400 nm may be attributed to larger spherical NPs, to the in-plane quadrupole and out-of-plane dipole modes of NPTs, or to both. We also performed control experiments using either light irradiation or H2O2 flow only. When only irradiation was used, without introducing any H2O2, the process yielded only spherical NPs, with an absorption spectrum close to that shown in Figure 2. When only H2O2 was introduced into the solution, without light irradiation, the transformation happened only to some extent (see Figure S2). We also performed a further control experiment using no citrate in the solution. In this case, we observed an absorption spectrum with a single broad and asymmetrical feature (Figure S3), suggesting that only a broad distribution of spherical particles was formed. These results indicate that citrate, light irradiation, and H2O2 are all essential ingredients that cooperate in a successful process, as already shown for a similar two-step process [11]. 
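The spectral analysis above rests on locating these SPR features in the measured spectra. The short Python sketch below (not the authors' code) shows one way to pick out such peaks; the spectrum it builds is synthetic, with Gaussians placed at the wavelengths discussed in the text.

import numpy as np
from scipy.signal import find_peaks

wl = np.arange(300, 800, 1.0)                       # wavelength axis (nm)
spec = (0.2 * np.exp(-((wl - 335) / 10) ** 2)       # out-of-plane quadrupole
        + 0.5 * np.exp(-((wl - 400) / 25) ** 2)     # spheres / in-plane quadrupole
        + 1.0 * np.exp(-((wl - 500) / 40) ** 2))    # in-plane dipole

peaks, _ = find_peaks(spec, prominence=0.05)
print(wl[peaks])                                    # -> approximately [335. 400. 500.]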
Figure 3a,b shows representative SEM images of Ag NPTs produced by our process. We can observe Ag NPTs with a mixture of shapes, including triangles, hexagons, circles, and other irregular ones. The size distribution shows that most NPTs were 70-90 nm in size (Figure 3c). Figure 4 shows an AFM image and the profiles obtained from the same samples analyzed by SEM. Figure 4a also shows NPTs with mixed shapes, and Figure 4b shows the thickness ranging between 10 nm and 25 nm. The growth mechanism in this process can be elucidated within the framework of the reactions taking place among citrate, Ag, and H2O2 in the presence of light. It is known that, in the presence of light, citrate is able to reduce Ag+ to Ag0 [33]:

Ag+ + e− → Ag0

At the same time, H2O2 can both oxidize and reduce silver [17,18], with the reduction proceeding as

H2O2 + 2Ag+ + 2OH− → 2Ag + 2H2O + O2 (4)

Besides reducing Ag in the presence of light, citrate also has the function of being a capping agent for Ag NPs, as it easily binds to the (111) facets of silver [34,35]. 
This allows both NP stabilization in solution and their preferential growth along the Ag (100) direction, which is less favored by citrate. Hence, Ag NPs initially produced by the laser ablation process were first oxidized by H2O2, being partially dissolved in the liquid medium, and then reduced back by citrate, supported by H2O2, adding preferentially to the uncapped (100) Ag facets. It is interesting to note that the absorption spectrum shown in Figure 2 did not dramatically change its shape after about 10 min of this process, while the intensity increased and then stabilized over time (Figure S4). This suggests that an equilibrium was reached within the solution between the production of NPs by laser ablation and their conversion into NPTs and that, after reaching this equilibrium, the process only increased the NPT concentration in the solution. Simulations of the extinction spectra were performed to support and complement the experimental results. Figure 5 shows simulated extinction spectra for NPTs of round and hexagonal shape, with sizes of 70 nm and 82.5 nm and thicknesses of 10 and 20 nm, representative of the structures observed by SEM and AFM. Figure 6 shows the experimental absorption spectrum, already shown in Figure 2, together with the simulated ones. It can be observed that the simulated spectra were mainly in agreement with the longer-wavelength part of the experimental spectrum. This partial disagreement might be due to the fact that the simulated spectra were obtained for isolated NPTs, thus not taking into account the interaction among the particles. However, this aspect is still unclear and will be subject to further investigation. The obtained NPTs were finally tested for plasmon sensitivity (S), i.e., the variation of the plasmon resonance peak position due to a change in the refractive index (S = Δλ/Δn), which is typically reported in nm/RIU (refractive index unit). To change the refractive index, we used sucrose solutions at set concentrations, since the relation between the sucrose concentration in water and the refractive index is well established [31]. Figure 7 shows the plasmon resonance peak position of a Ag NPT solution produced by our process as a function of the refractive index, along with the data simulated for disks of 70 nm in size and 20 nm in thickness. Fitting the data yielded a plasmon sensitivity S of 216 nm/RIU, close to that obtained by the simulations. 
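The fit behind the quoted 216 nm/RIU is a straight line of peak wavelength against refractive index. The sketch below (not the authors' analysis code) uses the sucrose refractive indices given in the text; the peak wavelengths are hypothetical placeholders chosen only to reproduce a slope of that size.

import numpy as np

n = np.array([1.333, 1.367, 1.399, 1.420])         # water and the sucrose solutions [31]
lam_peak = np.array([500.0, 507.3, 514.2, 518.8])  # hypothetical SPR peak positions (nm)

S, lam0 = np.polyfit(n, lam_peak, 1)               # linear fit: lam = S * n + lam0
print(f"plasmon sensitivity S = {S:.0f} nm/RIU")   # -> ~216 nm/RIU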
This value is comparable to others, typically below 250 nm/RIU, reported in the literature for Ag NPTs with a similar plasmon resonance [7][8][9][10][11], with the advantage of obtaining the nanostructures in a fast and simple one-step process, despite the limited SPR tunability. If we compare the experimental data with those obtained from the simulation of disks and hexagons (70 nm × 20 nm), we observe that the simulated data showed a higher plasmon sensitivity. This was probably due to the higher SPR wavelength in the simulated spectra, which is expected to exhibit a higher plasmon sensitivity, in agreement with our previous work [36]. Conclusions We demonstrated the fast and facile synthesis of Ag NPTs in a single step by combining laser ablation in liquids with light irradiation in the presence of H2O2. We obtained flat NPTs of mixed shapes with sizes between 60 and 80 nm and thicknesses of 10-20 nm. The SPR resonance of these nanostructures fell around 500 nm. While at this stage the process seems to be efficient only in a narrow range of parameters, with little SPR tuning capability, we foresee that exploring a wider range of conditions, such as the laser ablation parameters or the chemical composition of the solution, might provide some degree of SPR tunability. 
Changing the refractive index of the surrounding medium yielded a plasmon sensitivity of 216 nm/RIU. Supplementary Materials: The following are available online at www.mdpi.com/xxx/s1. Figure S1: Emission spectrum of the LED used for irradiation during the laser ablation process; Figure S2: Absorption spectrum of a solution obtained when light irradiation was omitted from the described process; Figure S3: Absorption spectrum of a solution obtained when citrate was omitted from the described process; Figure S4: Absorption spectra taken at different times during the described process. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: All data presented in this paper are available upon request from the corresponding author.
A Rydberg atom-based amplitude-modulated receiver using the dual-tone microwave field : We propose a Rydberg atom-based receiver for amplitude-modulation (AM) reception utilizing a dual-tone microwave field. The pseudo-random binary sequence (PRBS) signal is encoded in the basic microwave (B-MW) field at a frequency of 14.23 GHz. The signal can be decoded by the atomic receiver itself, but more clearly with the introduction of an auxiliary microwave (A-MW) field. The receiver's amplitude variations corresponding to the microwave field are simulated by solving the density matrices, giving this mechanism theoretical support. An appropriate AM frequency is obtained by optimizing the signal-to-noise ratio, guaranteeing both a large data transfer capacity (DTC) and high fidelity of the receiver. The power of the two MW fields, along with the B-MW field frequency, is studied to acquire a larger DTC and a wider operating bandwidth. Finally, the readout of PRBS signals is performed with both the proposed and the conventional mechanisms, and the comparison proves the clear increase in DTC with the proposed scheme. This proof-of-principle demonstration exhibits the potential of the dual-tone scheme and offers a novel pathway for Rydberg atom-based microwave communication, which is beneficial for long-distance communication and weak-signal perception outside the laboratory. In particular, optical readout methods based on electromagnetically induced transparency (EIT) and Autler-Townes splitting (AT splitting) schemes give Rydberg atom-based receivers unique communication advantages. Firstly, the Rydberg atom-based receiver reads modulated signals in real time and does not require additional demodulation devices. The introduction of the atomic superheterodyne mechanism makes it possible to read out PM signals directly, on top of the original AM and FM self-demodulation capabilities [8,10,11]. Secondly, the frequency intervals between Rydberg levels, covering the megahertz to terahertz bands, allow small-sized Rydberg receivers to overcome the Chu limit of conventional dipole antennas, avoiding electromagnetic interference and miniaturizing the whole device [5,6]. Thirdly, the atomic species, Rydberg levels, and spatial positions provide various channels for multiplexing in one atomic vapor cell [22][23][24][25][26]. Therefore, multiple channels are ready for transferring different information and signals. Finally, the Rydberg atom-based receiver's robust dynamic range, from the nV/cm to the V/cm scale, has been confirmed; this refers to the strength ratio between the maximum and minimum received signals without distortion [27,28]. 
The Rydberg atom-based receiver's data transfer capacity (DTC) is one of the key parameters determining the achievable communication rate for a channel. Various mechanisms have been proposed to increase the DTC of Rydberg atom-based receivers. A channel capacity near the photon-shot-noise limit has been realized by the phase-sensitive conversion of AM-encoded microwave signals into optical signals [1]. Spatially distributed probe light beams have been implemented as an array of atom-optical receivers, improving the DTC by increasing the number of channels [24]. On the other hand, new mechanisms have also been explored to extend the parallel operating bandwidth of Rydberg atom-based receivers. Signals in the MHz band have been demodulated by a three-photon excitation scheme with an off-resonant heterodyne method [29]. The Rydberg alternating-current (AC) Stark mechanism enables digital communication with operating carrier frequencies continuously from 0.1 GHz to 5.0 GHz [30,31]. Multiple resonant response profiles of a Rydberg atomic receiver have been utilized to receive microwaves with frequencies ranging from 1.7 GHz to 116 GHz [32]. A deep-learning algorithm has been introduced to encode and decode frequency-division-multiplexed signals [33]. However, the performance of Rydberg atom-based receivers still calls for improvement. In this work, a Rydberg atom-based AM receiver utilizing a dual-tone microwave field is demonstrated. The basic microwave (B-MW) field at a frequency of 14.23 GHz carries a pseudo-random binary sequence (PRBS) signal. The atomic receiver has self-demodulation capability, which is enhanced by the introduction of an auxiliary microwave (A-MW) field. The performance of the receiver is simulated under different microwave fields to obtain the optimal operating conditions. An appropriate and applicable AM frequency, which ensures both a large DTC and high fidelity of the receiver, is obtained. The power of the two microwave fields, together with the B-MW field frequency, is investigated to acquire an appropriate DTC and operating bandwidth. Finally, the transferred PRBS signals with the proposed and conventional mechanisms are compared to verify the increase in DTC. This work opens a new avenue for Rydberg atom-based microwave communication, bringing it one step closer to practical application. Figure 1(b) shows the schematic diagram of the experimental setup. The probe beam, with a wavelength of 780 nm from an external-cavity diode laser (DL pro, Toptica), is first split into two beams by the combination of a half-wave plate (HWP1) and a polarization beam splitter (PBS1). One beam is used to lock the probe laser frequency to the 5S_1/2 (F = 3) - 5P_3/2 (F = 4) transition by the saturation absorption spectroscopy (SAS) method. The other beam is employed to obtain the EIT spectroscopy and is injected into the center of the rubidium vapor cell. The coupling laser, with a 480 nm wavelength, comes from a frequency-doubled amplified diode laser (DLC TA-SHG pro, Toptica) and is split into two beams by the combination of HWP3 and PBS3. The reflected 480 nm beam is used to lock the frequency of the coupling laser by EIT spectroscopy, and the transmitted beam counter-propagates and overlaps with the probe beam in the cell. Two broadband bi-convex spherical lenses with a focal length of 150 mm are utilized to focus the beams and obtain higher energy densities in the interaction region. The powers of the probe and coupling lasers are 50 μW and 120 mW, with beam diameters of 80 μm and 200 μm in the rubidium vapor cell, 
respectively. The corresponding Rabi frequencies are 29.7 × 2π MHz and 13.5 × 2π MHz. The cell has a length of 100 mm and a diameter of 25 mm. The Rb vapor cell is at room temperature (∼298 K), and the Rydberg EIT spectra are only slightly affected by temperature fluctuations in the experimental environment. Additional absorption caused by increased temperature, or by metal devices for thermal regulation, would cause distortions and reading errors of the microwave fields. Meanwhile, receivers operating at room temperature are easy to integrate and adapt to practical needs. After passing through the atomic vapor, the probe laser is filtered by a dichroic mirror (DM), collected by a photodiode detector (PD), and recorded simultaneously by a spectrum analyzer (EXA signal analyzer, Keysight) and an oscilloscope (RTO2004, Rohde & Schwarz). The basic vector signal generator (B-SG) and the auxiliary vector signal generator (A-SG) (SMB100A, Rohde & Schwarz) provide the B-MW and A-MW fields, respectively. A kilohertz PRBS signal, supplied by an arbitrary function generator (AFG) (3022C, Tektronix), is applied to the B-MW field by amplitude modulation. These two microwave fields then irradiate the vapor cell through two horn antennas (B-Horn and A-Horn) at a distance of 50 cm, satisfying the far-field condition. The propagation direction of the microwave fields is perpendicular to both the probe and coupling beams. The probe and coupling beams and the microwave fields keep co-aligned linear polarizations. Unaligned polarization among the laser and microwave fields would lead to an optical pumping effect and give magnetic sublevels with |m_j| > 1/2 a certain probability of population. The co-aligned, linear polarization of the laser and microwave fields also eliminates the residual EIT peaks during the Autler-Townes splitting process. The waveform of the decoded PRBS signals is obtained directly on the oscilloscope by reading the output signal of the PD in real time. Results and discussion Applying the rotating-wave approximation, the Hamiltonian of the five-level ladder system is given by [34][35][36]

H = (ħ/2) [ 0, Ω_p, 0, 0, 0; Ω_p, −2Δ_p, Ω_c, 0, 0; 0, Ω_c, −2(Δ_p + Δ_c), Ω_B-MW, 0; 0, 0, Ω_B-MW, −2(Δ_p + Δ_c + Δ_B-MW), Ω_A-MW; 0, 0, 0, Ω_A-MW, −2(Δ_p + Δ_c + Δ_B-MW + Δ_A-MW) ],

where Ω_p, Ω_c, Ω_B-MW, and Ω_A-MW are the Rabi frequencies of the probe, coupling, B-MW, and A-MW fields, respectively, and Δ_p, Δ_c, Δ_B-MW, and Δ_A-MW are their frequency detunings. 
Taking the Doppler effect into account, Δ_p and Δ_c are modified as Δ_p → Δ_p − 2πν/λ_p and Δ_c → Δ_c + 2πν/λ_c [34][35][36]. Here, ν is the velocity of the atoms, and λ_p and λ_c are the wavelengths of the corresponding fields. Considering spontaneous radiative evolution, the atomic system satisfies the Lindblad master equation [37,38]

∂ρ/∂t = −(i/ħ)[H, ρ] + L(ρ),

where L is the Lindblad operator containing the decay terms [34][35][36]. Here, γ_ij = (Γ_i + Γ_j)/2, where Γ_i is the spontaneous decay rate of the state |i⟩. The density matrix element ρ_ii represents the atomic population in state |i⟩, while ρ_ij stands for the coherence between states |i⟩ and |j⟩. The amplitude variation is deduced from the imaginary part of the atomic susceptibility χ with respect to the probe laser, which can be written as χ = N|μ_21|² ρ_21/(ε₀ħΩ_p). Here, N stands for the atomic number density, ε₀ is the permittivity of free space, ħ is the reduced Planck constant, and μ_21 and ρ_21 are the dipole moment and density matrix element between the 5S_1/2 and 5P_3/2 states, respectively. In the steady state, the evolution of ρ_21 can be extracted from the atomic density matrix. The vapor cell is kept at room temperature during the whole experiment, so the atomic number density N stays constant, and the prefactor N|μ_21|²/(ε₀ħΩ_p) is set to 1 in the simulations for simplicity. The simulation of χ can be used to describe the atomic-coherence-induced EIT spectra and the external-field-induced AT splitting spectra. The typical EIT (dark blue line) and AT splitting (black line) spectra are shown first in Fig. 1(a). The amplitude at the resonant frequency (dashed line) is the major parameter, labeled A. The EIT effect occurs when only the probe and coupling fields interact with the atoms, and A is marked as level "0" in this case. The AT splitting effect appears when a constant B-MW field is introduced, and A is reduced to level "1". In particular, A moves back and forth between "0" and "1" when information is AM-encoded in the B-MW field, and the modulated waveform can be reproduced by the optical readout method after a simple NOT-gate operation. The variation in A corresponds directly to the output of the AM receiver. The further introduction of the A-MW field brings a more pronounced amplitude variation, which is directly reflected in an increased SNR during information transmission. A quantitative simulation of the amplitude variation is discussed below. The red dots and line indicate the dual-tone-microwave-assisted receiver, while the blue dots and line stand for the single-tone case in Fig. 2(b) and (c). A quantified simulation result is presented in Fig. 2(b). The amplitude variation in the single-tone microwave field is theoretically simulated under the condition that Ω_p = 0.79Γ₂ and Ω_c = 1.4Γ₂. The amplitude variations of the two receivers decrease as Ω_B-MW increases, but the receiver with a dual-tone microwave field shows a superior amplitude variation. The best enhancement occurs at Ω_B-MW = 4.5Γ₂, which provides access to the performance improvement of the Rydberg atom-based AM receiver. The dependence of the amplitude variation on the A-MW field is shown in Fig. 2(c). The amplitude variation increases with the microwave strength in both cases, while the effect is more pronounced with the introduction of the A-MW field. Note that the amplitude variation of the atomic receiver with the dual-tone microwave field increases rapidly and approaches saturation as Ω_A-MW → 0.1Γ₂. 
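As a concrete illustration of this model, the Python sketch below (not the authors' code) builds the five-level ladder Hamiltonian and simplified one-step-down decay channels in QuTiP and computes the steady-state coherence ρ_21, whose imaginary part gives the EIT/AT line shape. Doppler averaging is omitted, and all decay rates apart from the 5P_3/2 linewidth are illustrative placeholders.

import numpy as np
from qutip import basis, steadystate, Qobj

G2 = 2 * np.pi * 6.07e6                          # 5P3/2 linewidth of Rb (rad/s), Gamma_2
gam = [0.0, G2,                                   # assumed decay rates of the five states;
       2 * np.pi * 10e3, 2 * np.pi * 10e3, 2 * np.pi * 10e3]  # Rydberg values are placeholders

Om = [0.79 * G2, 1.4 * G2, 4.5 * G2, 0.1 * G2]    # Omega_p, Omega_c, Omega_B-MW, Omega_A-MW
D_c = D_B = D_A = 0.0                             # coupling and both MW fields on resonance

def rho21(D_p):
    """Steady-state coherence between |5S> and |5P> at probe detuning D_p."""
    cum = np.cumsum([D_p, D_c, D_B, D_A])         # cumulative detunings down the ladder
    H = np.zeros((5, 5))
    for i in range(4):                            # Omega/2 couplings on the off-diagonal
        H[i, i + 1] = H[i + 1, i] = Om[i] / 2
    for i in range(1, 5):                         # rotating-frame energies on the diagonal
        H[i, i] = -cum[i - 1]
    # Simplified dissipation: each excited state decays one step down the ladder.
    c_ops = [np.sqrt(gam[i]) * basis(5, i - 1) * basis(5, i).dag() for i in range(1, 5)]
    return steadystate(Qobj(H), c_ops)[1, 0]

D_ps = np.linspace(-10 * G2, 10 * G2, 401)
absorption = [np.imag(rho21(D)) for D in D_ps]    # proportional to Im(chi): EIT/AT spectrum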
After elaborating the mechanism theoretically, the performances of receivers with single-tone and dual-tone microwave fields are tested and compared experimentally. Figure 3(a) shows the SNR of the received signals for modulation frequencies f_AM from 10 kHz to 1000 kHz in the dual-tone (red dots) and single-tone (blue dots) cases. P_B-MW is −10 dBm and P_A-MW is −15 dBm during the experiment. The SNR is obtained as the ratio of the signal strength at the corresponding f_AM to the noise when no signal is loaded. The SNR tends to decrease from 10 kHz to 1000 kHz because the atoms hardly respond once 1/f_AM is shorter than the dynamic time needed to establish a steady EIT. The dynamic time is influenced by collisional relaxation, spontaneous emission, transit-time broadening, etc. [39][40][41]. The data show that the instantaneous bandwidth of the dual-tone-microwave-assisted Rydberg atom-based receiver is around 220 kHz, larger than in the single-tone case. The introduction of the A-MW field significantly increases the SNR over the entire range. In particular, the maximum SNR of the readout signal increases by 3 dB with the involvement of the A-MW field when f_AM is 10 kHz. A large data rate is outside the responsive range of the receiver, which is limited by the so-called instantaneous bandwidth. The abundant energy levels of Rydberg atoms and the parallel use of multiple atomic species may offer solutions to this problem. The DTC of the readout signal is defined through the Shannon-Hartley theorem as C = f_AM log₂(1 + S/N) [1,42]. Here, S is the measured signal in volts, and N is the voltage noise spectral density. Figure 3(b) shows the variation of the DTC versus f_AM in the dual-tone (red dots) and single-tone (blue dots) microwave field cases. The DTC mainly depends on f_AM and increases significantly as the value ranges from 0.1 kHz to 100 kHz. In contrast, the SNR becomes the major constraint from 100 kHz to 1000 kHz, and the DTC decreases rapidly. The calculated DTC increases by 6.22 dB compared with the single-tone microwave case. Although the maximum DTC of the Rydberg atom-based receiver is obtained there, the SNR and fidelity of the readout signal fall to lower values. For low modulation frequencies, the data capacity depends linearly on the modulation frequency. For faster modulations, the DTC reaches a maximum before decreasing due to the declining SNR caused by the finite atom-switching time. Therefore, the DTC has a maximum as f_AM is varied, which is determined by the instantaneous bandwidth of the system and occurs at 100 kHz. 
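The Shannon-Hartley estimate above reduces to a one-line calculation; the Python sketch below (with illustrative numbers, assuming the SNR enters as a power ratio) shows the order of magnitude involved.

import numpy as np

def dtc_bits_per_s(f_am_hz, snr_db):
    """Data transfer capacity C = B * log2(1 + SNR), with the bandwidth B ~ f_AM."""
    return f_am_hz * np.log2(1 + 10 ** (snr_db / 10))

# A 10 kHz modulation with ~18 dB SNR gives ~6.0e4 bit/s, the same order as the
# ~62 kbit/s maximum DTC reported for this receiver.
print(dtc_bits_per_s(10e3, 18.0))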
Figure 4 shows the dependence of the DTC on P_B-MW, P_A-MW, and the B-MW field frequency for the Rydberg atom-based receiver. The red and blue dots represent the measurement results in the dual-tone and single-tone cases. The DTC is extracted in Fig. 4(a) as P_B-MW increases from −20 dBm to 25 dBm, with P_A-MW set to −15 dBm. The DTC first increases and then gradually saturates in both cases. The point at −2 dBm is where the relative strength of the two cases reverses. The receiver with a dual-tone microwave field has a larger DTC when P_B-MW is in the range from −20 dBm to −2 dBm. The underlying reason is that the B-MW field continues to increase the atomic population of the 54P_3/2 state in both cases, leading to an increase in SNR and hence in DTC. The saturation trend occurs owing to population inversion between the 54P_3/2 and 54S_1/2 states, and the DTC no longer increases with P_B-MW after reaching the threshold. The DTC inversion emerges when P_B-MW is above −2 dBm. The major reason is that the introduction of the 54S_1/2 state increases the DTC but lowers the saturation threshold [36]. A basic-microwave-field strength larger than −2 dBm puts more atomic population in the 53D_5/2 state, and the effect of the auxiliary microwave field saturates. However, this strength range is not a suitable microwave power range for communication [16]; it also corresponds to a relatively large Rabi frequency, which leads to a Stark frequency shift that changes the resonant frequency and is unfavorable for EIT peak detection and communication. Figure 4(b) shows the variation of the DTC with P_A-MW increasing from −30 dBm to 15 dBm while P_B-MW is kept at −10 dBm. A noticeable increase is observed for P_A-MW ranging from −30 dBm to −20 dBm, and a saturation tendency emerges when P_A-MW is above −20 dBm. The number of Rydberg atoms gradually reaches the limit of atomic population reversal between the 54P_3/2 and 54S_1/2 states in Fig. 1(a). The B-MW field frequency determines the bandwidth of the Rydberg atom-based receiver, and its influence on the DTC is illustrated in Fig. 4(c). The DTC of the readout signal exhibits a maximum at the resonant frequency of the B-MW field and gradually decreases as the frequency is tuned to either blue or red detunings. The corresponding transmitted and readout signals at different B-MW field frequencies are shown in Insets 1 and 2. Both the near-resonant and off-resonant cases show a higher SNR in the dual-tone microwave field case. Note that the signal amplitude is more distinct, and the upper and lower edges are clearer, in the dual-tone case. The near-resonant B-MW field in Inset 1 exhibits higher signal recovery and fidelity than the detuned case in Inset 2. The PRBS signal received by the Rydberg atom-based receiver is compared with the original signal to evaluate the fidelity. The PRBS signal is produced at random and generated by an arbitrary waveform generator (AWG, Tektronix). The signal emulates the different modulation frequencies found in practical applications, so the width of the transmitted pulses varies with time. The consistency between the transmitted and received signals is maintained, which demonstrates the self-demodulation capability of atomic receivers and the potential for high-fidelity information transmission. The f_AM is kept at 10 kHz, P_B-MW is −10 dBm, and P_A-MW is −15 dBm. The input (black) and readout signals in the dual-tone (red) and single-tone (blue) microwave field cases are recorded as shown in Fig. 
5. The recovery rate R_rec and signal fidelity F_sig, defined in terms of the frequencies f_T and f_D and the amplitudes S_T and S_D of the transmitted and decoded signals, respectively [43,44], are 99.5% and 92.8% for the readout signal in the dual-tone microwave field case, compared with 98.9% and 80.9% in the single-tone case. As a result, the enhanced features guarantee a greater DTC and improved fidelity for the dual-tone-microwave-assisted Rydberg atom-based receiver. Conclusion We present a Rydberg atom-based AM receiver utilizing a dual-tone microwave field in a 85Rb atomic ensemble. The PRBS signal is encoded on the basic microwave field at a frequency of 14.23 GHz and then decoded by the Rydberg atom-based AM receiver. The introduction of the auxiliary microwave field improves the SNR of the received signals. The amplitude variations at the resonant frequency of the EIT spectrum for different microwave field configurations are investigated to provide theoretical support. The achievable DTC of the proposed Rydberg atom-based AM receiver is about 62 kbit/s, and a 220 kHz instantaneous bandwidth is achieved. Finally, the DTC of the dual-tone microwave receiver is 6.22 dB larger than that of the conventional single-tone microwave receiver. The dual-tone method improves the Rydberg atom-based receiver's response to microwave field variations, which is appropriate not only for the AM scheme but also for FM and PM schemes. This work takes a step closer to practical applications by opening up a new avenue for microwave communications based on Rydberg atoms. Figure 1 (a) The relevant energy-level diagram of 85Rb. The atoms are excited from the ground state 5S_1/2 (F = 3) to the Rydberg state 53D_5/2 by a two-photon transition. Figure 3 (a) The SNR of received signals and (b) the DTC of the Rydberg atom-based receiver versus the modulation frequency f_AM for the dual-tone (red dots) and single-tone (blue dots) microwave field cases. The error bars represent the standard deviation of three measurements. Figure 4 The DTC of the Rydberg atom-based receiver versus (a) the B-MW field power P_B-MW, (b) the A-MW field power P_A-MW, and (c) the B-MW field frequency. The error bars represent the standard deviation of three measurements. The transmitted and readout signals with and without the A-MW field are shown for Δ_B-MW = 0 MHz in Inset 1 and Δ_B-MW = +80 MHz in Inset 2 of panel (a). The Rydberg atom-based receiver's maximum DTC is about 62 kbit/s at P_B-MW = −10 dBm and P_A-MW = −15 dBm. Figure 5 The transmitted (black) signal and the received signals in the dual-tone (red) and single-tone (blue) microwave field cases.
Updated measurement of decay-time-dependent CP asymmetries in D0 → K+K− and D0 → π+π− decays : A search for decay-time-dependent charge-parity (CP) asymmetry in D0 → K+K− and D0 → π+π− decays is performed at the LHCb experiment using proton-proton collision data recorded at a center-of-mass energy of 13 TeV and corresponding to an integrated luminosity of 5.4 fb⁻¹. The D0 mesons are required to originate from semileptonic decays of b hadrons, such that the charge of the muon identifies the flavor of the neutral D meson at production. The asymmetries in the effective decay widths of D0 and D̄0 mesons are determined to be A_Γ(K+K−) = (−4.3 ± 3.6 ± 0.5) × 10⁻⁴ and A_Γ(π+π−) = (2.2 ± 7.0 ± 0.8) × 10⁻⁴, where the uncertainties are statistical and systematic, respectively. The results are consistent with CP symmetry and, when combined with previous LHCb results, yield A_Γ(K+K−) = (−4.4 ± 2.3 ± 0.6) × 10⁻⁴ and A_Γ(π+π−) = (2.5 ± 4.3 ± 0.7) × 10⁻⁴. Published in Phys. Rev. D 101 (2020) 012005. Introduction Charge-parity (CP) violation is one of the key ingredients needed to generate the asymmetry between matter and antimatter observed in the Universe [1]. The Standard Model (SM) of particle physics, where all known CP-violating processes arise from the irreducible phase of the Cabibbo-Kobayashi-Maskawa matrix [2,3], is, however, unable to explain the observed asymmetry [4,5]. New dynamics that lead to a significant enhancement of CP-violating processes are required, making searches for CP violation a powerful probe for physics beyond the SM. Although CP violation has been experimentally observed in the down-type quark sector with measurements of K and B mesons [6][7][8][9][10], no indication of new dynamics has been reported yet. Only recently has CP violation been observed in the decay of charmed mesons [11]. The limited precision of the SM predictions, together with the limited amount of experimental information available [12], is, however, not yet sufficient to establish whether the observed signal can be explained by the SM [13][14][15][16][17][18]. Additional searches for CP violation in the charm sector, and particularly for more suppressed and yet-to-be-observed signs of CP-violating effects induced by D0-D̄0 mixing, have unique potential to probe for the existence of beyond-the-SM dynamics that couple preferentially to up-type quarks [19][20][21][22][23][24]. This paper reports a search for CP violation in D0-D̄0 mixing, or in the interference between mixing and decay, through the measurement of the asymmetry between the effective decay widths, Γ̂, of mesons initially produced as D0 and D̄0 and decaying into the CP-even final states f = K+K−, π+π−:

A_Γ(f) ≡ [Γ̂(D0 → f) − Γ̂(D̄0 → f)] / [Γ̂(D0 → f) + Γ̂(D̄0 → f)]. (1)

Several measurements of the parameter A_Γ(f) have been performed by the BaBar [25], CDF [26], Belle [27], and LHCb [28][29][30] Collaborations, leading to the current world-average value of (−3.2 ± 2.6) × 10⁻⁴ [12], when neglecting differences between the D0 → K+K− and D0 → π+π− decays.¹ The achieved sensitivity is still 1 order of magnitude larger than the theoretical predictions of A_Γ ≈ 3 × 10⁻⁵ [31]. This paper updates the LHCb measurements of Refs. [28][29][30] using the data sample of proton-proton collisions collected at a center-of-mass energy of 13 TeV during 2016-2018, corresponding to an integrated luminosity of 5.4 fb⁻¹. 
The analysis is performed using D0 mesons originating from semileptonic decays of b hadrons, where the b-hadron candidates are only partially reconstructed. The charge of the muon identifies ("tags") the flavor of the D0 meson at its production. The samples are dominated by semileptonic b-hadron decays into D0μ−ν̄X final states, where X denotes any set of final-state particles that are not reconstructed. The paper is structured as follows: the analysis strategy is described in Sec. 2; the LHCb detector is sketched in Sec. 3; Sec. 4 details the criteria used to select the signal and control samples; Sec. 5 describes the fit method and its validation using D0 → K−π+ decays; the determination of the systematic uncertainties is outlined in Sec. 6, before concluding with the presentation of the final results in Sec. 7. Analysis strategy Due to the weak interaction, the mass eigenstates of neutral charm mesons, D1 and D2, are superpositions of the flavor eigenstates, D0 and D̄0: |D_1,2⟩ ≡ p|D0⟩ ± q|D̄0⟩, where q and p are complex coefficients satisfying |p|² + |q|² = 1. Hence, a meson originally produced as a D0 can oscillate as a function of time into a D̄0 meson, and vice versa, before decaying. In the limit of CP symmetry, q equals p and the oscillations are characterized by only two dimensionless parameters, x ≡ (m1 − m2)c²/(ħΓ) and y ≡ (Γ1 − Γ2)/(2Γ), where m1(2) and Γ1(2) are the mass and decay width of the CP-even (-odd) eigenstate D1(2), respectively, and Γ ≡ (Γ1 + Γ2)/2 is the average decay width [32]. The values of x and y have been measured to be of the order of 1% or smaller [12]. In the presence of CP violation, the mixing rates of mesons produced as D0 and D̄0 differ, further enriching the phenomenology. As an example, indicating with A_f (Ā_f) the decay amplitude of a D0 (D̄0) meson into the final state f, three different manifestations of CP violation can be measured: (i) CP violation in the decay if the direct CP asymmetry, A_CP^dir(f) ≡ (|A_f|² − |Ā_f|²)/(|A_f|² + |Ā_f|²), differs from zero; (ii) CP violation in mixing if |q/p| differs from unity; and (iii) CP violation in the interference between mixing and decay if φ_f ≡ arg[(qĀ_f)/(pA_f)] differs from zero. The latter two can be accessed by measuring the decay-time-dependent CP asymmetry

A_CP(f, t) ≡ [Γ(D0 → f, t) − Γ(D̄0 → f, t)] / [Γ(D0 → f, t) + Γ(D̄0 → f, t)]. (2)

In the limit of small mixing parameters, Eq. (2) can be approximated as a linear function of decay time [33,34],

A_CP(f, t) ≈ A_CP^dir(f) − A_Γ(f) t/τ, (3)

where τ = 1/Γ is the average lifetime of neutral D mesons. The coefficient A_Γ(f) is related to the mixing and CP-violation parameters x, y, |q/p|, and φ_f, plus a small contribution proportional to yA_CP^dir(f) [35]. Contrary to the measurement reported in Ref. [11], which is sensitive to A_CP^dir(f), A_Γ(f) is mostly sensitive to CP violation in mixing or in the interference between mixing and decay, because the term yA_CP^dir(f) ≲ 10⁻⁵ [12] can be neglected at the current level of experimental precision. Moreover, neglecting the O(10⁻³) difference between the weak phases of the decay amplitudes to the CP-even final states K+K− and π+π−, φ_f ≈ φ ≡ arg(q/p) becomes universal and A_Γ independent of f [22]. Experimentally, the partial-rate asymmetry of Eq. (2) cannot be measured directly because of charge-asymmetric detection efficiencies and asymmetric production rates of D0 and D̄0 mesons from semileptonic b-hadron decays in proton-proton collisions. Instead, the "raw" asymmetry between the D0 and D̄0 meson yields,

A_raw(t) ≡ [N(D0 → f, t) − N(D̄0 → f, t)] / [N(D0 → f, t) + N(D̄0 → f, t)],

is measured as a function of decay time. 
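To make Eq. (3) concrete, the toy Python sketch below (not the LHCb analysis) simulates a time-binned raw asymmetry with a constant nuisance offset and an injected slope, and recovers A_Γ from a linear fit; all numbers are placeholders.

import numpy as np

rng = np.random.default_rng(1)
t_over_tau = np.linspace(0.5, 8.0, 20)   # decay time in units of the D0 lifetime
A_gamma_in = -4e-4                        # injected CP-violation parameter
offset = 0.01                             # constant nuisance asymmetries A_D + A_P
sigma = 2e-3                              # per-bin statistical uncertainty

A_raw = offset - A_gamma_in * t_over_tau + rng.normal(0.0, sigma, t_over_tau.size)

slope, intercept = np.polyfit(t_over_tau, A_raw, 1)  # A_raw ~ intercept - A_Gamma * t/tau
print(f"A_Gamma = {-slope:.2e} (injected {A_gamma_in:.1e}), offset = {intercept:.4f}")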
Neglecting higher-order terms in the involved asymmetries, which are at most O(1%), the raw asymmetry can be approximated as

    A_{\rm raw}(t) \approx A_{CP}(f, t) + A_D(\mu) + A_P(D),   (6)

where A_D(μ) and A_P(D) are the nuisance asymmetries due to the detection efficiency of the tagging muon and to the production rates of the neutral D mesons, respectively. The parameter AΓ corresponds to the slope of the decay-time-dependent raw asymmetry only if A_D and A_P are independent of decay time. In this analysis, a possible time dependence of A_D and A_P is considered as a source of systematic uncertainty. The analysis procedure is validated on data using a control sample of Cabibbo-favored D0 → K−π+ decays, whose size exceeds that of the D0 → K+K− and D0 → π+π− signal modes by approximately 1 order of magnitude, and where the measured asymmetries can be attributed solely to instrumental effects because no CP violation is expected. To avoid potential experimenter's bias, the measured values of AΓ(K+K−) and AΓ(π+π−) remained unknown during the development of the analysis and were examined only after the analysis procedure and the evaluation of the systematic uncertainties were finalized.

Detector

The LHCb detector [36,37] is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes placed downstream of the magnet. The tracking system provides a measurement of the momentum, p, of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV/c. The minimum distance of a track to a primary vertex (PV), the impact parameter, is measured with a resolution of (15 + 29/pT) µm, where pT is the component of the momentum transverse to the beam, in GeV/c. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors. Photons, electrons, and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The magnetic-field polarity is reversed periodically during data taking to mitigate the differences of reconstruction efficiencies of particles with opposite charges. The on-line event selection is performed by a trigger, which consists of a hardware stage followed by a two-level software stage. In between the two software stages, an alignment and calibration of the detector is performed in near real time [38]. The same alignment and calibration information is propagated to the off-line reconstruction, ensuring consistent and high-quality particle-identification information between the trigger and off-line software. The identical performance of the on-line and off-line reconstruction offers the opportunity to perform physics analyses directly using candidates reconstructed in the trigger [39,40], which the present analysis exploits.

Selection

The selection criteria are mainly inherited from the measurement of the difference between the decay-time-integrated CP asymmetries in D0 → K+K− and D0 → π+π− decays [11], which uses the same sample of proton-proton collisions.
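The role of the time-independent nuisance terms in Eq. (6) can be illustrated with a small toy fit (invented numbers, not the paper's): constant A_D(µ) and A_P(D) shift only the intercept of the raw asymmetry, while the slope still recovers AΓ.

```python
import numpy as np

rng = np.random.default_rng(1)
a_gamma = -4.3e-4                  # toy input, in the ballpark of the measurement
a_det_mu, a_prod_d = -0.01, 0.005  # assumed time-independent nuisance asymmetries

t = np.linspace(0.25, 9.75, 20)    # bin centres in units of the D0 lifetime tau
a_raw = a_det_mu + a_prod_d - a_gamma * t + rng.normal(0.0, 2e-4, t.size)

slope, intercept = np.polyfit(t, a_raw, 1)
print(f"fitted A_Gamma = {-slope:.2e}, A_raw(0) = {intercept:.4f}")
# The nuisance asymmetries are absorbed into A_raw(0); the slope is unaffected.
```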
Signal candidates are first required to pass the hardware trigger, which selects events containing at least one charged particle with high transverse momentum that leaves a track in the muon system. At the first stage of the software trigger, events are selected if they contain at least one track having large transverse momentum and being incompatible with originating from any PV, or if any two-track combination forming a secondary vertex passes a multivariate classifier. If a particle is identified as a muon, a lower pT threshold is applied. At the second stage of the software trigger, the full event reconstruction is performed, and kinematic, topological, and particle-identification requirements are placed on the signal candidates. A D0 candidate is formed by combining two well-reconstructed, oppositely charged tracks consistent with originating from a common vertex. The D0 candidate must satisfy requirements on the vertex quality and has to be well separated from all PVs in the event. In the next step, the D0 candidate is combined with a muon to form a B candidate. Only candidates where the D0 meson decays downstream along the beam axis with respect to the B candidate are considered further. The B candidate must have a visible mass, m(D0µ), and a corrected mass, m_corr(B), consistent with a signal decay. The corrected mass is computed as

    m_{\rm corr}(B) = \sqrt{m(D^0\mu)^2 + p_\perp^2/c^2} + p_\perp/c,

where p⊥ is the momentum of the D0µ system transverse to the B flight direction, to partially correct for the unreconstructed particles in the decay of the B hadron. In the off-line selection, trigger signals are associated with reconstructed particles. Particle-identification criteria and requirements on m(D0µ) and m_corr(B) are tightened with respect to the on-line selection. The mass of the D0 candidate is required to be in the ranges [1825, 1925] MeV/c², [1820, 1939] MeV/c², and [1780, 1940] MeV/c² for D0 → K+K−, D0 → π+π−, and D0 → K−π+ decays, respectively, to reduce the amount of background decays with misidentified final-state particles to a negligible level. The reconstructed decay time is computed from the distance, L, between the measured D0 and B decay vertices and from the D0 momentum, p(D0), as t = m_{D0} L/[p(D0)c], where m_{D0} is the known D0 mass [32]. All D0 candidates with a reconstructed decay time that is either negative or exceeds 10 times the D0 lifetime are discarded. Mass vetoes suppress background from misreconstructed B decays to final states involving a charmonium resonance, such as B → J/ψ(→ µ+µ−)X, where a muon is misidentified as a pion or kaon and is used in the D0 final state. Tag muons reconstructed in regions of phase space with large instrumental asymmetries, due to muons of one charge either being bent out of the detector acceptance or deflected into the LHC beam pipe, are vetoed. The fraction of signal candidates removed by this requirement is 10%. In addition, for D0 → K−π+ decays, candidates with kaon pT < 800 MeV/c are removed to reduce the instrumental asymmetry between the detection of negatively and positively charged kaons. Since these requirements do not reduce the background to a sufficiently low level for D0 → K+K− and D0 → π+π− decays, a dedicated boosted decision tree (BDT) is trained to isolate the signal candidates from background made of accidental combinations of charged particles ("combinatorial background").
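The corrected-mass definition above is compact enough to sketch directly; a minimal version, assuming units where masses and momenta are both expressed in GeV (c = 1) and using invented example values:

```python
import numpy as np

def corrected_mass(m_vis, p_perp):
    """Visible D0-mu mass corrected with the momentum component transverse to
    the B flight direction, to compensate for unreconstructed particles."""
    return np.sqrt(m_vis**2 + p_perp**2) + p_perp

# e.g. a 3.5 GeV visible mass with 1.0 GeV transverse momentum (illustrative)
print(corrected_mass(3.5, 1.0))   # ~4.64 GeV, to be compared with m(B) ~ 5.28 GeV
```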
The variables used in the BDT to discriminate signal from combinatorial background are the fit quality of the D0 and B decay vertices, the D0 flight distance, the D0 impact parameter with respect to the closest PV, the transverse momenta of the D0 decay products, the significance of the distance between the D0 and B decay vertices, and the visible and corrected masses of the B-hadron candidate. The BDT is trained using D0 → K−π+ decays as signal proxies and candidates from the D0 mass sidebands of the signal decay modes as background. The optimal requirement on the BDT discriminant is chosen by maximizing the figure of merit S/√(S + B) in a range corresponding to approximately 3 times the mass resolution around the D0 mass, where S and B denote the signal and background yields, respectively. If an event contains more than one candidate after the full selection, one is chosen at random. The fraction of candidates removed by this requirement is 0.4%. The mass distributions of the selected signal- and control-decay candidates are shown in Fig. 1. Details about the fit model are given in the next section. Approximately 9 × 10⁶, 3 × 10⁶, and 76 × 10⁶ signal D0 → K+K−, D0 → π+π−, and D0 → K−π+ decays, respectively, are reconstructed over a smooth background dominated by accidental combinations of charged particles.

Fit method

The samples of selected D0 → K+K−, D0 → π+π−, and D0 → K−π+ candidates are split into 20 approximately equally populated subsets ("bins") of decay time in the range [0, 10]τ.

[Figure 2: Raw asymmetry as a function of decay time with fit projection overlaid for D0 → K−π+ signal candidates.]

In each decay-time bin, the raw asymmetry A_raw is determined by a simultaneous binned χ² fit to the m(D0) distributions of the D0 and D̄0 candidates, split according to the muon tag. The total signal yields and asymmetries are treated as shared floating parameters of the fit. The fits include two components: signal and combinatorial background. The signal is described by the sum of a Gaussian and a Johnson S_U distribution [41], with parameters determined from a fit to the decay-time-integrated mass spectra. To account for the observed dependence of the signal mass shape on decay time, the means and widths of the signal distributions are left free to float individually in each decay-time bin. The mass shape is assumed to be the same for D0 and D̄0 candidates in the charge-symmetric final states of the signal modes, and is allowed to differ for D0 → K−π+ and D̄0 → K+π− candidates. The combinatorial background is described by a linear function, with a slope that floats independently in each decay-time bin and is allowed to differ between D0 and D̄0 candidates. The raw asymmetries measured in the decay-time bins are then fit, by minimizing the least squares, with the linear function A_raw(0) − AΓ t_i/τ. The decay-time-independent terms of Eqs. (3) and (6) are incorporated into a single parameter, A_raw(0), which is determined by the fit together with AΓ. The average decay time in each bin i, t_i, is computed using the decay-time distribution of background-subtracted D0 candidates; statistically consistent values are found for the control and signal modes. The D0 lifetime τ is set to its known value [32]. Using large samples of simulated experiments, it is verified that the analysis procedure leads to unbiased estimates of the fit parameters and of their uncertainties.
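The last step, a least-squares fit of the per-bin raw asymmetries with A_raw(0) − AΓ t_i/τ, is a two-parameter weighted linear regression. A minimal sketch (t in units of τ; inputs invented):

```python
import numpy as np

def fit_linear_asymmetry(t, a, sigma):
    """Weighted least-squares fit of a_i = A_raw(0) - A_Gamma * t_i (t in units
    of tau); returns the parameter estimates and their uncertainties."""
    w = 1.0 / sigma**2
    design = np.column_stack([np.ones_like(t), -t])   # columns: A_raw(0), A_Gamma
    cov = np.linalg.inv(design.T @ (w[:, None] * design))
    params = cov @ design.T @ (w * a)
    return params, np.sqrt(np.diag(cov))

t = np.linspace(0.25, 9.75, 20)
params, errors = fit_linear_asymmetry(t, 0.005 - 4.3e-4 * t, np.full(t.size, 2e-4))
print(params, errors)   # recovers A_raw(0) = 0.005 and A_Gamma = 4.3e-4
```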
Figure 2 shows the projection of the decay-time-dependent fit to the D0 → K−π+ control sample. Here AΓ is measured to be (1.6 ± 1.2) × 10^−4, where the uncertainty is statistical only. The measured value is consistent with zero, as expected, confirming the validity of the assumption of decay-time-independent nuisance asymmetries. In D0 → K−π+ decays, owing to their charge-asymmetric final state, detection asymmetries are more pronounced than in the signal modes, where these asymmetries are caused only by the muons used to tag the flavor of the D0 mesons.

Systematic uncertainties

The systematic uncertainty is dominated by the following contributions: the impact of decay-time acceptance and resolution, the effect of neglected background from combinations of real D0 candidates with unrelated muons (which might lead to a wrong identification of the neutral-D-meson flavor), and the impact of the assumed parametrization of the signal and background mass shapes. These effects are studied using large samples of pseudoexperiments in which the above sources of systematic bias are simulated. The average decay-time resolution is estimated to be 127 fs using simulated decays. In the generation of the pseudoexperiments, the resolution is increased by 10% to account for differences between data and simulation. The decay-time acceptance is estimated from data by comparing the background-subtracted decay-time distribution of D0 → K−π+ candidates with an exponential function convolved with the decay-time resolution. Different sets of pseudoexperiments, simulating the effect of decay-time acceptance and resolution, are generated with values of AΓ in the range [−30, 30] × 10^−4. Each pseudoexperiment is then fit with the default analysis approach, and the difference between the measured and input values of AΓ is used to determine the systematic bias. As the bias is found to depend linearly on the true value of AΓ, the largest bias observed within the 68% confidence-level interval of the current world average [12] is taken as the systematic uncertainty. This amounts to 0. The probability of wrongly associating unrelated muons with the D0 candidates is estimated using the yields of "wrong-sign" D0(→ K−π+)µ+ and D̄0(→ K+π−)µ− candidates in data, which are corrected for the rate of doubly Cabibbo-suppressed decays and decays due to flavor oscillation using the measurements reported in Ref. [42]. Mistag probabilities between 1% at low decay times and 3% at high decay times are observed. Also in this case, the bias observed in pseudoexperiments depends linearly on the true value of AΓ. Following the same strategy as discussed above, a systematic uncertainty of 0.3 × 10^−4 (0.6 × 10^−4) is assigned for D0 → K+K− (D0 → π+π−) decays. To estimate any potential bias due to the specific choice of the mass model used in the fits that determine the raw asymmetries, samples of pseudoexperiments are generated using alternative signal and background models that describe the data equally well. The observed bias is independent of the input AΓ and results in an additional systematic uncertainty of 0.3 × 10^−4 for both signal decay channels. Uncertainties on t_i/τ arising from relative misalignments of subdetectors and from the uncertainty on the input value of the D0 lifetime [32] give negligible contributions.
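The "largest bias over the world-average interval" prescription is simple to express; in this sketch the linear bias coefficients are invented placeholders standing in for the pseudoexperiment results:

```python
# Pseudoexperiments give a bias that is linear in the true value: bias(A) = c0 + c1*A.
c0, c1 = 1.0e-5, 0.02                 # placeholder coefficients (not the paper's)

world_avg, sigma = -3.2e-4, 2.6e-4    # world average of A_Gamma and its uncertainty [12]
endpoints = (world_avg - sigma, world_avg + sigma)
syst = max(abs(c0 + c1 * a) for a in endpoints)   # a linear bias peaks at an endpoint
print(f"assigned systematic = {syst:.1e}")
```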
Furthermore, unexpected biases due to a possible decay-time dependence of the nuisance asymmetries and due to the selection procedure are investigated using the D0 → K−π+ control sample and/or by measuring AΓ in disjoint subsamples split by magnetic-field polarity, year of data taking, and kinematic variables of the B-hadron, D0-meson, and muon candidates. No unexpected variations are observed, and no additional systematic uncertainties are assigned. A summary of the relevant systematic uncertainties is given in Table 1. The total systematic uncertainty is obtained by summing the individual components in quadrature and amounts to 0.5 × 10^−4 and 0.8 × 10^−4 for AΓ(K+K−) and AΓ(π+π−), respectively.

Results and conclusions

A search for decay-time-dependent CP violation in D0 → K+K− and D0 → π+π− decays is performed using proton-proton collision data recorded with the LHCb detector at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 5.4 fb−1. The D0 mesons are required to originate from semileptonic b-hadron decays, such that the charge of the muon identifies the flavor of the neutral D meson at the moment of its production. The parameter AΓ is determined from a fit to the asymmetry between the D0 and D̄0 yields as a function of decay time. The projections of the fits for both the D0 → K+K− and D0 → π+π− samples are shown in Fig. 3. The results are

    AΓ(K+K−) = (−4.3 ± 3.6 ± 0.5) × 10^−4,
    AΓ(π+π−) = (2.2 ± 7.0 ± 0.8) × 10^−4,

where the uncertainties are statistical and systematic, respectively. When combined with previous LHCb results [28-30], they yield AΓ(K+K−) = (−4.4 ± 2.3 ± 0.6) × 10^−4 and AΓ(π+π−) = (2.5 ± 4.3 ± 0.7) × 10^−4. Assuming AΓ to be universal, the above two results can be averaged to yield AΓ = (−2.9 ± 2.0 ± 0.6) × 10^−4. The results do not show any indication of CP violation in charm mixing or in the interference between mixing and decay.
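For illustration, a naive inverse-variance average of the two combined results reproduces the quoted universal value. The sketch below weights by the statistical uncertainties and propagates the systematic components with the same weights, ignoring any correlations that the published combination may treat more carefully:

```python
import numpy as np

vals = np.array([-4.4e-4, 2.5e-4])   # combined A_Gamma(K+K-), A_Gamma(pi+pi-)
stat = np.array([2.3e-4, 4.3e-4])
syst = np.array([0.6e-4, 0.7e-4])

w = 1.0 / stat**2
avg = np.sum(w * vals) / np.sum(w)
stat_avg = np.sum(w) ** -0.5
syst_avg = np.sum(w * syst) / np.sum(w)   # crude linear propagation
print(f"A_Gamma = ({avg*1e4:.1f} +/- {stat_avg*1e4:.1f} +/- {syst_avg*1e4:.1f}) x 10^-4")
# prints (-2.9 +/- 2.0 +/- 0.6) x 10^-4, matching the quoted average
```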
The Impact of Endometriosis on Embryo Quality in in-vitro Fertilization/Intracytoplasmic Sperm Injection: A Systematic Review and Meta-Analysis

Background: The association between endometriosis and embryological outcomes remains uncertain. This meta-analysis aimed to evaluate the impact of endometriosis on embryo quality. Methods: A systematic review and meta-analysis was conducted to investigate the association between endometriosis and embryo quality. Searches were performed on three electronic databases: PubMed, EMBASE, and Web of Science. The detailed characteristics and data of the included studies were extracted. Risk ratios (RR) with 95% confidence intervals (CI) were calculated using random- and fixed-effects models. The main outcome measures were high-quality embryo rate, cleavage rate, and embryo formation rate. Results: A total of 22 included studies were analyzed. Compared with the control group, women with endometriosis had a similar high-quality embryo rate (RR = 1.00; 95% CI, 0.94-1.06), a comparable cleavage rate (RR = 1.00; 95% CI, 0.97-1.02), and a similar embryo formation rate (RR = 1.10; 95% CI, 0.97-1.24). In women with stage III-IV endometriosis, there was no statistically significant difference in high-quality embryo rate (RR = 1.02; 95% CI, 0.94-1.10), cleavage rate (RR = 1.00; 95% CI, 0.98-1.02), or embryo formation rate (RR = 1.05; 95% CI, 0.97-1.14) compared with those without endometriosis. For women with unilateral endometrioma, pooling of results from the affected ovaries did not show a statistically significant difference in high-quality embryo rate (RR = 0.99; 95% CI, 0.60-1.63) in comparison to the normal contralateral ovaries. Conclusions: Our results seem to indicate that endometriosis does not compromise embryo quality from the perspective of morphology.

Keywords: endometriosis, embryo quality, morphological evaluation, IVF, ICSI

INTRODUCTION

Endometriosis, wherein endometrial tissue including glands and stroma is present outside the uterine cavity, affects ∼10% of women of reproductive age and 40% of women with infertility (1,2). Studies have shown that endometriosis has an adverse effect on fertility in reproductive-age women. The corresponding mechanisms mainly include the reduction of functional ovarian tissue resulting from endometriomas or surgery, inflammatory changes in peritoneal fluid, reduced endometrial receptivity, and alterations in the number and quality of oocytes or embryos. However, the explicit causes are still poorly understood (3). Assisted reproductive technology (ART) is an effective approach for endometriosis-associated infertility (4).
Many research papers pertaining to the consequences of endometriosis for ART outcomes have been published; nevertheless, the results remain controversial (5-7). More specifically, the association between endometriosis and embryological outcomes also appears disputed (8-10). This aspect is essential considering that efforts are made to select high-quality embryos for transfer in embryological laboratories; in particular, elective single-embryo transfer (eSET) has been increasingly advocated to reduce the risk of multiple gestations and improve pregnancy outcomes (11). It is of significance to further elucidate this aspect, since findings so far are conflicting and, to the best of our knowledge, no meta-analysis specifically focusing on the association between endometriosis and embryological outcomes exists. The aims of our systematic review and meta-analysis are to investigate the association between endometriosis and embryological outcomes from the morphological perspective and to further review whether the severity of endometriosis or unilateral endometrioma has a negative effect on embryo formation and development.

Search Strategy and Selection Criteria

PubMed, EMBASE, and Web of Science were searched by two independent reviewers using the keywords and/or medical subject heading (MeSH) terminology: endometriosis, endometrioma, ART, in-vitro fertilization, intracytoplasmic sperm injection, and embryo. The final search was performed in August 2020. The inclusion criteria were as follows: (1) cohort studies (retrospective or prospective); (2) women who underwent in-vitro fertilization/intracytoplasmic sperm injection (IVF/ICSI); (3) a study group consisting of women with endometriosis diagnosed by laparoscopy, histology, ultrasound, or magnetic resonance imaging (MRI); (4) women with or without prior treatment (surgery or medicine) for endometriosis; (5) a control group of women without endometriosis, including those with tubal infertility, male factor infertility, unexplained infertility, or mixed-etiology infertility; (6) embryos at the cleavage stage assessed morphologically. The exclusion criteria included: (1) non-English papers; (2) studies without a control group; (3) literature such as conference abstracts or other personal communications; (4) women with diseases such as polycystic ovary syndrome (PCOS) and premature ovarian failure, which may cause damage to embryos; (5) women treated with donor or recipient oocytes.

Data Extraction and Quality Assessment

The primary outcome was the high-quality embryo rate. The secondary outcomes were the cleavage rate and the embryo formation rate. After an initial screen of all titles and abstracts retrieved from the electronic searches, the full texts of all potentially eligible studies were obtained. Two reviewers separately scrutinized these articles to select the papers that qualified for the aforementioned inclusion criteria. Disagreements were resolved through discussion with a third reviewer. Two reviewers independently extracted the outcome data and study characteristics using a specifically designed form. These data were examined repeatedly by both investigators, and discrepancies were resolved by discussion until consensus. The assessment of study quality was implemented by two reviewers based on the Newcastle-Ottawa Quality Assessment Scale for observational studies.
The scale involves eight items categorized in three domains: selection, comparability, and outcome. Each item can be awarded a maximum of one star, except comparability, which can be given up to two stars. The results are presented as a number of stars ranging from one to nine. We performed analyses in studies where embryological outcomes in women with endometriosis or stage III-IV endometriosis, classified according to the rAFS/ASRM (revised classification of the American Fertility Society/revised American Society for Reproductive Medicine classification of endometriosis), were compared with those of women without endometriosis. Additionally, we compared embryological outcomes between the affected and intact ovaries in women with unilateral endometrioma. The systematic review and meta-analysis are reported in accordance with the Meta-analysis of Observational Studies in Epidemiology (MOOSE) statement.

Statistical Analysis

The statistical analysis was performed using Review Manager version 5.4. Relevant data were extracted from the original papers or, when not presented directly, calculated from the matching raw data provided. For dichotomous variables, results for each indicator were expressed as risk ratios (RR) with 95% confidence intervals (CIs). Funnel plots were constructed to assess publication bias. Sensitivity analyses were performed by removing one included study at a time and recalculating the combined effect size, to evaluate the effect of every study on the pooled effect size. The I² statistic was used to quantify statistical heterogeneity. A fixed-effects model was used when I² < 50%, while the random-effects model of DerSimonian and Laird was applied if I² ≥ 50%. A random-effects model implies that the effects in the included studies are not identical but follow similar distributions.

Study Selection and Characteristics

The search strategy retrieved 1,293 citations from PubMed, 4,711 from EMBASE, and 4,089 from Web of Science. Duplicate records were removed after importing all results into EndNote, leaving 7,454 articles. With regard to duplicate publications, only the most recent and complete versions were chosen. A total of 7,383 records were excluded following an initial screening of titles and abstracts. After reviewing the full texts of the remaining 71 studies, 49 articles were excluded as conference abstracts (n = 23), full texts not available (n = 8), data not extractable (n = 16), or expert opinion (n = 2). Therefore, 22 publications were eligible for the selection criteria and included in the review (Figure 1). With regard to study design, all included studies were observational cohort studies (six prospective, 14 retrospective). Of these, the study groups comprised endometriosis (n = 17), stage III-IV endometriosis (n = 12), and unilateral endometrioma (n = 3). The diagnosis of endometriosis was based on laparoscopy (n = 13), histology (n = 2), or ultrasound (n = 8). In 22 of the included studies, the staging of endometriosis was performed on the basis of the rAFS/ASRM. Studies evaluating embryo quality in women with endometriosis or stage III-IV endometriosis included various control groups: tubal infertility (n = 9), male factor infertility (n = 6), unexplained infertility (n = 1), and mixed-etiology infertility (n = 5). The control groups were drawn from the same community or hospital as the study groups.
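The pooling machinery described above (inverse-variance fixed effect, DerSimonian-Laird random effects, and the I² statistic) is compact enough to sketch in code. The following is a minimal illustrative reimplementation in Python, not the Review Manager 5.4 computation actually used; the study inputs are invented for demonstration.

```python
import numpy as np

def pool_log_rr(log_rr, se):
    """Fixed-effect and DerSimonian-Laird random-effects pooling of log risk
    ratios; returns the pooled RRs and the I^2 heterogeneity statistic (%)."""
    w = 1.0 / se**2
    fixed = np.sum(w * log_rr) / np.sum(w)
    q = np.sum(w * (log_rr - fixed) ** 2)          # Cochran's Q
    df = len(log_rr) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                    # random-effects weights
    random = np.sum(w_re * log_rr) / np.sum(w_re)
    return np.exp(fixed), np.exp(random), i2

# Three hypothetical studies with risk ratios near 1 and their standard errors:
log_rr = np.log([1.02, 0.97, 1.05])
se = np.array([0.03, 0.05, 0.04])
print(pool_log_rr(log_rr, se))
```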
The detailed characteristics of the included studies are presented in Supplementary Table 1.

Quality Assessment of Included Studies

The majority of the included studies (n = 12) were awarded eight stars, two studies were awarded nine stars, six studies were awarded seven stars, and only two studies scored six stars. The Newcastle-Ottawa Quality Assessment Scale results are shown in Supplementary Table 2.

Main Findings

We found no statistically significant differences in high-quality embryo rate, cleavage rate, or embryo formation rate in women with endometriosis compared with those without endometriosis. Moreover, the aforementioned indicators were comparable between women with severe endometriosis (stage III-IV) and those without endometriosis. In addition, results from the affected and intact ovaries were similar.

Strengths and Limitations

To our knowledge, no previous systematic review and meta-analysis concerning the association between endometriosis and embryo quality is as large-scale, up to date, and comprehensive. A prior meta-analysis by Yang et al. reported a lower number of total embryos formed and a similar number of good-quality embryos between women with endometrioma and a control group. They also made comparisons between affected ovaries and normal contralateral ovaries; no difference was shown in the number of total embryos formed (12). However, those results need to be interpreted with caution, because only two studies were included and the two observed indicators were not among those recommended by the Vienna consensus (13). There are also some limitations to be noted in this review. First, we only included published English-language studies; non-English studies as well as conference abstracts were excluded, which may result in selection bias. Second, some issues remain in both comparison groups. The concordance among control groups is not satisfactory because the causes of infertility in the control groups vary between tubal and male factors, and several studies simply included non-endometriosis women without limiting inclusion to specific etiologies. It is also worth noting that these etiologies may have an adverse influence on embryo quality, individually or collectively (14). The same is true of the endometriosis groups, in which whether treatment was received, and which therapeutic modality was performed, are not well controlled. Suzuki et al. reported that their data indicated the negative effect of endometriosis could be mitigated by laparoscopic treatment (15). Finally, no consensus on embryo morphological assessment has so far been adopted worldwide, although embryologists are dedicated to selecting embryos for transfer based on morphological features. This is a major disadvantage because differences in the criteria for evaluating embryo quality may compromise the homogeneity of the included studies. Nevertheless, the majority of existing embryo grading systems mainly take into consideration the following indexes: the number and symmetry of blastomeres, the relative degree of fragmentation, and the presence or absence of multinucleation (16), which, to some extent, could reduce the between-study heterogeneity.

Interpretation and Implication

Currently, it is extremely difficult to draw a definite conclusion on the association between endometriosis and embryo quality, as results remain controversial.
A number of studies have suggested that endometriosis has a detrimental impact on embryo quality [17-20], while some studies are unable to demonstrate a relationship between the two (9,10,15,21).

[Figure 4: Forest plot of high-quality embryo rate, cleavage rate, and embryo formation rate for stage III-IV endometriosis vs. control.]

Lin et al. showed a lower high-quality embryo rate in an endometriosis group of 177 women compared with a control group comprising the remaining 4,267 women with any factor other than endometriosis, by collecting information from the electronic records of their hospital between January 2006 and December 2010 (19). Further studies observed elevated levels of inflammatory cytokines in both follicular and peritoneal fluid, such as interleukin-6 (IL-6), IL-8, and tumor necrosis factors (22,23). The alteration of the follicular and peritoneal microenvironment may not be conducive to the development and maturation of oocytes and, subsequently, may potentially affect embryo development (24). Our results do not seem to support an adverse effect of endometriosis on embryo quality. The following reasons could account for our findings. A successful pregnancy requires not only high-quality embryos but also a receptive endometrium. Emerging evidence indicates that inflammation plays a vital role in the pathogenic mechanisms of endometriosis (25). Endometriosis induces a series of local and systemic inflammatory responses, and these disordered inflammatory cytokines subsequently interfere with normal endometrial function through complex signaling pathways, eventually leading to a less receptive endometrium for embryo implantation (26). Another reason, namely the limitations of conventional embryo morphological assessment, is also of great importance. The common indicators of embryo morphological evaluation may not reflect well the intrinsic changes of embryos retrieved from women with endometriosis; that is, the effect of endometriosis on embryos may not present as altered morphology (27). Consequently, it is possible that grading embryos in light of morphological features is imprecise (28). Remarkably, in this review, a higher yet not statistically significant high-quality embryo rate was observed in the endometriosis group than in the control group according to Veeck's criteria. An embryo scoring scheme that considers various factors, not limited to morphological features, is required.

CONCLUSIONS

Our results indicate that endometriosis does not compromise embryo quality from the perspective of morphology. Universal criteria and terminology for grading embryos are required to reduce the heterogeneity between studies, thus making comparisons of clinical data in this field more statistically powerful. More high-quality, well-designed research with large sample sizes and strictly selected populations needs to be undertaken to elucidate the association between endometriosis (especially its subtypes and stages) and embryo quality.

DATA AVAILABILITY STATEMENT

All datasets generated for this study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.

AUTHOR CONTRIBUTIONS

HD, JS, and LY conceived and designed the study. XJ performed the literature search. XM extracted the data. HD analyzed the data and wrote the manuscript. LY and JS revised the manuscript and supervised the study. All authors read and approved the final manuscript.
Mathematical model of temperature field distribution in thin plates during polishing with a free abrasive

The purpose of this paper is to estimate the dynamic characteristics of the heating process of thin plates during polishing with a free abrasive. A mathematical model of the temperature field distribution in space and time across the plate thickness is based on the Lagrange equation of the second kind in the thermodynamics of irreversible processes (the Biot variational principle). The research results on the thermoelasticity of thin plates (membranes) will make it possible to correct the modes of polishing with a free abrasive so as to obtain accurate reflecting surfaces of satellite reflectors and to increase temperature stability, radio-signal reflection ability, and satellite pointing precision. Calculations of the temperature fields in thin plates (membranes) of different thicknesses are performed in Excel; the graphical characteristics of the temperature fields show the non-linearity of the temperature distribution across the thickness of the thin plates (membranes).

Introduction

In the aerospace industry, circular membranes are used as reflectors of optoelectronic devices for the orientation and celestial navigation of spacecraft and satellites. The reflector must have sufficient accuracy of the reflecting surface, which affects the pointing accuracy, temperature stability, and radio-reflection ability. Modern technology and equipment used in the polishing of thin plates (membranes) make it possible to achieve a surface-machining precision commensurate with fractions of a light wavelength. Membranes are processed by the domestic polishing-lapping machine 3PD320 and also by OptoTech machines of foreign production, using abrasive pastes as a free abrasive [1]. But the polishing of thin steel plates (membranes) is accompanied by considerable friction, an uneven distribution of heat flow in the plate material, fluctuations of the technological system elements, the occurrence of thermoelastic effects, a substantial depth of the damaged layer, and other phenomena that affect the surface quality of thin plates (membranes) [2]. As a result, about 50% of the treated plates do not meet the required parameters of surface quality: roughness parameters, accuracy of the geometrical shape of the surface, deviation from flatness, etc. A frequent production defect is plate warping. The manufacturer incurs significant material and time costs, because the polishing operation takes approximately 30 minutes and is accompanied by a significant consumption of abrasive material. The goal of the theoretical studies is the analysis of the thermal processes occurring in the polishing of thin plates (membranes) with a free abrasive.
A mathematical model of the temperature field in thin plates (membranes)

To assess the dynamic characteristics of the heating process in a plate of thickness δ, it is necessary to determine the speed of thermal-wave propagation. For this, the Biot variational principle of the thermodynamics of irreversible processes takes the form of a Lagrange equation of the second kind [3]:

    \frac{\partial U}{\partial D_k} + \frac{\partial \Phi}{\partial \dot{D}_k} = F_k,   (1)

where U is the heat capacity (Biot's thermal potential), an analogue of the potential function in mechanics; Φ is the dissipation potential, an analogue of the dissipative function in mechanics; F_k is the generalized thermal force, the analogue of an external force in mechanics; and D_k is the generalized coordinate. The distribution of the temperature field in space and in time across the plate thickness is approximated by a curve of the second order (2) [3]. The heat capacity (3) is defined through c, the specific heat capacity, and ρ, the density of the plate material. The thermal displacement H is represented by the ratio (4) [3]; expression (4), together with (2), takes the form (5). To calculate the dissipation potential [3], the derivative of the thermal displacement is computed (6), where λ₁ is the coefficient of heat conductivity of the plate material. Expression (5), after differentiation and integration, takes the form (7). Substituting expression (7) into (6) and differentiating, we finally obtain (8). The generalized thermal force F is represented as a ratio of variations (9) or, taking expression (5) into account, (10). The heat-capacity expression (3) after integration takes the form (12). Substituting the obtained expressions (8), (10), and (12) into the Lagrange equation of the second kind (1), we arrive at (13). After integration and transformation of expression (13) we finally obtain the relaxation time in the form

    t = 3.36\, \frac{c \rho \delta^2}{\lambda_1},   (14)

where t is the settling time of the process temperature.

Figure 1 presents the results of the research: the graphical characteristics of the temperature field in plates of various thicknesses. The calculated time to loss of plate stability according to form (14), i.e., the time after which warping of the plate begins (relaxation to the steady process temperature), is presented in Figure 2.

Conclusions

According to the results of the theoretical research, we can draw the following conclusions: 1. The temperature field varies nonlinearly across the thickness of a thin plate, i.e., the heat flux in the plate is distributed unevenly, causing the appearance of thermoelastic effects and, hence, loss of stability and warping. 2. The relaxation time of the heating process of a thin plate depends on the plate thickness, has a nonlinear character, and increases with increasing plate thickness. 3. To eliminate the warping of thin plates, it is necessary to adjust the technological process of polishing thin plates, including the polishing modes, so as to reduce the operating temperatures during polishing.

[Fig. 1. Distribution of the temperature field in time across the plate thickness 0 ≤ z ≤ δ; T₁(τ) is the distribution of the temperature field in time; τ is the time of the non-stationary process.]
[Fig. 2. The relaxation time of the plate heating process.]
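Assuming the reconstructed form (14), the quadratic growth of the relaxation time with plate thickness is easy to tabulate. The material parameters below are generic steel values chosen for illustration, not data from the article:

```python
# Relaxation time t = 3.36 * c * rho * delta^2 / lambda1 (reconstructed form (14))
c = 460.0       # specific heat capacity, J/(kg K)  (assumed steel value)
rho = 7800.0    # density, kg/m^3                   (assumed steel value)
lam = 45.0      # thermal conductivity, W/(m K)     (assumed steel value)

for delta in (0.5e-3, 1.0e-3, 2.0e-3):   # plate thickness, m
    t = 3.36 * c * rho * delta**2 / lam
    print(f"delta = {delta*1e3:.1f} mm -> relaxation time ~ {t:.3f} s")
# Doubling the thickness quadruples the relaxation time, matching conclusion 2.
```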
Continued Fractions and Conformal Mappings for Domains with Angle Points

Here we construct conformal mappings with the help of continued-fraction approximations. We first show that the method of [19] works for conformal mappings of the unit disk onto domains with acute external angles at the boundary, and we give certain illustrative examples of these constructions. Next we outline the problem with domains whose boundary possesses acute internal angles. Then we construct the method of rational root approximation in the right complex half-plane. First we construct the square-root approximation and consider the approximative properties of the mapping sequence in Theorem 1. Then we turn to the general case, namely, the continued-fraction approximation of the rational root function in the complex right half-plane. These approximations converge to the algebraic root functions z^{1/N}, N ∈ ℕ, for z ∈ ℂ with Re z > 0; this is proved in Theorem 2 of the article. Thus we prove the convergence of this method and construct approximate conformal mappings of the unit disk onto domains with angles and onto thin domains. We estimate the convergence rate of the approximation sequences. Note that the closer the point is to zero or infinity, and the lower the ratio k/N, the worse the approximation. We also give examples that illustrate the conformal mapping construction.

Introduction

This article extends and develops paper [19]. There we presented the reparametrization method for the conformal mapping of the unit disk onto a given simply connected domain with a smooth boundary. This method is based on the reduction of a Fredholm integral equation to a sufficiently large linear equation system and on the reparametrization of the boundary curve. The solution possesses a polynomial form that can be easily analyzed. The method can be considered one of the rapidly converging methods according to the classification of [14]. The computational cost is actually similar to Theodorsen's method or the Fornberg method [10]. Let us compare the reparametrization method of [19] with the other conformal mapping methods. We do not consider an auxiliary mapping of the unit disk into a subdomain of the given domain D, as in the set of osculation methods [2]. The method of [19] does not require a sufficiently good initial approximation of the conformal mapping, as do graphical methods such as that of [11]. The method does not apply any auxiliary constructions in the domain interior (domain triangulation [8], circle packing [15], domain decompositions such as the meshes of [24]). We do not need iterative conformal mappings as in the zipper algorithm or the Schwarz-Christoffel mapping [7,13]. We construct our polynomial solution differently from the Fornberg polynomial method [9], which involves consecutive approximations through a suitable choice of points on the domain boundary. Also, we do not apply solutions of auxiliary boundary value problems (the conjugate function method, the Wegmann method [22,23]). Finally, the advantages of the method presented in [19] are the following: 1) it is devoid of auxiliary constructions; 2) it leads to the mapping function in polynomial form. The mapping function is a Taylor polynomial for the unit disk, or a Laurent polynomial for the annulus in the case of multiply connected domains [1], [18]. Let us recall the basic construction steps of the reparametrization method [19].
The necessary condition for the function ln z′ to be analytic in D is formulated just as in [16], and the unit disk is mapped onto the domain bounded by the given smooth boundary z(t) with the help of the Cauchy integral formula. So we construct an approximate polynomial conformal mapping. A similar method was also applied to the construction of the conformal mapping of an annulus onto an arbitrary multiply connected domain with a smooth boundary in [1,17]. Note that we can reconstruct q′(t) instead of q(t) in the case of a smooth boundary [19], so this method can also be considered one of the methods using derivatives [14]. The drawback of the reparametrization method is that it does not cover conformal mappings of the unit disk onto domains with non-smooth boundaries, for instance, a domain with an angle at the boundary point t₀. In order to overcome this difficulty, we apply an additional conformal mapping which "straightens" the boundary curve at the corresponding point. Then we apply the reparametrization method to the new domain with a smooth boundary and again apply the conformal mapping that "bends" the boundary back to the initial one. Our aim is to represent this final mapping as a polynomial fraction. In this article we apply this modification of the conformal mapping construction of [19] both to domains with boundary angles and to slender regions, presenting the mapping as a polynomial fraction. We first show that the method of [19] is applicable to domains with acute external angles. Then we present the polynomial-fraction construction for an internal angle equal to π/2 and conformally map the unit disk onto a domain with such an angle. After that we construct the polynomial fractions for more general angles. Finally we show that this approach is valid for the conformal mapping of the unit disk onto a slender region.

The case of an internal angle greater than π

The method of [19] allows us to solve the conformal mapping construction problem for any contour whose boundary curve forms internal angles greater than π. This can be illustrated by the classical estimates of Fourier coefficients (Chapter 2, formula (7.1), of the corresponding reference): in this case, the approximation error can be made arbitrarily small for a sufficiently large n. That is, we have convergence of the Fourier series at the angle point, regardless of the angle. This allows us to apply the conformal mapping construction method of [19]. Example 1. Consider the piecewise circular contour (two semicircles and one circle quarter) with an external angle of π/2 (Fig. 1). First we approximate the boundary with a Fourier polynomial of degree 10. Then we construct the approximating polynomial of degree 50. A similar example for a doubly connected domain with a rectangular inner boundary can be found in [17].

The construction scheme for the case of an internal angle less than π

It is computationally difficult to apply the conformal mapping construction of [19] to a domain whose boundary forms an acute internal angle: the mapping polynomial converges slowly, and the resulting conformal image of the angle point does not look like an angle at all (it resembles a bubble). For a curve whose behavior at an angle point is similar to (4), estimates of the corresponding Fourier coefficients show that the method from [19] is difficult to apply, since even the Fourier series poorly approximates a curve with such an angle point. Let the domain boundary form such an angle [5]. The most thorough and refined method here is the Padé rational-function approximation of the algebraic function [3,4].
Note that these approximations are optimal in the set of polynomial fractions, though their construction requires the application of the Euclidean algorithm and an additional investigation of the domain of holomorphy D. The main result here is that the recursively constructed relations converge to the continued fraction approximating the rational root function. The constructed sequence is clearly not a Padé one, but the construction itself is fairly simple, does not suffer from non-uniqueness, and provides convergence to the root function in the complex right half-plane. Similar results can be found in [5]. The author is also confident that this result can be proved along the lines of [12]; again, the proof should apply induction, and we need to consider the roots of the polynomials instead of the mapping itself. Note also that fractional polynomial mappings can be applied, for instance, to exact solutions of elasticity theory problems [20].

The square root approximation

First consider the basic problem of the polynomial-fraction representation of the square root. It is well known that

    \sqrt{z} = 1 + \frac{z - 1}{1 + \sqrt{z}}.

This gives rise to the following recursive procedure:

    s_{n+1}(z) = 1 + \frac{z - 1}{1 + s_n(z)}, \qquad s_0(z) = 1,

so that each s_n(z) is a rational function of z converging to \sqrt{z} for Re z > 0. The proof of Theorem 1 is by induction: the base case is immediate, and the induction step follows by applying the recursion to the estimate for the previous approximation. This completes the proof.

Assume now that we have a convex domain with acute internal angles, and we need to construct the conformal mapping of the unit disk onto this domain. The main construction steps are as follows: we first make the domain as round as possible with square mappings; if the resulting domain does not overlap itself, we then construct the approximating polynomial according to the method of [19]; finally, we construct the square-root approximations of the resulting image, inverse to the squarings of the first step. Example 3. Let us construct an approximate conformal map of the unit disk onto a contour with an internal angle of π/2. Here we take the 11th iteration of the square-root approximation and a polynomial of degree 50 for the initial domain (Fig. 3). Computing the third approximation and the relation we need shows that the more acute the angle and the closer z is to 0, the worse the convergence of the approximation. This completes the proof of the theorem. We now construct the following mappings exactly as in Example 3. The unfolded domain is approximated by a polynomial of degree 50; we then apply the 6th fraction iteration to fold the domain back to the angled one (Fig. 4). These examples show that the more acute the internal angle, the harder it is to approximate.

The case of thin domains

Consider the case of slender regions; a second problem for us is the case of relatively thin domains (e.g., an ellipse with two significantly different axes). Consider the behavior of the kernel of the integral equation of [19] for τ close to the point t of the largest possible curvature: the diagonal elements of the corresponding linear equation system matrix are close to the curvature κ(t) and are also large. Thus, the greater the curvature κ(t) of the curve at t, the worse the convergence of the polynomial solution. The authors of [6] numerically solve the singular integral equation in order to find conformal mappings from elliptic to slender regions. The method of recursive fractions is also applicable to the construction of a conformal mapping of a disk onto a thin domain. The main problem here is the so-called point-crowding phenomenon. We achieve similar results (domain side ratio 1/4) with our method as a natural application.
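The recursion is easy to experiment with numerically. Below is a minimal Python sketch of the iterates s_n(z), each of which is a rational (polynomial-fraction) function of z; the test point and iteration counts are illustrative choices:

```python
import numpy as np

def sqrt_approx(z, n):
    """n-th iterate of s_{k+1}(z) = 1 + (z - 1)/(1 + s_k(z)), s_0 = 1.
    The sequence converges to the principal square root for Re z > 0."""
    s = np.ones_like(np.asarray(z, dtype=complex))
    for _ in range(n):
        s = 1 + (z - 1) / (1 + s)
    return s

z = 0.3 + 0.4j                         # a point in the right half-plane
for n in (3, 6, 11):                   # iteration counts as in the examples
    print(n, abs(sqrt_approx(z, n) - np.sqrt(z)))   # errors decay geometrically
```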
We first make the domain less slender with the help of the square mapping (z − a)², where the point a lies outside the domain, close to its boundary point of maximal curvature. We cannot take this point on the boundary itself, since then we would obtain a domain that cannot be immediately embedded into the right half-plane in a neighbourhood of a. Secondly, we apply the approximate conformal mapping construction algorithm. Finally, we apply the square-root approximation in order to return to the domain with the given boundary. If a domain lies between two sides of a right angle, close to its vertex, then we consider the mapping of the disk onto the squared domain and the square-root approximation of the angle. Example 6. Consider the ellipse with semiaxes 1 and 1/4: x² + 16y² = 1. Let us construct an approximate conformal mapping of the unit disk onto this ellipse. The initial method of [19] provides the following result for a polynomial of degree 1200 (Fig. 6). Here we consider 20 square-root iterations and a polynomial of degree 1000 (Fig. 7). A similar picture under purely polynomial approximation, owing to the point-crowding phenomenon, is obtained only for a polynomial of degree 10⁴.

Conclusion

We first showed that the method of [19] works for conformal mappings of the unit disk onto domains with acute external angles at the boundary. Next we outlined the problem with domains whose boundary possesses acute internal angles. Then we constructed a method of rational root approximation in the right complex half-plane, proved the convergence of this method, and constructed approximate conformal mappings of the unit disk onto domains with angles and onto thin domains. All the constructions of the article are supported by examples. Our approach of applying continued fractions to conformal mapping constructions shows good convergence and may be applied, for example, to certain problems of mathematical physics, particularly to elasticity theory problems.
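As an illustration of the three-step scheme for thin domains, the sketch below squares the ellipse boundary about an assumed exterior point a and folds it back with the rational square-root iterates. The point a = 1.05 and the iteration count are illustrative choices, not values from the article:

```python
import numpy as np

def sqrt_approx(z, n):
    s = np.ones_like(np.asarray(z, dtype=complex))
    for _ in range(n):
        s = 1 + (z - 1) / (1 + s)   # rational square-root iterates, Re z > 0
    return s

a = 1.05                                            # assumed point outside the ellipse
theta = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
boundary = np.cos(theta) + 0.25j * np.sin(theta)    # ellipse x^2 + 16 y^2 = 1

unfolded = (boundary - a) ** 2                      # step 1: square mapping about a
# Step 3: fold back. On the ellipse Re(z - a) < 0, so the principal square
# root of (z - a)^2 equals -(z - a), and the boundary is recovered as:
refolded = a - sqrt_approx(unfolded, 20)
print(np.max(np.abs(refolded - boundary)))          # residual shrinks with more iterates
```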
High density lipoprotein-induced endothelial nitric-oxide synthase activation is mediated by Akt and MAP kinases.

High density lipoprotein (HDL) activates endothelial nitric-oxide synthase (eNOS), leading to increased production of the antiatherogenic molecule NO. A variety of stimuli regulate eNOS activity through signaling pathways involving Akt kinase and/or mitogen-activated protein (MAP) kinase. In the present study, we investigated the role of kinase cascades in HDL-induced eNOS stimulation in cultured endothelial cells and COS M6 cells transfected with eNOS and the HDL receptor, scavenger receptor B-I. HDL (10-50 µg/ml, 20 min) caused eNOS phosphorylation at Ser-1179, and dominant negative Akt inhibited both the HDL-mediated phosphorylation and activation of the enzyme. Phosphoinositide 3-kinase (PI3 kinase) inhibition or dominant negative PI3 kinase also blocked the phosphorylation and activation of eNOS by HDL. Studies with genistein and PP2 showed that the nonreceptor tyrosine kinase Src is an upstream stimulator of the PI3 kinase-Akt pathway in this paradigm. In addition, HDL activated MAP kinase through PI3 kinase, and mitogen-activated protein kinase/extracellular signal-regulated kinase kinase inhibition fully attenuated eNOS stimulation by HDL without affecting Akt or eNOS Ser-1179 phosphorylation. Conversely, dominant negative Akt did not alter HDL-induced MAP kinase activation. These results indicate that HDL stimulates eNOS through common upstream, Src-mediated signaling, which leads to parallel activation of Akt and MAP kinases and their resultant independent modulation of the enzyme.

The risk for cardiovascular disease from atherosclerosis is inversely proportional to serum levels of high density lipoprotein (HDL)¹ (1,2). HDL classically serves to remove cholesterol from peripheral tissues in a process known as reverse cholesterol transport. However, the mechanisms by which HDL is atheroprotective are complex and not fully understood, since circulating levels of HDL and the major HDL apolipoprotein, apolipoprotein A-I, do not regulate reverse cholesterol transport (3). We previously reported that HDL stimulates endothelial nitric-oxide synthase (eNOS) activity in endothelial cells (EC) through apolipoprotein A-I binding to scavenger receptor class B type I (SR-BI), the high-affinity HDL receptor (4). Similarly, HDL enhances endothelium- and NO-dependent relaxation in aortas from wild-type but not SR-BI knock-out mice. Recently, Li et al. (5) also reported that HDL binding to SR-BI activates eNOS. The HDL-induced increase in NO production may be critical to the atheroprotective features of HDL, since diminished bioavailability of endothelium-derived NO has a key role in the early pathogenesis of hypercholesterolemia-induced vascular disease and atherosclerosis (6-8). However, the mechanisms by which HDL activates eNOS are yet to be clarified. eNOS is one of three isoenzymes that convert L-arginine to L-citrulline plus NO. The activity of eNOS is regulated by complex signal transduction pathways that involve various phosphorylation events and protein-protein interactions. Many stimuli modulate eNOS activity by activating kinases that alter the phosphorylation of the enzyme (9-15). Akt kinase (also known as PKB) activates eNOS by directly phosphorylating the enzyme at Ser-1179 (16-19). Akt itself is phosphorylated and activated by phosphoinositide 3-kinase (PI3 kinase), which in turn is activated by a tyrosine kinase (TK).
Both receptor TKs and nonreceptor TKs are involved in PI3 kinase-Akt-mediated eNOS activation by various agonists (19-22). In contrast to Ser-1179, phosphorylation of Thr-497 of eNOS attenuates enzyme activity (12,14,15). eNOS is also modulated by MAP kinases (23,24), and unlike Akt, the effect of MAP kinases on eNOS activity can be either positive or negative (9, 25-27). The role of kinase cascades in signaling by HDL from SR-BI to eNOS is entirely unknown. To better understand the basis of HDL action in endothelium, the present investigation was designed to test the hypothesis that HDL activation of eNOS entails the phosphorylation of the enzyme. We also studied the potential roles of specific kinase cascades in HDL-mediated eNOS stimulation. Using pharmacological inhibition or dominant negative mutant forms of selected kinases in EC or COS M6 cells transfected with eNOS and the HDL receptor SR-BI, we investigated the involvement of tyrosine kinases, PI3 kinase, Akt, and MAP kinases in HDL-mediated eNOS activation. In addition to improving our specific understanding of eNOS modulation, the elucidation of the signaling cascade(s) coupling SR-BI to the enzyme provides important clues about multiple additional potential targets of HDL action in EC.

EXPERIMENTAL PROCEDURES

Cell Culture and Transfection-Primary ovine endothelial cells were propagated and maintained as described previously (28) in EGM-2 medium from BioWhittaker (Walkersville, MD). We have shown that SR-BI expression is conserved in these cells up to at least passage 7 (4). For confirmation purposes, selected experiments were also done in human aortic endothelial cells (purchased from BioWhittaker), propagated in EGM-2 and used at passages 4-6, and in bovine aortic endothelial cells, maintained in EGM-2 medium and studied at passages 5-8. COS M6 cells, which express a negligible amount of endogenous SR-BI and no eNOS, were maintained in Dulbecco's modified Eagle's medium (Invitrogen) with 10% fetal calf serum (Invitrogen). COS M6 cells were transfected with various cDNAs using LipofectAMINE 2000 (Invitrogen) according to the manufacturer's instructions. The transfected cells were used 48 h after transfection, and 70-80% transfection efficiency was typically achieved. cDNAs for wild-type eNOS, S1179A eNOS, AktAAA, AktMyr, and SrαΔ85 were prepared as described previously (20). A mutant of eNOS with Ser-1179 converted to alanine (S1179A eNOS) is unable to be phosphorylated at that site. The triple Akt mutant AktAAA (K179A, T308A, S473A) is enzymatically inactive (29,30), whereas the Akt mutant generated by fusing a myristoylation signal to its amino terminus (AktMyr) is membrane-bound and constitutively active (31,32). The mutant form of PI3 kinase lacking the domain in the p85 subunit that is required for interaction with the catalytic subunit (SrαΔ85) works as a dominant negative mutant (33).

Western Blot Analysis-The methods used for Western blot analysis generally followed those previously reported (34), using ECL reagents (Amersham Biosciences) for chemiluminescence. Primary antibodies were from the following sources: anti-eNOS monoclonal antibody was from BD Transduction Laboratories (San Diego, CA); anti-phospho-Ser-1179 eNOS polyclonal antibody, anti-Akt polyclonal antibody, and anti-phospho-Ser-473 polyclonal antibody were from Cell Signaling Technology (Beverly, MA); anti-phospho-Thr-495 eNOS polyclonal antibody and anti-ERK2 monoclonal antibody were from Upstate Biotechnology, Inc.
(Lake Placid, NY); anti-pMAPK polyclonal antibody was from Promega (Madison, WI); and anti-PI3 kinase p85 subunit antibody was from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). Horseradish peroxidase-conjugated anti-mouse and anti-rabbit secondary antibodies were from Santa Cruz Biotechnology. eNOS Activity Assay-NOS activation was assessed in whole cells by measuring [3H]L-arginine (45-70 Ci/mmol, 1 mCi/ml; PerkinElmer Life Sciences) conversion to [3H]L-citrulline using previously reported methods (28). In selected experiments, the cells were preincubated with kinase inhibitors (genistein, PP2, wortmannin, LY294002, or PD98059) (Calbiochem) for 30 min at 37 °C. After the addition of HDL (10-50 µg/ml) or vascular endothelial growth factor (VEGF, 100 ng/ml) (Calbiochem) and [3H]L-arginine, the cells were further incubated for the indicated times. All findings were confirmed in at least three independent studies. Statistical Analysis-Comparison between two groups was accomplished using Student's t test, and comparison between more than two groups employed Student's t tests with Bonferroni correction for multiple comparisons. Significance was accepted at the 0.05 level of probability.
RESULTS
HDL Stimulation of eNOS Requires Phosphorylation at Ser-1179-In order to determine whether HDL stimulation of eNOS requires the phosphorylation of eNOS at Ser-1179, primary ECs were incubated with HDL (10 or 50 µg/ml) for 20 min or with VEGF (100 ng/ml) for 5 min, serving as a positive control. The phosphorylation was detected by Western blotting using anti-phospho-specific eNOS antibody. As shown in Fig. 1a, eNOS phosphorylation was observed with 10 or 50 µg/ml HDL treatment as well as with 100 ng/ml VEGF treatment. Phosphorylation of eNOS by HDL (30 µg/ml) was observed as early as 5 min and reached a maximum at 10-20 min (data not shown). Comparable eNOS phosphorylation by HDL was seen in the ovine, human, and bovine EC (data not shown). We next determined whether the phosphorylation of eNOS at Ser-1179 is required for the activation of eNOS by HDL. Either wild-type eNOS or S1179A mutant eNOS was expressed in COS M6 cells along with SR-BI, and the phosphorylation and activation of eNOS were assessed in response to HDL (10 or 30 µg/ml). No phosphorylation of eNOS by HDL was detected in S1179A-transfected cells, whereas wild-type eNOS phosphorylation was apparent (Fig. 1b). In parallel, HDL did not stimulate eNOS activation in cells transfected with S1179A eNOS, whereas HDL stimulated eNOS activation in wild-type eNOS-expressing cells (Fig. 1c).
FIG. 1. HDL stimulates eNOS through Ser-1179 phosphorylation. a, endothelial cells were treated with HDL (10 or 50 µg/ml) for 20 min or VEGF (100 ng/ml) for 5 min at 37 °C. Cell lysates were analyzed by immunoblot using polyclonal anti-phospho-Ser-1179 eNOS antibody or monoclonal eNOS antibody. b, COS M6 cells were transfected with SR-BI cDNA and either cDNA for vector (sham), S1179A eNOS mutant (S1179A), or wild-type eNOS for 48 h. The cells were treated with HDL (10 µg/ml) for 20 min, and cell lysates were analyzed by immunoblot using polyclonal anti-phospho-Ser-1179 eNOS antibody or monoclonal eNOS antibody. c, COS M6 cells were transfected with SR-BI cDNA and either cDNA for wild-type eNOS or S1179A mutant eNOS for 48 h. eNOS activity was then assessed by measuring [3H]L-arginine to [3H]L-citrulline conversion over 20 min in cells exposed to vehicle alone (basal (B)) or HDL (10 µg/ml). *, p < 0.05 versus basal.
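For readers reproducing the statistics described above, the comparisons reported as p < 0.05 correspond to pairwise Student's t tests with a Bonferroni-adjusted threshold. The following is a minimal sketch of that analysis (not the authors' code; the group labels and values are illustrative placeholders, not data from this study):

```python
# Minimal sketch of the reported statistics: pairwise Student's t tests
# with Bonferroni correction. Values are illustrative placeholders.
from itertools import combinations
from scipy import stats

groups = {
    "basal": [1.0, 1.1, 0.9],          # hypothetical eNOS activities
    "HDL": [2.1, 2.4, 2.2],
    "HDL+inhibitor": [1.2, 1.0, 1.1],
}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni-adjusted significance threshold

for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, "
          f"significant at corrected alpha: {p < alpha}")
```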
d, endothelial cells were treated with HDL (30 µg/ml) for 20 min at 37 °C, and cell lysates were analyzed by immunoblot using polyclonal anti-phospho-Thr-495 eNOS, anti-phospho-Ser-1179 eNOS, or monoclonal eNOS antibodies.
The dephosphorylation of eNOS at Thr-497 is involved in the activation of eNOS by bradykinin and VEGF (9-11, 15). We investigated the effect of HDL on the phosphorylation state of eNOS at Thr-497 using the phospho-specific antibody against the site. As shown in Fig. 1d, in contrast to the observed change in Ser-1179 phosphorylation, HDL had no notable effect on the phosphorylation state of Thr-497 of eNOS. HDL Stimulation of eNOS Involves Akt Activation-Since certain stimuli such as estrogen and VEGF activate eNOS through Akt-mediated phosphorylation of the enzyme, we investigated the involvement of Akt in the phosphorylation and stimulation of eNOS by HDL. Akt is activated by PI3 kinase through recruitment to the plasma membrane, where Akt becomes phosphorylated at Thr-308 and Ser-473 (31, 35-38). ECs were incubated with HDL (10 or 50 µg/ml) for 20 min. The phosphorylation of Akt at Ser-473 was detected with HDL (Fig. 2a), and it occurred as early as 5 min and reached a maximum at 30 min (Fig. 2b). We also found that HDL (30 µg/ml, 20 min) caused the recruitment of Akt to the plasma membrane (Fig. 2c), further indicating the activation of Akt by the lipoprotein. To determine whether Akt is responsible for the phosphorylation of eNOS by HDL, either sham plasmid, a dominant negative Akt mutant (AktAAA, designated DN), or a constitutively active Akt mutant (AktMyr, designated CA) was expressed in COS M6 cells along with wild-type eNOS and SR-BI. The transfected cells were incubated with HDL (30 µg/ml) for 0-30 min, and the phosphorylation of eNOS at Ser-1179 was assessed. As shown in Fig. 2d, HDL caused phosphorylation of eNOS in cells expressing endogenous wild-type Akt (sham), but no eNOS phosphorylation was observed in cells expressing dominant negative Akt (DN). Constitutively active Akt caused phosphorylation of eNOS in the absence of HDL (CA, 0 min). These results suggest that HDL stimulates Akt, which in turn phosphorylates eNOS. PI3 Kinase Is Upstream of Akt Activation and eNOS Activation-Since PI3 kinase is a known upstream activator of Akt, we next examined the effect of PI3 kinase inhibitors on both Akt and eNOS phosphorylation and activation. Primary ECs were preincubated with the PI3 kinase inhibitor wortmannin (50 µM) for 20 min before HDL stimulation (30 µg/ml), and the phosphorylation of Akt at Ser-473 was assessed. In parallel experiments, eNOS activity was measured. Wortmannin inhibited both Akt phosphorylation (Fig. 3a) and eNOS activation (Fig. 3b). In order to examine the involvement of PI3 kinase in eNOS phosphorylation by HDL, the effect of another PI3 kinase inhibitor, LY294002, was assessed. ECs were preincubated with LY294002 (50 µM) for 20 min before HDL (30 µg/ml) stimulation for 0-20 min. As shown in Fig. 3c, LY294002 inhibited HDL-mediated eNOS phosphorylation. To further confirm the involvement of PI3 kinase in eNOS phosphorylation, sham vector or dominant negative PI3 kinase (SrαΔ85) was transfected into COS M6 cells expressing eNOS and SR-BI, and the phosphorylation state of eNOS was assessed. As shown in Fig. 3d, overexpression of dominant negative PI3 kinase inhibited eNOS phosphorylation at Ser-1179. These results suggest that HDL phosphorylates and stimulates eNOS through PI3 kinase-mediated Akt activation.
A Tyrosine Kinase Is Upstream of eNOS Activation by HDL-Tyrosine kinases frequently serve as upstream stimulators of PI3 kinase. The involvement of nonreceptor tyrosine kinases (Src family) has been demonstrated for eNOS stimulation by a variety of factors (21, 22). We determined the effect of the tyrosine kinase inhibitor, genistein, on eNOS activation and phosphorylation by HDL. Primary cultured endothelial cells were preincubated with or without genistein (50 µM) for 20 min before the addition of HDL (30 µg/ml) for 0-20 min. The phosphorylation state and activation of eNOS were assessed. As shown in Fig. 4, a and b, genistein abrogated both eNOS phosphorylation and activation. We next examined the involvement of the Src kinase family. Endothelial cells were incubated with or without the Src kinase-specific inhibitor, PP2 (0.1 µM), before HDL stimulation (10 µg/ml, 20 min). PP2 inhibited both eNOS activation and phosphorylation by HDL (Fig. 4, c and d). These results suggest that HDL stimulates tyrosine kinases that are most likely Src family kinases, which in turn activate the PI3 kinase-Akt pathway to ultimately lead to eNOS phosphorylation. HDL Activation of MAP Kinase-The role of MAP kinases in HDL stimulation of eNOS is not known. As such, we studied the effect of HDL on MAP kinase activation in primary ECs. Cells were stimulated with HDL (10 or 50 µg/ml) for 20 min, using fetal calf serum (10%, 5 min) or VEGF (100 ng/ml, 5 min) as a positive control. The activation state of MAP kinase was assessed by phospho-MAP kinase-specific antibody. As shown in Fig. 5a, HDL stimulated MAP kinase phosphorylation. The phosphorylation was detected as early as 2 min and reached maximal phosphorylation at 30 min (Fig. 5b). The activation of MAP kinase by HDL was suppressed by the MEK inhibitor PD98059 (Fig. 5c). Both PI3 kinase inhibition (wortmannin (Wort)) (Fig. 5d) and Src kinase inhibition (PP2) (Fig. 5e) attenuated MAP kinase activation by HDL, indicating that PI3 kinase and Src kinase are the upstream effectors of MAP kinase activation by HDL.
FIG. 4 (partial legend). b, eNOS activity was assessed by measuring [3H]L-arginine to [3H]L-citrulline conversion over 20 min in cells exposed to vehicle (basal (B)), HDL (10 µg/ml), or HDL plus genistein (50 µM). *, p < 0.05 versus basal; †, p < 0.05 versus HDL alone. c, endothelial cells were pretreated with PP2 (0.1 µM) for 30 min and exposed to vehicle (B) or HDL (10 µg/ml) for 20 min. eNOS activity was assessed as described for b. *, p < 0.05 versus basal; †, p < 0.05 versus HDL alone. d, endothelial cells were pretreated with PP2 (0.1 µM) for 30 min and exposed to vehicle or HDL (30 µg/ml) for 20 min. Cell lysates were analyzed by immunoblot using anti-phospho-eNOS polyclonal antibody (peNOS) or anti-eNOS monoclonal antibody (eNOS).
To determine whether MAP kinase phosphorylation is required for eNOS activation by HDL, EC were pretreated with the MEK inhibitor PD98059, and eNOS activation was assessed. As shown in Fig. 5f, PD98059 (50 µM) completely abrogated eNOS activation by HDL. Comparable MAP kinase activation by HDL was observed in the ovine, human, and bovine EC (data not shown). To determine whether MAP kinase activates eNOS through the phosphorylation of the enzyme at Ser-1179, we examined the effect of MEK inhibition on eNOS phosphorylation. As shown in Fig. 6a, PD98059 prevented MAP kinase phosphorylation, but it did not attenuate eNOS phosphorylation, suggesting that MAP kinase contributes to eNOS activation through a different mechanism.
To further assess possible cross-talk between the Akt and MAP kinase signaling pathways, we determined whether the MAP kinase cascade activates Akt. EC were treated with the MEK inhibitor, PD98059, and the phosphorylation of Akt was assessed. As shown in Fig. 6b, MEK inhibition had no effect on Akt phosphorylation by HDL, suggesting that MAP kinase is not an upstream effector of Akt. We also determined the effect of an Akt dominant negative mutant on MAP kinase phosphorylation by HDL. As shown in Fig. 6c, dominant negative Akt had no effect on HDL stimulation of MAP kinase, but the phosphorylation of eNOS was fully prevented. This suggests that Akt activation is not required for MAP kinase phosphorylation induced by HDL.
DISCUSSION
In the present study, we have demonstrated that HDL stimulation of eNOS occurs through two kinase pathways (Fig. 7). By binding to SR-BI, HDL causes eNOS phosphorylation at Ser-1179 via TK-PI3 kinase-mediated activation of Akt kinase. However, concomitant TK-PI3 kinase-mediated stimulation of MAP kinase is necessary in order to enhance eNOS enzymatic activity in response to the lipoprotein. The phosphorylation of eNOS regulates the activation of the enzyme by various stimuli including VEGF, estrogen, and shear stress (16-19). We have found that HDL also causes the phosphorylation of eNOS at Ser-1179 and that the phosphorylation is required for the activation of enzymatic activity (Fig. 1, a-c). In contrast to Ser-1179, HDL had no effect on the phosphorylation state of Thr-497 (Fig. 1d), suggesting that the lipoprotein does not regulate eNOS activation through the dephosphorylation of that residue. We also identified Akt as the kinase responsible for HDL-induced eNOS phosphorylation at Ser-1179 (Fig. 2, a-d). HDL stimulated Akt phosphorylation at Ser-473, one of the major phosphorylation sites in the Akt regulatory domain targeted downstream of PI3 kinase (Fig. 2, a and b). The kinase-inactive mutant of Akt (AktAAA) efficiently inhibited HDL-induced eNOS phosphorylation. Furthermore, we determined whether PI3 kinase is an upstream regulator of Akt in the pathway leading to eNOS activation (Fig. 3, a-d). PI3 kinase activates Akt by recruiting the latter to the plasma membrane, which allows the phosphorylation of Akt at two key regulatory sites (Thr-308 and Ser-473), and Akt was recruited to the plasma membrane by HDL stimulation (Fig. 2c).
FIG. 5. HDL activation of eNOS requires MAP kinase stimulation through Src family kinase and PI3 kinase. a, endothelial cells were treated with HDL (10 or 50 µg/ml) for 20 min, VEGF (100 ng/ml) for 5 min, or medium containing 10% fetal calf serum (FCS) for 5 min. Cell lysates were analyzed by immunoblot using anti-phospho-MAP kinase polyclonal antibody (pMAPK) or anti-ERK2 monoclonal antibody (ERK2). b, endothelial cells were treated with HDL (30 µg/ml) for 0-60 min. Cell lysates were analyzed by immunoblot using anti-phospho-MAP kinase polyclonal antibody or anti-ERK2 monoclonal antibody. c, endothelial cells were pretreated with the MEK inhibitor PD98059 (0-50 µM) for 30 min and then treated with or without HDL (30 µg/ml) for 20 min. Cell lysates were analyzed by immunoblot using anti-phospho-MAP kinase polyclonal antibody or anti-ERK2 monoclonal antibody. d, endothelial cells were pretreated with PD98059 (50 µM) or wortmannin (Wort) (1, 10, or 50 µM) for 30 min and then treated with or without HDL (30 µg/ml) for 20 min.
Cell lysates were analyzed by immunoblot using anti-phospho-MAP kinase polyclonal antibody or anti-ERK2 monoclonal antibody. e, endothelial cells were pretreated with PP2 (0-10 µM) for 30 min and then treated with or without HDL (30 µg/ml) for 20 min. Cell lysates were analyzed by immunoblot using anti-phospho-MAP kinase polyclonal antibody or anti-ERK2 monoclonal antibody. f, endothelial cells were pretreated with vehicle alone (basal (B)) or the MEK inhibitor PD98059 (50 µM) for 30 min, and then eNOS activity was assessed by measuring [3H]L-arginine to [3H]L-citrulline conversion over 20 min in cells exposed to vehicle (basal), HDL (10 µg/ml), or HDL plus PD98059 (50 µM). *, p < 0.05 versus basal; †, p < 0.05 versus HDL alone.
The inhibition of PI3 kinase by wortmannin resulted in decreased Akt phosphorylation at Ser-473 (Fig. 3a). In addition, the selective PI3 kinase inhibitor LY294002 or dominant negative PI3 kinase also led to decreased eNOS phosphorylation and activation by HDL (Fig. 3, b-d). These cumulative results indicate that PI3 kinase stimulation of Akt leading to eNOS phosphorylation at Ser-1179 is critically involved in the activation of the enzyme by HDL (Fig. 7). Additional proximal signaling events have been elucidated. We have demonstrated that a protein TK, most likely an Src family kinase, is a further upstream stimulator of the PI3 kinase/Akt pathway (Fig. 4, a-d). The typical PI3 kinase has regulatory subunits with two Src homology 2 domains that allow the enzyme to be activated by phosphotyrosine residues of a TK (39, 40). We speculate that HDL binding to SR-BI directly or indirectly causes tyrosine phosphorylation of Src kinase. Both the C-terminal and N-terminal domains of SR-BI face the cytoplasm and are thus available for interaction with other proteins. At present, there is no evidence that SR-BI binds to a receptor TK or a nonreceptor TK. Further, it is not known whether HDL causes the phosphorylation of a tyrosine residue (Tyr-489) within the C-terminal cytoplasmic domain of SR-BI, which could potentially bind to Src homology 2-containing adaptor molecules. Detailed studies of possible SR-BI-tyrosine kinase interactions are now warranted. In addition to modulation by PI3 kinase-Akt, MAP kinases are also known to play a role in eNOS regulation by certain agonists (9, 25-27). In the present study, we have shown that HDL stimulates MAP kinase phosphorylation (Fig. 5, a-c) and that both PI3 kinase and Src kinase are upstream activators of MAP kinase activation by HDL (Fig. 5, d and e). It has been previously observed that HDL stimulates MAP kinase in EC as well as other cell types (41-43). However, the mechanisms mediating that process were not known prior to the current studies. Furthermore, the activation of MAP kinase is absolutely required for eNOS activation (Fig. 5f). However, MAP kinase activation does not play a role in HDL-induced Akt phosphorylation (Fig. 6b) or in eNOS phosphorylation (Fig. 6a). Moreover, dominant negative Akt had no effect on MAP kinase activation by HDL (Fig. 6c). Thus, there is no cross-talk between the Akt kinase and MAP kinase pathways, and the activation of both pathways is necessary for enhanced enzymatic activity in response to HDL (Fig. 7). Further experiments will be needed to elucidate the mechanism(s) by which MAP kinase contributes to eNOS stimulation by HDL, including the potential modulation of intracellular Ca2+ homeostasis.
FIG. 7 (legend). HDL binding to SR-BI leads to the activation of a tyrosine kinase (Src), which causes the activation of PI3 kinase. The process(es) coupling SR-BI to the tyrosine kinase is yet to be determined. PI3 kinase induces the independent activation of both the Akt and MAP kinase pathways. Akt phosphorylates Ser-1179 of eNOS, and the basis for MAP kinase-mediated activation of eNOS is currently unknown. Importantly, the concerted impacts of both the Akt and MAP kinase cascades are required for HDL-induced stimulation of eNOS enzymatic activity.
Recently, Li et al. (5) reported that HDL stimulates eNOS in a ceramide-dependent manner. C2-ceramide stimulates MAP kinase via tyrosine kinase- and PI3 kinase-mediated mechanisms in cultured airway smooth muscle cells (44, 45). It is possible that the HDL stimulation of MAP kinase observed in the present work occurred through ceramide production. Li et al. (5) also showed that HDL does not induce Akt kinase activation in CHO cells expressing eNOS and SR-BI. However, their studies were limited to assessments of Akt phosphorylation in the transfected CHO cells. Discrepancies between the observations made in the two studies may be related to the use of different cell paradigms. In the present work, we used ECs to examine endogenous Akt phosphorylation and COS M6 cells for transfection experiments in which dominant negative mutants were employed to show that Akt is responsible for eNOS phosphorylation and activation (Fig. 2). We also used a combination of PI3 kinase inhibitors (Fig. 3) and dominant negative mutants (Figs. 2 and 3) to confirm the involvement of the PI3 kinase/Akt pathway in HDL-induced eNOS phosphorylation and activation. The eNOS phosphorylation observed in the ovine EC was also confirmed in the human and bovine endothelium. As such, multiple approaches have been employed in the current studies to implicate a key role for Akt kinase and phosphorylation in eNOS activation by HDL. SR-BI is the high affinity HDL receptor, and HDL binding to SR-BI is required for HDL-mediated cholesterol flux (46, 47). We showed previously that HDL binding to SR-BI is also required for eNOS activation by HDL (4). However, the initiating event that occurs upon HDL binding to SR-BI to cause the proximal processes in signal transduction by the lipoprotein is not known. We speculate that SR-BI mediates the effect of HDL by two possible mechanisms. First, alterations in membrane cholesterol pools may be involved. SR-BI mediates changes in the cholesterol content of the plasma membrane, and SR-BI and eNOS both reside in cholesterol-rich microdomains known as caveolae, which are most likely a subset of lipid rafts and contain various signaling molecules, including MAP kinases. It has been demonstrated that cholesterol alterations induced by β-cyclodextrin activate MAP kinases (48). Along with potential cholesterol-related mechanisms, there may be involvement of SR-BI C-terminal binding protein(s). Using antibody blockade in isolated EC plasma membranes, we previously showed that the C-terminal domain of SR-BI plays a role in HDL-mediated eNOS stimulation (4). It is possible that a protein binding to the C terminus of SR-BI mediates signal transduction from SR-BI to a downstream effector such as a G protein and/or Src kinase. A PDZ-containing protein, PDZK1 (also known as CLAMP, Diphor-1, CAP70, or NaPi-Cap1), has been shown to bind to the C-terminal domain of SR-BI and to be involved in cholesterol regulation (49).
Now that a Src kinase is known to be critically involved in the proximal signaling events induced by HDL binding to SR-BI, in-depth experimentation focused on the possible roles of both cholesterol regulation and SR-BI adaptor proteins in the upstream process can be pursued. The intricate regulation of kinase cascades by HDL shown in this study may help explain the impact of HDL on various other functions in EC. For example, HDL stimulates the migration and proliferation of EC (50-52). Since Akt is known to have a role in suppressing apoptosis, and MAP kinase is involved in the proliferation and migration of EC during reendothelialization after injury to the arterial wall (53, 54), it is possible that the modulation of these pathways by HDL may be critical to the regulation of EC turnover and movement. As such, our finding that HDL is a potent stimulus of various kinases in EC enhances both our specific understanding of the capacity of the lipoprotein to modulate NO production and our overall knowledge of other mechanisms by which HDL may be atheroprotective.
Entropy Increases from Different Sources Support the High-affinity Binding of the N-terminal Inhibitory Domains of Tissue Inhibitors of Metalloproteinases to the Catalytic Domains of Matrix Metalloproteinases-1 and -3. The avid binding of tissue inhibitors of metalloproteinases (TIMPs) to matrix metalloproteinases (MMPs) is crucial for the regulation of pericellular and extracellular proteolysis. The interactions of the catalytic domain (cd) of MMP-1 with the inhibitory domains of TIMP-1 and TIMP-2 (N-TIMPs) and of MMP-3cd with N-TIMP-2 have been characterized by isothermal titration calorimetry and compared with published data for the N-TIMP-1/MMP-3cd interaction. All interactions are largely driven by increases in entropy, but there are significant differences in the profiles for the interactions of both N-TIMPs with MMP-1cd as compared with MMP-3cd; the enthalpy change ranges from small for MMP-1cd to highly unfavorable for MMP-3cd (−0.1 ± 0.7 versus 6.0 ± 0.5 kcal mol−1). The heat capacity change (ΔCp) of binding to MMP-1cd (the temperature dependence of ΔH) is large and negative (−210 ± 20 cal K−1 mol−1), indicating a large hydrophobic contribution, whereas the ΔCp values for binding to MMP-3cd are much smaller (−53 ± 3 cal K−1 mol−1), and some of the entropy increase may arise from increased conformational entropy. Apart from differences in ionization effects, it appears that the properties of the MMP may have a predominant influence on the thermodynamic profiles for these N-TIMP/MMP interactions. The matrix metalloproteinases (MMPs) catalyze the turnover of components of the extracellular matrix and have important roles in tissue remodeling, wound healing, embryo implantation, cell migration, and shedding of cell surface proteins (1, 2). MMP-1 is a well characterized collagenase that catalyzes the turnover of collagen fibrils in the matrix, whereas MMP-3 (stromelysin 1) cleaves multiple extracellular matrix components and functions in tissue remodeling and other processes (2). Collagenolysis is a key feature of biological processes including development, morphogenesis, and wound repair, yet unregulated collagen breakdown contributes to important diseases including cancer, arthritis, emphysema, and fibrosis (1). An understanding of the molecular basis of the regulation of MMP activities is crucial for understanding MMP-associated diseases and developing therapies for them. The tissue inhibitors of metalloproteinases (TIMP-1 to -4) are a family of four endogenous MMP inhibitors that can form high affinity 1:1 complexes with most MMPs. TIMP-3 also inhibits some members of the distantly related disintegrin metalloproteinase (ADAM) and disintegrin metalloproteinase with thrombospondin type 1 motif (ADAMTS) families (2). A loss of balance between the TIMPs and their target proteases is linked to diseases such as cancer and arthritis (2, 3). TIMPs are slow, tight-binding inhibitors of the MMPs with Ki values typically in the sub- to low nanomolar range (3). Mammalian TIMPs have two domains, a larger (~125-residue) N-terminal domain that can be expressed separately and carries the MMP inhibitory activity (2, 3) and a smaller C-terminal domain that is absent from the TIMPs of some invertebrates (3). In the crystal structures of the MMP-3·TIMP-1, MMP-1·N-TIMP-1, MT1-MMP·TIMP-2, and MMP-13·TIMP-2 complexes (4-7), most of the MMP interaction surface is located within the N-domains of the TIMPs; as shown in Fig. 1,
the N-terminal region (residues 1-5) inserts into the active site of the MMP, whereas the α-amino group together with the carbonyl oxygen of Cys1 coordinates the catalytic zinc (4-7). Modification of the α-amino group by addition of an alanine (8, 9), carbamylation (10), or acetylation (11) radically reduces the inhibitory activities of TIMPs for MMPs. In all inhibitory TIMP·MMP complexes, the side chain of residue 2 of the TIMP (Ser or Thr in vertebrate TIMPs) sits over the mouth of the key S1′ subsite of the MMP active site (4-7), and substitution by glycine results in a large loss of affinity for most MMPs (9, 12). However, both the Ala extension and the Thr2-to-Gly mutation in N-TIMP-3 have little effect on the inhibition of ADAM-17, ADAMTS-4, or ADAMTS-5 (9, 13). The inhibitory domain of TIMP has an OB-fold structure, a five-stranded β-barrel with two small helices. Other regions in TIMPs that contact MMPs include the connector between the C and D β-strands and the loops connecting the A to B and E to F strands (4-7). In all structurally characterized complexes of TIMP-1 and TIMP-2 with MMPs, the core of the interaction site in the TIMP is a surface ridge formed by the N-terminal five residues, Cys1-Thr-Cys-Val-Pro5 (TIMP-1 sequence and numbering), and the connector between β-strands C and D, residues Met66-Glu-Ser-Val-Cys70 in TIMP-1, which are linked by the Cys1-to-Cys70 disulfide bond (3). Other regions that make variable contributions to MMP binding are the loops connecting β-strands A and B and strands E and F and the C-terminal end of β-strand D (3). The A-B loop of N-TIMP-2, which is longer than that of N-TIMP-1 by 7 residues (Fig. 1), makes multiple contacts with MT1-MMP in the TIMP-2·MT1-MMP and TIMP-2·MMP-13 complexes (6, 7). Although no structure is available for the N-TIMP-2·MMPcd complexes investigated here, NMR studies have shown that MMP-3cd binding reduces the internal motions of the large TIMP-2 A-B loop, suggesting that it interacts with the protease (14). The truncated N-terminal domains of TIMPs (N-TIMPs) and isolated MMP catalytic domains (MMPcd) have been extensively used in studies of the interactions between TIMPs and MMPs (12-19), and the majority of the intermolecular interactions in structurally characterized inhibitory TIMP·MMP complexes involve residues in these domains. Some interactions involving the C-domains of TIMPs have also been observed, and it has been suggested that these might affect the relative orientations of inhibitor and MMP in their complexes (6, 7). Although it is possible that interactions involving the TIMP C-domain might modulate the affinity for some MMPs, they are not necessary for binding because N-TIMPs are fully active as MMP inhibitors (12-19). The one clear example of selectivity in TIMP/MMP interactions is the weak affinity of TIMP-1 for the membrane-type MMPs, such as MT1-MMP, and for MMP-19, whereas TIMP-2 is a potent inhibitor of these enzymes (3). This inhibitory selectivity resides in the TIMP N-terminal domains (19, 20). However, the C-domain appears to have a role in TIMP interactions with other metalloproteinases; for example, TIMP-1 and TIMP-3 are potent inhibitors of ADAM-10, but their truncated N-domains are ineffective (21).
A previous ITC study of the thermodynamics of N-TIMP-1 binding to the catalytic domain of MMP-3 (MMP-3cd) (22) revealed that the interaction has a positive (unfavorable) enthalpy change (ΔH) that is compensated by a large increase in entropy (ΔS). The relatively small negative heat capacity change (ΔCp) for the interaction implied that the hydrophobic effect accounts for only a fraction of the favorable entropy change (22). Increased conformational dynamics could be a major source of the entropy increase because NMR-based backbone dynamics results suggested that binding to MMP-3cd enhanced the mobility of the backbone of the core of N-TIMP-1, as reflected in fluctuations on the picosecond-to-nanosecond scale and, more widely, on the microsecond-to-millisecond time scale (22). We report here an investigation by ITC of the interactions of N-TIMP-1 with the catalytic domain of MMP-1 (MMP-1cd) and of N-TIMP-2 with the catalytic domains of both MMP-1 and MMP-3. Together with the previous study of the N-TIMP-1/MMP-3cd interaction (22), this provides thermodynamic profiles of four TIMP/MMP interactions that may help to illuminate the physical basis of functional specialization among different TIMPs (3). Based on the magnitude of the heat capacity change for binding (ΔCp), it appears that increases in solvent entropy (the hydrophobic effect) make a major contribution to the binding of both N-TIMPs to MMP-1cd, but apparently less to the binding to MMP-3cd, where the "missing" entropy might arise from enhanced conformational dynamics in the inhibitor. The thermodynamic profiles are discussed in the light of the structural information on the various TIMP·MMP complexes.
EXPERIMENTAL PROCEDURES
Active N-TIMP-1 was separated from N-acetylated inactive protein in some preparations by medium-pressure cation exchange chromatography as described previously (11) but, as previously noted (22), the presence of the inactive form does not interfere with the results, and the material from the CM-52 separation was used in most experiments. The concentrations of active N-TIMP-1 and N-TIMP-2 (about 44% of the total) were determined by activity titration and ITC. MMP-1cd and MMP-3cd were expressed and folded as previously described (12) and were purified by ion exchange followed by gel filtration with a column (2.5 × 35 cm) of Superdex-75 pre-equilibrated and eluted with 50 mM Tris-HCl, pH 7.5, containing 150 mM NaCl and 20 mM CaCl2. The eluate was collected in 4.5-ml fractions at a flow rate of 1.5 ml/min. Fractions containing folded N-TIMPs were pooled and concentrated using Centriplus YM-3 centrifugal filter devices (Millipore). Fluorescence Assays for N-TIMP-1 and N-TIMP-2 Activity-The inhibition of MMPs by N-TIMP-1 and N-TIMP-2 was measured by assaying MMP activities for hydrolysis of fluorogenic substrates using a PerkinElmer LS50B luminescence spectrometer. TNC buffer (50 mM Tris-HCl buffer, pH 7.5, containing 150 mM NaCl, 10 mM CaCl2, and 0.02% Brij-35) was used both for dilution of MMP and TIMP samples and for all assays. To determine the Ki app of N-TIMP-1 and N-TIMP-2 for MMP-1cd, and of N-TIMP-2 for MMP-3cd, at 25 °C, various concentrations of inhibitor were incubated with 5 nM enzyme at 30 °C for 3 h before addition of the Knight substrate (for MMP-1cd) or, for MMP-3cd, the NFF-3 substrate (Mca-Arg-Pro-Lys-Pro-Val-Glu-Nva-Trp-Arg-Lys(Dnp)-NH2) to a final concentration of 3 µM, and the fluorescence intensity was measured as described above. Reaction velocities were measured as the slope of the linear portion of the fluorescence curve.
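The tight-binding fit described in the next paragraph can be carried out numerically; a minimal sketch follows (our code, not the authors'; the enzyme concentration matches the 5 nM used in these assays, but the inhibitor concentrations and "data" are illustrative placeholders):

```python
# Minimal sketch: fit fractional residual activity (v/v0) versus inhibitor
# concentration to the Morrison tight-binding equation to obtain Ki(app).
# Data values are illustrative placeholders, not measurements from the paper.
import numpy as np
from scipy.optimize import curve_fit

E = 5e-9  # enzyme concentration, M (5 nM, as in the assays described here)

def morrison(I, Ki):
    """Fractional activity v/v0 for a tight-binding inhibitor."""
    s = E + I + Ki
    return 1.0 - (s - np.sqrt(s**2 - 4.0 * E * I)) / (2.0 * E)

rng = np.random.default_rng(1)
I_data = np.array([0, 1, 2, 5, 10, 20, 50]) * 1e-9        # inhibitor, M
v_data = morrison(I_data, 1e-9) + rng.normal(0, 0.01, I_data.size)

(Ki_fit,), _ = curve_fit(morrison, I_data, v_data, p0=[1e-9],
                         bounds=(0, np.inf))
print(f"Ki(app) = {Ki_fit * 1e9:.2f} nM")
```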
The percentage of residual MMP activity was calculated by dividing the velocities measured with inhibitor by the velocities measured without inhibitor (v/v0). Ki app values were calculated by fitting the inhibition data to the tight-binding (Morrison) equation,

v/v0 = 1 − {(E + I + K) − [(E + I + K)^2 − 4EI]^(1/2)}/(2E),   (Eq. 1)

where v is the experimentally determined reaction velocity, v0 is the activity in the absence of inhibitor, E is the enzyme concentration, I is the inhibitor concentration, and K is the apparent inhibition constant (Ki app) (9). For stoichiometric titration of N-TIMP-1 and N-TIMP-2, various concentrations of the inhibitor were incubated with MMP-3cd (300 nM) for 4 h at 37 °C, diluted 300-fold with TNC buffer, and immediately assayed with 1.5 µM NFF-3 substrate, as described above. Residual MMP activity (%) was calculated as described above and plotted against the molar ratio of TIMP/MMP (0-4 in this case). The stoichiometry was determined by linear regression analysis of the appropriate data points. Isothermal Titration Calorimetry of N-TIMP/MMPcd Interactions-Protein solutions were dialyzed extensively against various buffers (20 mM) containing 250 mM NaCl, 10 mM CaCl2, and 50 µM ZnCl2 and degassed prior to use. N-TIMPs (12-30 µM) were titrated with the MMPcd (120-300 µM) at different temperatures using a MicroCal VP-ITC microcalorimeter. Titrations of N-TIMP-1 were conducted at pH 6.8, as in the previous study of the N-TIMP-1/MMP-3cd interaction, but, because this pH is close to the pI of N-TIMP-2 (6.98 cf. 8.58 for N-TIMP-1), the studies with N-TIMP-2 were conducted at pH 7.4 to avoid solubility problems at the protein concentrations needed for ITC. The instrument was programmed to carry out 16 injections of 10-20 µl each over 16 s, spaced at 300-s intervals. The stirring speed was 200 rpm. Heats of binding were determined by integrating the signal from the calorimeter, and binding isotherms were generated by plotting the heats of binding against the ratio of enzyme to inhibitor. The data were corrected for heats of dilution of the MMPs, and Origin 5.0 (MicroCal) was used to calculate the enthalpy changes (ΔH) and stoichiometry (N). To determine the heat capacity change, ΔH was measured at a series of temperatures and the data were fitted to Equation 2,

ΔH(T) = ΔH(TR) + ΔCp(T − TR),   (Eq. 2)

where TR is a reference temperature, so that ΔCp was obtained by linear regression analysis. Correlation of Thermodynamics with Structure-It has been proposed that the ΔCp° of protein interactions is related to changes in non-polar and polar accessible surface areas (26, 27), ΔASAnp and ΔASApol (where surface burial has a negative sign), through Equation 3,

ΔCp° = aΔASAnp + bΔASApol.   (Eq. 3)

This expression has been routinely used for analyzing interactions (27, 28). Structures are not available for N-TIMP-2 or TIMP-2 complexes with either MMP-1 or MMP-3, but the structure of the N-TIMP-1·MMP-1cd complex has been determined (5) (PDB code 2JOT). The parameterizations of the coefficients a and b for changes in nonpolar and polar surface used here were taken from Ref. 29. The enthalpy of binding (ΔH°) at 60 °C (the mean melting temperature of a group of proteins used in the analysis (27)) was calculated using the relationship

ΔH°(60 °C) = cΔASAnp + dΔASApol,   (Eq. 4)

where c is −7.27 cal mol−1 Å−2 and d is 29.16 cal mol−1 Å−2 (26). ΔH° at 25 °C is then calculated using the calculated value for ΔCp. Changes in polar and apolar surface areas were measured from atomic coordinates using NACCESS and ProFace (30).
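As a concrete illustration of the Equation 2 analysis, the following minimal sketch extracts ΔCp by linear regression of ΔH against temperature (our code; the ΔH values are illustrative placeholders chosen to resemble the trends reported below, not the measured data):

```python
# Minimal sketch of the Equation 2 analysis: fit DeltaH measured at several
# temperatures to a straight line; the slope is the heat capacity change.
# Values are illustrative placeholders, not the paper's data.
import numpy as np

T = np.array([288.0, 291.0, 295.0, 299.0, 303.0])   # temperature, K
dH = np.array([4.0, 3.2, 2.1, 1.0, -0.1])           # DeltaH_obs, kcal/mol

slope, intercept = np.polyfit(T, dH, 1)             # linear regression
dCp = slope * 1000.0                                # kcal/(mol K) -> cal/(mol K)
print(f"DeltaCp = {dCp:.0f} cal K^-1 mol^-1")
```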
RESULTS
Isothermal Titrations of N-TIMP-2 by MMP-1cd and MMP-3cd Both Show an Enthalpy Increase Compensated by Favorable Entropy-The isothermal titration of N-TIMP-2 with MMP-1cd at 291 K exhibits heat uptake until the protease is saturated with the inhibitor; analysis of the titration data indicates that the interaction has a positive enthalpy change of about 3.2 kcal/mol under these conditions (Fig. 2B). Although uncommon among protein-protein interactions, this result is similar to previous calorimetric observations for the binding of N-TIMP-1 to MMP-3cd, although the enthalpy increase is smaller. A similar result was obtained for the titration of N-TIMP-2 with MMP-3cd; however, the integrated value for ΔHobs is smaller (2.22 versus 3.21 kcal/mol in Hepes buffer at 291 K). The stoichiometry at 288-303 K ranged from 0.36 to 0.42, which is consistent with the fraction of active N-TIMP-2 (~40%) determined by titration with MMP-3cd and MMP-1cd. To determine the contribution to ΔHobs of enthalpy changes arising from protonation or deprotonation on complex formation, the enthalpies of binding (ΔHobs) for the interactions of N-TIMP-2 with both MMPs were measured at 291 K in buffers of different enthalpies of ionization: Pipes, Hepes, Bes, and Aces (Table 1 and Fig. 3). These were analyzed by linear regression based on the relationship ΔHobs = ΔH°int + NH+ × ΔHion, where NH+ is the number of protons taken up (positive values) or released to the buffer and ΔH°int is the buffer-independent enthalpy change (31). Fig. 4 shows graphical plots of these data, which indicate a small fractional uptake of protons (0.14) for the interaction with MMP-1 but the release of about 0.7 protons on binding to MMP-3cd. The Ka values for the N-TIMP-2 interactions are too large (~10^9 M−1) to be reliably determined by ITC and were instead calculated from the inhibition constants (Table 2).
TABLE 1. Enthalpies of binding for different N-TIMP/MMP interactions in buffers of different enthalpies of ionization at 18 °C and derived intrinsic enthalpy change (ΔHint) and ionization change (NH+). As explained in the text, the studies with N-TIMP-1 were conducted at pH 6.8 and those with N-TIMP-2 at pH 7.4.
For both interactions, ΔHobs decreases linearly with temperature (Fig. 5), but the slopes of the two lines differ; regression analysis of the two sets of data gave values for ΔCp° of −278 ± 10 and −54.7 ± 3.7 cal K−1 mol−1 for MMP-1 and MMP-3, respectively. As indicated by the experimental data shown in Fig. 2, ΔHobs for the interaction with MMP-1cd changes from positive to negative over this temperature range. A titration at 30 °C (303 K) gave a negligible signal, in keeping with the plot shown in Fig. 5. It should be noted that, because the N-TIMP-2/MMP-3 interaction (and also the N-TIMP-1/MMP-3 interaction discussed below) is associated with significant ionization changes, the experimentally determined ΔCp values require correction for the contribution of the buffer (32). However, these corrections are relatively small (+7.6 and −5 cal mol−1 K−1, respectively; Table 2). ΔCp° is generally regarded as a measure of the extent of solvent release on binding; the entropy of desolvation, ΔSsolv, at any temperature T can be estimated from the relationship ΔSsolv = ΔCp° × ln(T/TS*), where TS* is the reference temperature (385 K) at which the hydrophobic contribution to ΔS is zero (33, 34).
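As a worked check of this relationship (our arithmetic, taking T = 298 K and the ΔCp° of −278 cal K−1 mol−1 reported above for the N-TIMP-2/MMP-1cd interaction):

$$\Delta S_{\mathrm{solv}} = \Delta C_p^{\circ}\,\ln(T/T_S^{*}) = (-278\ \mathrm{cal\ K^{-1}\ mol^{-1}})\,\ln(298/385) \approx +71\ \mathrm{cal\ K^{-1}\ mol^{-1}},$$
$$-T\Delta S_{\mathrm{solv}} \approx -(298\ \mathrm{K})(71\ \mathrm{cal\ K^{-1}\ mol^{-1}}) \approx -21\ \mathrm{kcal\ mol^{-1}},$$

which reproduces the hydrophobic contribution quoted in the next paragraph.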
Based on this interpretation, it can be estimated that the hydrophobic effect (−TΔSsolv) contributes −21 kcal/mol to the free energy of interaction of N-TIMP-2 with MMP-1cd but only −3.6 kcal/mol to the N-TIMP-2/MMP-3cd interaction (Table 3). Thermodynamic Profile for the Interaction of N-TIMP-1 with MMP-1cd-Previous studies of the interaction of N-TIMP-1 with MMP-3cd showed a similar pattern to the N-TIMP-2/MMP-3cd interaction: a positive ΔH compensated by a large favorable ΔS, of which only a small fraction appears to arise from the hydrophobic effect (22). This suggests that the properties of the MMP may predominate in determining the thermodynamic profiles for the N-TIMP/MMPcd interactions. This view is supported by ITC data for the N-TIMP-1/MMP-1cd interaction under the conditions previously used for calorimetric studies of the N-TIMP-1/MMP-3cd interaction (22). The results (Tables 1-3) show that the N-TIMP-1/MMP-1cd interaction is characterized by the uptake of about 1 proton and that the extrapolated value for the ionization-independent enthalpy of interaction, ΔHint, is small and favorable (−0.9 kcal/mol at 25 °C). ΔG° calculated from the Ki value is comparable with those of the other interactions, being slightly more negative than for the N-TIMP-1/MMP-3cd interaction. ΔHobs for the N-TIMP-1/MMP-1cd interaction shows a strong negative dependence on temperature (Fig. 5), giving a value for ΔCp° of −189 cal K−1 mol−1 (corrected to −194 cal K−1 mol−1 for the buffer contribution). Using this heat capacity change, −TΔSsolv is estimated at −13 kcal/mol, a much larger (negative) magnitude than for the N-TIMP-1/MMP-3cd interaction (−3.8 kcal/mol). Sources of Binding Energy for the Different N-TIMP/MMPcd Interactions-There is an entropic cost to molecular associations that arises from the loss of translational and rotational freedom, T(ΔStrans + ΔSrot). This was estimated to be about 3 (±2.4) kcal/mol for associations in general (35, 36), including N-TIMP interactions with MMP catalytic domains. Based on this approximation and the −TΔSint values for the interactions given in Table 3, it can be estimated that the hydrophobic contribution (−TΔSsolv) to the interaction of N-TIMP-1 with MMP-1cd accounts for most of the entropic contribution to the free energy of binding. In the case of the N-TIMP-2/MMP-1cd interaction, it more than accounts for −TΔSint. In contrast to the interactions with MMP-1cd, the magnitude of the hydrophobic effect and the association-derived entropy loss suggested by ΔCp do not account for −17.4 and −16 kcal/mol of −TΔSint for the interactions of MMP-3cd with N-TIMP-1 and N-TIMP-2, respectively. Structural Correlations with Thermodynamics-Crystallographic structures are currently available for the TIMP-1·MMP-3cd and N-TIMP-1·MMP-1cd complexes. Although the crystallographic structures of the corresponding complexes of N-TIMP-2 are not available, an NMR study of the effects of MMP-3cd binding on the conformational dynamics of N-TIMP-2 has been reported (14), and crystallographic structures are available for complexes of TIMP-2 with MMP-13 and MT1-MMP (6, 7). Analyses of the chemical nature of the contact surfaces and atoms in the interfaces of the N-TIMP-1·MMP-1cd and N-TIMP-1·MMP-3cd complexes have been previously reported (5, 22). Analyses of the interaction sites indicate that the interface in the MMP-1 complex is significantly less hydrophobic than in the MMP-3 complex (60 versus 70% non-polar) (Table 4).
This difference largely reflects differences in the active sites of the two MMPs because the structure of the N-TIMP-1 component is similar in the two MMPcd complexes and the core of the TIMP reactive site in the two complexes is the same, although there are some variations in contacts at the periphery of the binding site. A comparison of the structures of the free form of MMP-1cd (8) (PDB 1CGE) and the structure in its complex with N-TIMP-1 (9) (PDB 2JOT) indicates that the interaction with N-TIMP-1 induces only minor conformational changes. If the interactions result in insignificant structural changes in both of the proteins, the interaction interfaces will correspond to the accessible surface areas in the free proteins that are buried on complex formation (ΔASA). This appears to be the situation for the N-TIMP-1/MMP-1cd interaction because, when the nonpolar and polar surface areas for the complex determined using NACCESS or ProFace are used in Equation 3 with the parameters from Ref. 29, values for ΔCp of −202 and −169 cal mol−1 K−1 are calculated (Table 4), in good qualitative agreement with the corrected ΔCp of −194 cal mol−1 K−1 measured by ITC (Table 1). When the values were employed to calculate ΔH° (60 °C) (Equation 4) and then adjusted to 25 °C, values of −4.1 and −7.2 kcal/mol were obtained, which agree less well with the experimentally determined value of −0.9 kcal/mol (Table 4). When the analysis is conducted using the solution NMR structure of free N-TIMP-1 and the bound form of MMP-1cd, the calculated ΔH° is −1.2 kcal/mol, in good agreement with the experimental value. In contrast, the calculated values of both ΔCp and ΔH° for the N-TIMP-1/MMP-3 interaction differ substantially from those determined experimentally. It was previously proposed that the positive enthalpy change for the N-TIMP-1/MMP-3cd interaction must include a strongly unfavorable enthalpic contribution, which might arise from conformational changes, particularly in MMP-3cd (22). Differences between the structure of free MMP-3cd and the structure in the complex include a rearrangement of the N-terminal region of MMP-3 that involves disruption of a salt bridge between the α-amino group of Phe83 and the side-chain carboxyl of Asp237 and a 14 Å movement of residues 83-90 to interact with Met66 of TIMP-1. Also, as with the binding of other inhibitors to MMP-3 (37), the S1′ loop (Leu222 to Arg231) moves, along with the side chain of Tyr223, which covers the S1′ pocket in the uninhibited state. Although the solvation or desolvation of non-polar surfaces has been useful for correlating structural changes associated with protein interactions and unfolding/folding processes with heat capacity changes, ΔCp (27, 28, 34, 36), there have been a number of previous reports of anomalous heat capacity effects (see Refs. 38 and 39). In the present case, ΔCp° for the N-TIMP-1/MMP-3cd interaction is much less negative than the structure-based prediction, suggesting that there is a source of positive change in ΔCp. There are two potential explanations. One is the conformational change in MMP-3cd induced by the binding of N-TIMP-1, which exposes 600 Å2 of surface outside of the interface (22). This results in the exposure of non-polar surface, thereby reducing the net hydrophobic effect arising from the interaction and diminishing the magnitude of ΔCp. This brings the predicted ΔCp of −112 cal mol−1 K−1 into better agreement with the experimentally measured ΔCp° of −50 cal mol−1 K−1 for the association (Table 4).
A second explanation, suggested by previous NMR studies (22), is that a large component of the entropy increase driving the interaction of N-TIMP-1 and MMP-3cd could arise from an increase in conformational entropy in the core of the TIMP β-barrel; this could result from the disruption of cooperative non-covalent interactions, a phenomenon that has been linked to "unconventional" positive heat capacity changes (39). A modest disruption of non-covalent packing interactions in N-TIMP-1 could also contribute to the positive ΔH of binding. From the limited data that are currently available, it appears that the MMPcd component may determine the general features of the thermodynamic profile of each MMPcd/N-TIMP interaction. Based on this, it would be predicted that the interaction of N-TIMP-2 with MMP-3cd, but not with MMP-1cd, is accompanied by structural rearrangements and increased dynamics. Currently, there are no structural data to support this hypothesis, but NMR studies of the effects of MMP-3cd binding to N-TIMP-2 indicate that it enhances the mobility of some residues of the inhibitor remote from the interface in two of the segments that, in N-TIMP-1, show the most pronounced increase in backbone dynamics arising from MMP-3cd binding. Specifically, MMP-3cd binding caused the 15N T2 values of Ile19-Thr21, Leu84, and Ala86 of N-TIMP-2 to fall below the average T2, suggesting microsecond-millisecond exchange broadening (14). Although not conclusive, these results are consistent with a possible contribution of conformational dynamics to the entropy increase that drives the interaction of MMP-3cd with N-TIMP-2 as well as N-TIMP-1. Ionization Changes on Complex Formation-The differences in the ionization effects associated with the different complexes (Table 1) could arise from sequence differences in the interaction sites, from the roles of different residues in stabilizing different complexes, and from changes associated with interaction-induced conformational transitions. Which groups undergo ionization state changes upon association is a matter of speculation. The interactions of both N-TIMPs with MMP-3cd have NH+ values 0.8 to 0.9 lower than their interactions with MMP-1cd. The association of N-TIMP-2 with MMP-3cd is accompanied by the release of a proton, suggesting the presence of a group in the MMP-3cd active site, but not in MMP-1cd, that can release a proton. This could conceivably result from the partial deprotonation of the α-amino group (pK 7.7 ± 0.5) (40) of the N-terminal residue, Phe83 (4), of MMP-3 at pH 7.4 when its interaction with Asp237 is disrupted. The residue that becomes protonated at pH 6.8 on the association of N-TIMP-1 with MMP-1cd could be Glu219, the general base in the active site. Its pK appears to be around 6 in free MMP-3 (see Holman et al. (41)), but it seems unlikely that this would increase sufficiently on binding N-TIMP-1 to account for this change. Alternatively, Glu67 of TIMP-1 (Ser in TIMP-2) might take up a proton upon association at pH 6.8. Glu67 makes 12 contacts with Ser227 and His228 in the MMP-1cd complex but only 2 with His211 in the MMP-3cd complex. Conclusions-Protein-protein interactions have key roles in numerous biological processes, and understanding the structural and biophysical bases of high affinity binding and specificity is of fundamental interest in structural biology. The goal of the present study was to determine the thermodynamic and structural bases of high affinity binding and affinity differences in TIMP/MMP interactions.
These are of particular interest because of their relevance to engineering TIMPs to be targeted inhibitors of MMPs for possible clinical application in diseases such as cancer and arthritis (3). Structural studies of the complexes of TIMP-1 with MMP-1 and MMP-3 and of TIMP-2 with MMP-13 and MMP-14 have shown that the same binding surface of the inhibitor can make different interactions with different MMPs involving both shared and distinct chemical groups in the active site; shifts in the orientation of the TIMP and changes in interactions with loops at the periphery of the active site have also been noted (4-7). However, a major interaction-induced structural transition has only been observed in MMP-3cd in its complex with TIMP-1. The thermodynamic profiles reported here show that the interactions of both N-TIMP-1 and N-TIMP-2 with MMP-3cd have nearly 4-5-fold smaller (negative) heat capacity changes, greater (positive) entropy changes, and less favorable enthalpy changes than those for the corresponding interactions with MMP-1cd. These may reflect conformational changes in MMP-3cd in its complexes with both N-TIMPs but only minor changes in MMP-1cd in the corresponding complexes (4, 5); a redistribution of dynamics when N-TIMPs bind to MMP-3cd may also underlie the "anomalous" profiles. Disorder-to-order structural transitions in protein interactions have been linked to negative ΔCp values of greater magnitude than predicted from the composition of buried surfaces (42); it appears that the converse might apply to the interactions of the two N-TIMPs with MMP-3cd, where the interacting N-TIMPs may be more ordered than their complexes. Based on these considerations, we would hypothesize that the N-TIMP-2·MMP-3cd complex will show conformational changes similar to those in the TIMP-1·MMP-3cd complex but that the N-TIMP-2·MMP-1cd complex will not. Because large structural adjustments have not been observed in other TIMP·MMP complexes (5-7), it is possible that they are unique to those involving MMP-3, but structural studies of additional TIMP·MMP complexes are needed to clarify this.
Enhancing seed oil content and fatty acid composition in camelina through overexpression of castor RcWRI1A and RcMYB306 Seed triacylglycerol (TAG), a major component of vegetable oil, consists of a glycerol esterified with three fatty acids. Vegetable oil has industrial applications and is widely used as edible oil. Given the increasing demand for plant oils owing to population growth, it is crucial to enhance the oil content of seeds. We identified castor WRINKLED1A (RcWRI1A) and R2R3-type MYB domain protein 306 (RcMYB306), which are homologous to Arabidopsis WRI1 (AtWRI1) and AtMYB96, transcription factors that regulate genes involved in fatty acid and TAG synthesis, respectively. These castor genes were separately and jointly overexpressed using seed-specific promoters in an oil crop, camelina (Camelina sativa). Overexpression of RcWRI1A, RcMYB306, or RcWRI1A + RcMYB306 increased the total seed oil content in camelina; however, the increase achieved by co-expression was not significantly different from that obtained by overexpressing RcWRI1A or RcMYB306 alone. RcWRI1A overexpression increased the content of several fatty acids, including 16:0, 18:2, and 18:3. In contrast, RcMYB306 overexpression increased the 18:1, 18:2, 18:3, 20:0, and 20:1 fatty acid contents. In the RcWRI1A + RcMYB306 lines, the changes in fatty acid composition reflected the combined effects of the two transcription factors. These results suggest that RcWRI1A and RcMYB306 can be used to improve the productivity of oil crops. Introduction Vegetable oil is an edible oil and is also used as a raw material in the industrial production of plastics, cosmetics, and lubricants [1]. It contains triacylglycerol (TAG), which comprises three fatty acids esterified with one glycerol [2]. The fatty acid composition of TAG in seeds differs depending on the plant species [3]. Olive oil has a high oleic acid (18:1Δ9) content and is used for frying or salad dressing [4,5]. Perilla, flaxseed, and walnut oils have health benefits and are high in α-linolenic acid (18:3Δ9,12,15), an omega-3 fatty acid [6]. Castor (Ricinus communis L.) oil contains 80-90% ricinoleic acid (18:1Δ9-OH), which carries a hydroxy group on the 12th carbon. It is used as an industrial raw material in various manufacturing processes, such as lubricant and paint production [7]. The demand for vegetable oils is increasing worldwide [8]. However, crop productivity is decreasing because of climate change caused by global warming [9]. Therefore, technological advancements are required to increase the TAG content of seeds and thereby increase vegetable oil production per unit area. Camelina (Camelina sativa) is an oil crop belonging to the Brassicaceae family. It has a short life cycle, grows in barren areas, and is easily transformed through the floral dipping method, making it a convenient platform for the genetic engineering of plant oils [10,11].
The transcription factors and genes involved in fatty acid and TAG synthesis have been identified in Arabidopsis (Arabidopsis thaliana) [12]. Fatty acids synthesized in plastids are exported to the cytosol to form acyl-coenzyme A (CoA) pools [12]. In the Kennedy pathway, three acyl-CoA molecules are sequentially added to one molecule of glycerol-3-phosphate to synthesize TAG using glycerol-3-phosphate acyltransferase, lysophosphatidic acid acyltransferase, and diacylglycerol acyltransferase 1 (DGAT1) in the endoplasmic reticulum (ER) [13,14]. DGAT is a rate-limiting enzyme that transfers an acyl group from acyl-CoA to the sn-3 position of diacylglycerol (DAG) to generate TAG [15,16]. Unsaturated fatty acids from the sn-2 position of phosphatidylcholine (PC) are also transferred to the sn-3 position of DAG by phospholipid:DAG acyltransferase (PDAT) to synthesize TAG [17]. Arabidopsis WRINKLED1 (AtWRI1) belongs to the APETALA2 (AP2) family of transcription factors and regulates genes involved in fatty acid synthesis [18]. Orthologs of AtWRI1 have also been found in Brassica, corn, camelina, soybean, and castor [19-23]. MYBs are the largest transcription factor family in plants. MYB proteins are divided into several classes, one of which comprises the R2R3-type MYBs, which are further divided into 23 subgroups [24]. R2R3-type MYB domain protein 96 (MYB96), which mediates ABA-related drought stress and disease resistance responses, regulates PDAT1 and DGAT1 expression [25]. Furthermore, seed-specific overexpression of AtMYB96 enhances the TAG content of Arabidopsis seeds [25]. In contrast, AtMYB89 is a negative regulator of oil accumulation [26]. Castor is grown in tropical or subtropical regions, and its seeds contain 30-50% oil [7]. Even though castor has a high seed oil content, little research has been conducted on castor transcription factors. In this study, two transcription factors, RcWRI1A and RcMYB306, were isolated from castor. It remained to be revealed whether overexpression of RcWRI1A and RcMYB306 effectively increases oil content in oil crops. Therefore, seed-specific overexpression of RcWRI1A and RcMYB306, individually and in combination, was conducted in camelina. We investigated whether heterologous expression of RcWRI1A and/or RcMYB306 enhances the metabolism of fatty acids and TAG in camelina seeds and ultimately improves oil content. Plant materials and transformation Camelina sativa cultivar Suneson was used for transformation. Wild-type and transgenic plant seeds were germinated on moist filter paper in a culture chamber at 16 °C under a 16 h light/8 h dark photoperiod. Plants were grown at 20 °C in a growth chamber under a 16 h light/8 h dark photoperiod. Agrobacterium strain GV3101 carrying the pBinGlyRed3 vector was used for transformation. Agrobacterium cells were inoculated into 500 mL LB medium and incubated at 28 °C. Cultured cells were centrifuged at 3000 × g for 10 min at 4 °C and resuspended in a solution containing 5% sucrose and 0.05% (v/v) Silwet L-77. Eighteen plants were transformed using the floral dipping method [10]. After floral dipping, the plants were wrapped in black plastic for a day to maintain humidity. This process was performed three times at intervals of 5-7 d. Transgenic seeds were identified by DsRed fluorescence using a green light source and a red filter.
Gene cloning and vector construction

Total RNA was extracted from developing castor seeds, and cDNA was synthesized using a cDNA synthesis kit (Takara, Kusatsu, Japan). We designed primers covering the full-length open reading frame (ORF) to amplify the coding sequences (CDSs) of RcWRI1A and RcMYB306 (Table S1). The PCR products were eluted using a PCR purification kit (Cosmo Genetech, Seoul, Korea) and cloned into the pGEM-T vector (Promega, Madison, WI, USA). The final vector, pBinGlyRed3, included the glycinin promoter for seed-specific expression and DsRed3 as a selection marker [27]. There is an EcoRI site on both sides of the CDS in the pGEM-T vector and a unique EcoRI site in pBinGlyRed3; therefore, RcWRI1A and RcMYB306 were cloned using EcoRI. To construct a co-expression vector containing both genes, RcMYB306 was cloned into a pKMS2 vector containing an oleosin promoter using the NotI sites flanking the insert in the pGEM-T vector. The oleosin promoter:RcMYB306:terminator cassette was then excised using the AscI restriction enzyme and inserted into the AscI site of the pBinGlyRed3-RcWRI1A vector to complete the co-expression construct.

Phylogenetic tree and protein sequence alignment

All protein sequences used for the phylogenetic tree and sequence alignment were obtained from the National Center for Biotechnology Information (NCBI) and The Arabidopsis Information Resource (TAIR). The phylogenetic tree was generated by the Neighbor-Joining method with 1000 bootstrap replications in the MEGA7 program. Protein sequence alignment was conducted in the DNAMAN program using the Clustal W method.

Fatty acid analysis

The fatty acid composition and total oil content were analyzed by gas chromatography (GC). Seven seeds were reacted with 500 µL toluene and 1 mL of 5% H2SO4 containing a pentadecanoic acid (15:0) internal standard (100 µg/mL) in an 85 °C water bath for 2 h. Next, 1 mL of 0.9% NaCl and 1 mL hexane were added, and the solution was mixed and centrifuged at 330 × g for 2 min to extract fatty acid methyl esters (FAMEs). The addition of 1 mL hexane, followed by centrifugation, was repeated three times, and the pooled extract was evaporated under nitrogen gas. The FAMEs were dissolved in 200 µL hexane and transferred into a GC vial for analysis on a GC-2030 (Shimadzu, Kyoto, Japan) instrument with a DB-23 column (30 m × 0.25 mm, 0.25 μm film; Agilent, Santa Clara, CA, USA). The oven temperature was increased from 190 °C to 230 °C at 5 °C per min.

RT-PCR and RT-qPCR analysis

Total RNA was isolated from developing seeds of camelina transgenic plants using the method described in reference [28], and cDNA was synthesized from 2 µg of total RNA using the PrimeScript 1st Strand cDNA Synthesis Kit (Takara, Kusatsu, Japan). Developing seeds were obtained during the oil accumulation period (18-24 DAF) based on reference [29]. Reverse transcription real-time PCR (RT-qPCR) was performed using TB Green Premix Ex Taq™ II (Takara, Kusatsu, Japan) reagent in a StepOnePlus Real-Time PCR System (Thermo Fisher Scientific, Waltham, MA, USA). The ΔCт value was determined by subtracting the Cт value of the endogenous control (ACTIN2) from that of the target gene. Relative expression levels were then calculated as 2^(−ΔΔCт), where ΔΔCт is the difference between the ΔCт values of the transgenic and control samples. Primers used in this study are listed in Table S1.
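As a minimal sketch of the two quantitative steps just described, the code below illustrates internal-standard FAME quantification and the 2^(−ΔΔCт) calculation. The function names, the assumed 100 µg of 15:0 internal standard (1 mL at 100 µg/mL), and all numeric values are hypothetical illustrations, not data or code from the study.

```python
# Hedged sketch: FAME quantification against the 15:0 internal standard and
# relative expression via the 2^(-ddCt) method. All values are illustrative.

def fame_ug_per_seed(peak_areas, istd_area, istd_ug=100.0, n_seeds=7):
    """Convert GC peak areas to ug FAME per seed using the 15:0 standard.

    peak_areas : dict mapping fatty acid -> integrated peak area
    istd_area  : peak area of the pentadecanoic acid (15:0) standard
    istd_ug    : assumed amount of standard added (1 mL at 100 ug/mL)
    """
    return {fa: area / istd_area * istd_ug / n_seeds
            for fa, area in peak_areas.items()}

def relative_expression(ct_target, ct_actin, ct_target_wt, ct_actin_wt):
    """2^(-ddCt) relative expression, normalized to ACTIN2 and to the WT."""
    d_ct = ct_target - ct_actin            # dCt in the transgenic sample
    d_ct_wt = ct_target_wt - ct_actin_wt   # dCt in the wild-type sample
    return 2.0 ** -(d_ct - d_ct_wt)

# Illustrative usage with made-up peak areas and Ct values
areas = {"16:0": 1.8e5, "18:1": 3.1e5, "18:2": 4.4e5, "18:3": 5.2e5}
print(fame_ug_per_seed(areas, istd_area=9.0e4))
print(relative_expression(22.1, 18.3, 26.8, 18.4))
```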
Isolation of castor WRI1A and MYB306 cDNA for increasing seed oil in camelina

AtWRI1 functions as a master regulator of the transcription of fatty acid and TAG biosynthesis genes [30]. The heterologous expression of various WRI1 genes increases the TAG content of seeds in several plants [18,19,31,32]. In addition, the overexpression of AtMYB96 using a seed-specific promoter increased the TAG content in Arabidopsis [25]. In this study, we expressed RcWRI1A and RcMYB306 individually or simultaneously to increase TAG levels in camelina seeds. We attempted to compare the efficiency of these transcription factors in increasing oil content. Furthermore, we investigated whether the co-expression of the two transcription factors could increase oil content more than a single transcription factor.

The castor genes with the highest amino acid sequence homology to AtWRI1 and AtMYB96 were selected from the castor genomic database (http://www.plantgdb.org) using the BLASTP program. In the protein database translated from the complete set of castor transcripts, the sequences with the highest homology to AtWRI1 and AtMYB96 were 30069.m000440 and 30138.m004082, with E-values of 6.0e−102 and 6.0e−91, respectively. BLAST and phylogenetic analyses were conducted on the transcripts isolated from the castor genomic database. As a result, 30069.m000440 and 30138.m004082 were identified as RcWRI1A and RcMYB306, respectively (Fig. 1). Castor WRI1 was confirmed to have a 14-3-3 binding motif, a conserved VYL motif, and two AP2 domains, like other WRI1 proteins (Fig. S1). The amino acid sequence of RcWRI1A showed 61% identity with AtWRI1. RcWRI1A has the same sequence as a spliced transcript of castor WRI1 [23]. A phylogenetic tree of RcMYB306 and the AtMYBs revealed that RcMYB306 belongs to subgroup 1 and is closest to AtMYB30 (Fig. S2). However, since Arabidopsis has no MYB306, a further phylogenetic tree was generated for RcMYB306, several plant MYB306 proteins, and the AtMYBs belonging to subgroup 1 (Fig. 1b). This confirmed that RcMYB306 is closer to the MYB306 proteins than to AtMYB30. In addition, the R2 and R3 domains of RcMYB306 are conserved, as in other MYB306 proteins (Fig. S3). Based on the alignment results and phylogenetic trees, the two genes were designated RcWRI1A and RcMYB306 in this study.

Selection of camelina transformants introduced with RcWRI1A, RcMYB306, and RcWRI1A + RcMYB306 vectors

To investigate whether RcWRI1A and RcMYB306 enhance TAG biosynthesis in seeds, these two genes were cloned into seed-specific expression vectors. Two vectors were constructed to express RcWRI1A and RcMYB306 under the seed-specific glycinin promoter (Fig. 2a). To co-express RcWRI1A and RcMYB306, the two genes were cloned into a single vector containing the glycinin and oleosin promoters (Fig. 2a).

Transgenic plants transformed with the GlyP:RcWRI1A, GlyP:RcMYB306, or GlyP:RcWRI1A + OleP:RcMYB306 vectors were selected from T1 seeds. T1 plants overexpressing (OE) RcWRI1A, RcMYB306, or RcWRI1A + RcMYB306 were designated W1-W10, M1-M10, and WM1-WM10, respectively. To detect only the transgene and not the endogenous genes, a forward primer targeting the 3′ end of the transgene and a reverse primer targeting the 5′ end of the terminator were designed (Fig. 2a). PCR of genomic DNA confirmed the expected 477 bp (RcWRI1A) and 456 bp (RcMYB306) bands in transgenic plants, whereas no bands were detected in the wild type (WT) (Fig. 2b).
Four independent plants were randomly selected from among the 10 T1 plants for each construct, and RNA was extracted from the developing seeds of these T1 transgenic plants. RT-PCR was then performed to assess RcWRI1A and RcMYB306 expression. This analysis confirmed that the two genes were stably expressed in each of the four transformed lines (Fig. 2c). The total FAME content in the seeds of the transgenic plants was analyzed. The total FAME per mg in T2 seeds was increased in all three types of transgenic plants compared with the WT, although the increase was not significant in some of the overexpression lines (Fig. 2d). Because the T1 generation was not homozygous, the W5, W8, M5, M7, WM7, and WM10 lines, which showed high transgene expression in developing seeds, were selected and advanced to the T2 generation. PCR was conducted on gDNA to confirm that the transgene was present in the T2 generation (Fig. S4). Two independent lines were then selected for the T3 generation.

Since the total fatty acid content increased in all three types of overexpression plants, the change in fatty acid content per seed was investigated relative to the WT (Fig. 3c). The RcWRI1A OE lines showed significantly increased 16:0, 18:2, and 18:3 contents in seed oil. In the RcMYB306 OE lines, the 18:1 and 18:3 contents increased significantly, and the 18:2 content increased slightly. In addition, 20:0 and 20:1, which did not change in the RcWRI1A OE lines, increased in the RcMYB306 OE lines. The RcWRI1A + RcMYB306 OE lines showed changes in fatty acid composition that differed somewhat from those of the RcWRI1A and RcMYB306 OE lines. The increases in 16:0, 18:2, and 18:3 were similar to those in the RcWRI1A OE lines, whereas 18:1 increased as in the RcMYB306 OE lines (Fig. 3c). In addition, 20:0 and 20:1 were decreased in the RcWRI1A + RcMYB306 OE lines compared with the RcMYB306 OE lines (Fig. 3c). These results show that seed fatty acid content was increased in the RcWRI1A and RcMYB306 OE lines. There was no significant difference in total FAME, but the FAME composition per seed in the RcWRI1A + RcMYB306 OE lines differed from that of the RcWRI1A or RcMYB306 OE lines (Fig. 3). Even though total FAME was increased in the RcWRI1A, RcMYB306, and RcWRI1A + RcMYB306 OE lines, seed size was not affected in any line (Fig. S5).

Regulation of fatty acid and target gene expression by RcWRI1A and RcMYB306 in camelina

The expression of fatty acid biosynthesis genes targeted by each transcription factor was compared among the RcWRI1A, RcMYB306, and RcWRI1A + RcMYB306 OE lines (Fig. 4). A comparative analysis was performed on one line of each OE type. The transgenes (RcWRI1A and RcMYB306) were overexpressed in a seed-specific manner in each OE line (Fig. 4a). In the developing seeds of the RcWRI1A OE line, the expression of camelina BCCP2, KASI, KASII, MAT, PDH, and PKP2, which have been reported as targets of AtWRI1, was upregulated as expected. The expression of FAD2, which encodes a fatty acid desaturase, was also increased. However, the expression of FAE1, DGAT1, and PDAT1 was downregulated (Fig. 4b, c). In the RcMYB306 OE line, the expression of KASI, KASII, MAT, PDH, and PKP2 was upregulated, similar to the RcWRI1A OE line, and the expression of FATA and FAD3, which were unchanged in the RcWRI1A OE line, was increased. Expression of FAE1 and DGAT1 was also upregulated (Fig. 4b, c).
The RcWRI1A + RcMYB306 OE lines showed a mixture of the expression patterns of the RcWRI1A and RcMYB306 OE lines. The MAT, PKP2, PDH, and PDAT1 genes showed greater increases than in the single OE lines. However, the expression of KASI, FAD2, and FAE1 was reduced relative to the overexpression of either single gene (Fig. 4b, c).

Discussion

Much research has been performed to improve the oil content of oil crops [33,34]. One strategy for increasing TAG in leaves is to promote fatty acid synthesis through transcription factors (push), transfer fatty acids into TAG through acyltransferases (pull), and maintain the oil body structure (protect); this is referred to as the 3P strategy [35,36]. In this study, push and pull were induced by the seed-specific overexpression of RcWRI1A and RcMYB306 in camelina seeds (Fig. 2). Seed-specific overexpression of RcWRI1A and RcMYB306 increased the fatty acid content of camelina T4 seeds by 14% and by 7-10%, respectively, with positive effects observed for individual overexpression of each gene (Fig. 3). The simultaneous expression of RcWRI1A and RcMYB306 did not further increase seed oil content but resulted in a fatty acid composition different from that of the RcWRI1A or RcMYB306 OE lines (Fig. 3).

The increased 20:1 content in the RcMYB306 OE lines is presumably due to increased FAE1 expression. This result is consistent with the increased FAE1 expression and 20:1 content in the AtMYB96 overexpression line [39]. In addition, AtMYB96 has been shown to upregulate the expression of DGAT1 and PDAT1 [25]. Expression of DGAT1 was also elevated in the RcMYB306 OE lines, but PDAT1 expression was not significantly different from the WT (Fig. 4b). When the seed fatty acid composition was examined in AtMYB96 OE lines, the 16:0, 18:0, 18:1, 18:2, and 18:3 contents all increased [25]. This is consistent with the increase in most fatty acid levels in the RcMYB306 OE lines (Fig. 3c). In particular, the 18:1 content did not increase in the RcWRI1A OE line but increased in the RcMYB306 OE line. This is likely due to the increased DGAT1 expression in the RcMYB306 OE line, which results in an increased content of 18:1, a DGAT1 substrate [40]. In addition, the increase in 18:3 content may be due to increased FAD3 expression. However, the RcMYB306 OE line also showed increased expression of KASI, KASII, MAT, PKP2, PDH, and FATA, unlike the AtMYB96 OE lines (Fig. 4b).

MYB306 has been studied in several plants [41-43]. Paeonia suffruticosa MYB306 (PsMYB306) negatively regulates bud dormancy by directly binding to the promoter of PsNCED3 and activating it [41]. NtMYB306a is highly expressed in trichomes and is involved in wax alkane biosynthesis in leaves [42]. Apple MdMYB306-like interacts with MdMYB17 and MdbHLH33 and inhibits anthocyanin synthesis [43]. Since MYB306 performs various functions in different crops, further research on RcMYB306 beyond its control of oil content appears warranted.

The RcWRI1A + RcMYB306 OE line did not differ significantly from the RcWRI1A and RcMYB306 OE lines in the increase in total fatty acid content (Fig. 3). This means that, among the fatty acid biosynthesis genes targeted by each transcription factor, the expression of some genes increased more than in the single OE lines, whereas the expression of others decreased more (Fig. 4).
This suggests that co-overexpression of the exogenous transcription factors RcWRI1A and RcMYB306 causes an interaction between them, leading to the upregulation of some target genes and the downregulation of others in camelina fatty acid metabolism (Fig. 4).

In conclusion, we demonstrated the potential of RcWRI1A and RcMYB306, when expressed individually, to enhance the fatty acid content of camelina seeds. The simultaneous expression of both transcription factors changed the fatty acid composition but did not increase seed oil content relative to the RcWRI1A or RcMYB306 OE lines. Complex genetic interactions, metabolic imbalances, and regulatory interference may have contributed to this outcome. Therefore, further investigations are required to address these challenges and optimize strategies for increasing camelina seed oil production.

Fig. 3 Fatty acid analysis of T3 transgenic plants. (a) Total FAME content (µg/mg) in T4 seeds and (b) total FAME content (µg/seed) in T4 seeds. (c) FAME per seed in T4 seeds compared with the WT. Error bars are SD of the mean. Statistical significance is indicated by an asterisk and was evaluated using one-way ANOVA with Tukey's multiple comparison test (*p < 0.05, **p < 0.01, ***p < 0.001).

Fig. 4 RT-qPCR analysis of developing seeds of transgenic plants compared with the wild type. (a) Expression patterns of RcWRI1A and RcMYB306 in transgenic plants. (b) Expression patterns of fatty acid biosynthesis genes and acyltransferase genes. (c) Upregulated and downregulated genes are indicated in red and blue, respectively, in the RcWRI1A OE, RcMYB306 OE, and RcWRI1A + RcMYB306 lines. Error bars are SD of the mean. Statistical significance is indicated by different letters and was evaluated using one-way ANOVA with Tukey's multiple comparison test (p < 0.05).
4,608.8
2024-08-24T00:00:00.000
[ "Agricultural and Food Sciences", "Materials Science" ]
Lexical development of noun and predicate comprehension and production in isiZulu

This study investigates the development of noun and predicate comprehension and production in isiZulu-speaking children between the ages of 25 and 36 months. It examines lexical comprehension and production in isiZulu using an Italian-developed and validated vocabulary assessment tool, the Picture Naming Game (PiNG), developed by Bello, Giannantoni, Pettenati, Stefanini and Caselli (2012). The PiNG tool includes four subtests, one each for noun comprehension (NC), noun production (NP), predicate comprehension (PC), and predicate production (PP). Children are shown pictures of lexical items and asked to demonstrate comprehension and to produce certain lexical items. After adaptation to the South African context, the adapted version of PiNG was used to assess the lexical development of isiZulu directly, with three main objectives: (1) to test the efficiency of the adaptation of a vocabulary tool for measuring isiZulu comprehension and production development; (2) to test, for a lesser-studied language, the finding of many cross-linguistic comparisons that both comprehension and production performance increase with age; and (3) to present our findings on the comprehension and production of the linguistic categories of nouns and predicates. An analysis of the results reported in this study shows an age effect throughout the entire sample. Across all the age groups, the comprehension subtests for nouns and predicates were performed better than the production subtests. With regard to lexical items, the children's responses showed the influence of various factors, including the late acquisition of items, possible problems with the stimuli presented to them, and the input received by the children from their home environment.

Introduction

Research on the emergence of early lexical comprehension and production is important both for enhancing our understanding of language acquisition and for diagnostic purposes. In the South African context, improving numeracy and literacy skills remains a huge challenge in the public education sector (Department of Basic Education, 2014). Language policies favour L1 learning in the foundation years (ages 6-9 years), yet psycholinguistic measures of children's competencies are not included in either educational or political initiatives. Existing literature is very limited regarding the cognitive performance of children who speak a South African indigenous language (linguistically classified as a Bantu language) in their early language acquisition. A further challenge is finding locally appropriate standardised tools that are relatively easy to use in any given language. It is heartening, however, to note that several researchers in speech and language therapy have adapted international standardised tests to the local context and are starting to include crucial information in the measurement of children's typical and atypical development. Having normative data allows for interventions in delayed or impaired language as well as effective language teaching and learning (Kathard et al., 2011; Pascoe & Smouse, 2012).
Although these studies have greatly advanced our knowledge of less-studied languages, no one has yet presented comprehensive linguistic research on the developmental aspects of the lexical inventory of the Bantu-speaking child comparable to that available for children who speak western languages, such as English, French, and Italian. Researchers tend to focus on particular aspects within linguistic theory (Berko Gleason & Bernstein Ratner, 1993, pp. 326-327). To date, the available literature on early language acquisition has confirmed, through the study of many languages from different language typologies, that certain stages of language acquisition are universal (Harley, 2014). It is, therefore, widely accepted that all children begin their trajectory towards language through comprehension, while their motoric and cognitive apparatus lags in terms of language production. This study investigates the development of noun and predicate comprehension and production in isiZulu-speaking children between the ages of 25 and 36 months. The study is part of an international research collaboration that aims to investigate speech and co-speech gesture production and comprehension development in children. It compares the lexical comprehension and production of two Romance languages, Italian and French, and two South African Bantu languages, isiZulu and Sesotho, using an Italian-developed and validated vocabulary assessment tool. This paper presents the preliminary findings on the lexical development of isiZulu speakers.
The lexicon: Nouns and predicates

We use words to communicate about everything related to our physical environment, including events, activities, people, objects, places, relations, properties, and states of being (Clark, 1995, p. 1). The words stored by language users are drawn from the lexicon, which can be understood as our mental dictionary. Bates and Goodman (1997) found that, as children transition from the first-word stage to sentences and extended discourse and gain productive control over the basic morphosyntactic structures of their native language, the emergence and elaboration of grammar are highly dependent upon vocabulary size. The lexicon is therefore linked to phonology, comprehension and production, and grammar (Gentner, 1982). The child's lexicon depends on the development of meaning construction and categorisation skills (Markman, 1991). Measuring the comprehension and production of nouns and predicates is increasingly used as an important diagnostic and prognostic tool in atypical populations (Stefanini, Bello, Caselli, Iverson & Volterra, 2009). As such, the MacArthur Communicative Development Inventory (CDI) was initially developed to study the relationship between the lexical and grammatical development of English-speaking children (Bates, Dale & Thal, 1995; Fenson et al., 1994; Fenson, Marchman, Thal, Dale & Reznick, 2007) but has since been adapted for use in more than 62 languages. It is well documented that typically developing children are able, by 3.5 years of age, to produce accurately most of the basic morphosyntactic structures of their languages, such as relative clauses, the passive construction, and other complex forms (Bates & Goodman, 1997; Demuth, 2003). The present study looks at one component of the lexicon: the comprehension and production of the noun and predicate categories of words in isiZulu. Nouns and predicates are characterised by differences in their perceptual and cognitive complexity (Davidoff & Masterson, 1996; Gentner, 1982; Gentner & Boroditsky, 2001), which leads to distinct mental representations (Slobin, 2008). In their cross-linguistic study, Caselli et al. (1995) highlight the noun-before-verb sequence in early acquisition, as proposed by the 'Whole Object Constraint' (Markman, 1991). Gentner (1982) reported the late appearance of verbs, whose structure is more complex than the underlying semantic structure of nouns. O'Grady (1987) pointed out that nouns are used as 'arguments' or 'primaries' that refer to entities or a class of entities, whereas verbs and adjectives are often used as predicates or 'secondaries' (Caselli et al., 1995, p. 162). This means that, for a child to produce verbs and adjectives successfully, nominal arguments have to be in place; the acquisition of verbs will therefore be affected by the child's mastery of nouns. These theoretical arguments, however, have recently been challenged by evidence from language groups in which children master verbs at a faster rate, such as Mandarin (Cheng, 1994) and Korean (Gopnik & Choi, 1995). The literature does, though, confirm that verbs, adjectives, and function words appear later in early child acquisition (Caselli et al., 1995). This study therefore seeks to document the development of the noun and predicate in isiZulu.

Bantu language acquisition

Bantu languages are typologically similar and share several typical grammatical features.
IsiZulu is a South Eastern Bantu language of the Nguni cluster spoken primarily in South Africa (especially the south-eastern areas of Kwa-Zulu Natal), but it also has speakers in neighbouring countries. IsiZulu is highly mutually intelligible with other Nguni languages, such as isiNdebele, isiXhosa, and siSwati. In 2011, South Africans citing isiZulu as their home language numbered 11.5 million, or 22.7% of the population, making it the language with the highest number of speakers (Census, 2011). IsiZulu is a Subject-Verb-Object (SVO) language with a high number (about 15) of noun classes, which trigger agreement on verbs, adjectives, and other elements. In other words, 'nominal and verbal modifiers follow the noun and verb respectively, and grammatical morphology is prefixed to both nouns and verbs' (Demuth & Suzman, 1997, p. 2). The subject can be dropped, so isiZulu is also a pro-drop language (Gxilishe, Villiers & Villiers, 2007; Kunene, 2010; Suzman, 1985, 1991). It has a very rich system of tense and aspect, expressed in a variety of simple tenses with optional aspectual affixes, compound tenses allowing composition of many of the simple tenses, and a large number of auxiliary verbs (Buell, 2005, p. 6). Demuth (2003, pp. 1-4) gives a thorough overview of South African Bantu language acquisition studies of siSwati, isiZulu, isiXhosa, Setswana, and Sesotho. Most studies have looked at the noun class prefix and nominal agreement, consonants and clicks, the acquisition of word order, relative clauses, and morpho-phonology (for a review of studies of Bantu language acquisition, see Demuth, 2003; for a contemporary overview of studies on South African Bantu languages, see Gxilishe, 2008; Pascoe & Smouse, 2012). From the existing literature, we know that child speakers of isiZulu, Sesotho, siSwati, and isiXhosa have fully acquired the nominal class system by the age of three years. We also know, from a study of isiXhosa-speaking children (Gxilishe et al., 2007), that plural agreement is produced better than singular subject agreement. Despite the numerous studies on the isiZulu verb (or verbs in related languages), we have not come across literature that documents the acquisition of verbs, adjectives, and adverbs, or of the noun and its morphology. Studies on South African Bantu languages are certainly increasing, but as yet no study has looked at simultaneous comprehension and production during lexical development.

Aims and objectives

We focus on the lexical development of comprehension and production of nouns and predicates from a speech perspective. Specifically, this article explores the lexical development of isiZulu using the adapted assessment tool, with three main objectives. These are to:

• Test the effectiveness of the adaptation of a vocabulary tool to measure isiZulu comprehension and production development.
• Test the universal finding that both comprehension and production performance increase with age for a less-studied language, isiZulu.
• Present our findings on the comprehension and production of the linguistic categories of nouns and predicates.

PiNG assessment tool

Early childhood development research shows a strong interdependence between vocabulary, phonology, and grammar in both typical and atypical populations (Marchman & Thal, 2005; Stoel-Gammon, 2011).
If children who show delays in their expressive vocabulary repertoires can be identified in time, this could assist in early intervention for children at high risk of language impairment, as shown in the studies by Desmarais, Sylvestre, Meyer, Bairati and Rouleau (2008) and by Ellis and Thal (2008). Constructed and validated in Italy, the Picture Naming Game (PiNG) was specifically developed to assess lexical production and comprehension in children between the ages of 19 and 37 months, covering both nouns and predicates, and is based on the Italian MB-CDI. Previous studies have shown that it is extremely relevant to investigate the relationship between vocabulary comprehension and production as well as between nouns and predicates, as these skills and their relationships are indicators of both the level of language development and conceptual organisation. In general, studies using the PiNG tool with Italian children showed that the comprehension subtests were easier than the production subtests, thus allowing for their administration to younger children and resulting in fewer errors. Similarly, children found the noun subtests easier than the predicate subtests. Accordingly, lower variability was found in vocabulary comprehension compared with production, and in nouns compared with predicates, for hearing children (Bello et al., 2012; Rinaldi, Caselli, Di Renzo, Gulli & Volterra, 2014). The PiNG tool consists of two sets of colour pictures and contains two tasks, comprehension and production, which in turn contain four subtests. The first set has 22 images (20 test pictures and two pre-test pictures) of objects and tools, animals, food, and clothing (e.g. a fork, a lion, bananas, gloves) and is used to evaluate the comprehension and production of nouns in the noun comprehension (NC) and noun production (NP) subtests, respectively. The second set contains 22 images (20 test pictures and two pre-test pictures) showing actions, location adverbs, and/or adjectives (for example, to push, close by or far away, big or small) and is used to evaluate the comprehension and production of predicates in the predicate comprehension (PC) and predicate production (PP) subtests, respectively. The original PiNG test for Italian children was adapted from the Italian MB-CDI, and the items had different levels of difficulty: 'easy', 'moderately easy', and 'difficult', based on the Italian normative sample (Bello et al., 2012; Pettenati, Sekine, Congestri & Volterra, 2012; Pettenati, Stefanini & Volterra, 2009; Stefanini et al., 2009; Stefanini, Recchia & Caselli, 2008). In this paper, we report on the adaptation of PiNG to isiZulu. The PiNG tool has already been successfully adapted to other languages and cultures. For example, a study by Pettenati and colleagues provided the first occasion for a cross-cultural comparison of gesture and vocabulary production and comprehension in 22 Italian and 22 Japanese children between 25 and 37 months of age. The PiNG tool was also used to assess vocabulary production and comprehension in toddlers in a study carried out in Australia by Hall, Rumney, Holler and Kidd (2013). The Australian study focused on a group of 50 typically developing children between 18 and 31 months of age, investigating the interrelationship between play, gesture use, and spoken language development.
Stage 1: Translation of the PiNG lexicon into the target languages

The set of nouns (20 target nouns in the comprehension task, 20 target nouns in the production task, and 2 × 2 = 4 lexical items for the pre-tests) and the set of predicates (20 target verbs and/or adverbs and/or adjectives in the comprehension task, 20 in the production task, and 2 × 2 = 4 pre-test items) were translated into isiZulu by the researcher, who is a native speaker of isiZulu and a linguist, together with two isiZulu-speaking research assistants, who are also linguists. The translation was further tested in a pilot study with native Zulu-speaking adults for validation (see Stage 2). Of particular interest in the international collaborative comparative study is the different language typology of the Romance languages, which are analytic, and the Bantu languages, which are agglutinative. In the initial adaptation, a conscious decision was made to keep the protocol questions as close as possible to the original Italian version; that is, questions were to be 'neutral' so as not to give a clue to the answer. For example, the Italian question translates as 'show me running', which provides the participant with no clue to the referent in the comprehension task; a participant could choose any item he or she deemed fit. However, because of the noun class system, agreement concords, and morphosyntactic structure of Bantu languages, the question must carry the relevant noun class and subject concord, which may give a clue to the item. For instance, in isiZulu, a semantic translation of the above example would read ngikhombise ogijimayo (show me the person that is running): this would not allow a speaker to select a different card in the task, because the question requires the speaker to select a card showing a person doing something, for example, running.

Stage 2: Pilot study with adults

Twenty-two adults (11 males and 11 females) participated in the Zulu adult pilot study. Participants were drawn from the predominantly isiZulu-speaking population of Kwa-Zulu Natal, the south-east region of South Africa. Participants were university students from various communities in the Kwa-Zulu Natal area, 60% of whom were from the Empangeni area. The other 40% were from the surrounding areas of Pongola, Harrismith, Durban, Eshowe, and Pietermaritzburg. Applying a neutral questioning style did not work: participants would reformulate the question or stop the interviewer to ask for more clarity. When the questions were amended, participants answered with no difficulty. The inclusion of the class prefixes did not affect the results but rather assisted the participants in understanding what was requested of them. This was indicated by the fact that, once the correct class prefixes were used, the participants would either indicate that they did not recognise an item or give an answer if they did. With the neutral questions, the participant would simply halt the interview, seek clarification, and personally supply the class prefixes. When the interviewer asked why the participants reformulated the utterances, all participants stated that the 'neutral' utterances were grammatically correct but ambiguous and confusing. It is interesting to note that all 22 participants corrected the utterances.
Agreement between participants was 95% for the comprehension subset and 86% for the production subset. Four items under the NP subset produced either no responses or 'I do not know' answers, referring specifically to bidet, radiator, penguin, and seal. Under the PP subset, three predicate task words produced a low frequency of correct target words (spinning, heavy, far apart).

Stage 3: Modification of the material

From the adult pilot results, it became clear that the isiZulu version of PiNG needed further adaptation before the pilot study with children was initiated. The adults seldom produced isiZulu words for 'seal' and 'penguin', so these items were changed to 'snail' and 'crocodile', respectively. As both 'radiator' and 'bidet' are foreign cultural objects, these two items were replaced by 'heater' and 'toilet'. Some pictures were culturally specific, such as the picture of the 'roof', which showed a European type of roof; however, in order to allow a systematic comparison with the other languages in the four-language project, some items were retained for future adaptation.

Stage 4: Pilot study with children

After the changes to the above-mentioned picture items, a pilot study was conducted with 15 Zulu children. The group included five children aged 25 months (±1 month), five children aged 30 months (±1 month), and five children aged 36 months (±1 month). This was done in order to test the adaptation of PiNG on a small sample in case further adaptation was needed before proceeding to the main study. Participants were drawn from Kwa-Zulu Natal, the same area where the adult pilot study was conducted. Data was collected from the urban townships of Empangeni and Ngwelezane on the northern coast of Kwa-Zulu Natal. The principals and caregivers were very helpful in providing the researchers with the children's clinic cards. These vaccination cards aided the researchers in selecting participants for the appropriate age cohorts. The files provided by the teachers also gave the researchers additional information, including, for example, whether a child had been born prematurely or had a learning disability and needed to be excluded from the selected participants. We worked with nine crèches to ensure that our three age groups were exact. Many crèches could not form part of our participant sample because their children were bilingual and would alternate between naming items in English and isiZulu. We finally chose two schools from the city centre of Empangeni and four schools in the Ngwelezane township. Administration of the PiNG tool began with a familiarisation phase that involved playing various card games, counting games, and naming games with the children. The researchers also played with the children in the school playgrounds on the swings and slides. Once the researchers felt that the children were comfortable enough, they asked the children to play a game with them in front of the camera.

Main study

The main isiZulu dataset was collected in Soweto in the Gauteng province. The move from KZN was purely logistical, as the researchers were all based in Gauteng. Monolingual isiZulu-speaking children were chosen with the help of their caregivers. Children's vaccination cards were examined to exclude premature babies or those with any recorded pathologies. All crèches require clinic or vaccination cards in order to enrol a child. The caregivers also assisted in selecting children who, they said, showed no language delays in comparison with their peers.
All selected children had parental consent (see the Ethics section). Forty-nine children from four neighbouring crèches participated in the study. Nine children were excluded from the data sample for various reasons: some did not complete the two tasks, some were bilingual and code-switched regularly, some spoke too softly for the camcorder to record sound, and one child was sleepy and had to go for a nap. For this study, 36 participants were chosen, 12 per age cohort, with consideration of gender balance. There were a total of 19 females and 17 males across the different age cohorts (Table 1).

Procedure

The procedure of this study followed those of previous studies (Bello et al., 2012; Pettenati et al., 2009, 2012; Stefanini et al., 2008, 2009). The tool began with the comprehension-task picture, followed immediately by the production-eliciting picture of the noun sets. There was a short break after the noun items, and then the predicate items were elicited, again starting with the comprehension-task picture, followed by the production-eliciting picture of that set. The following section details the procedure followed in our study. After the familiarisation period, during which we played different card games with the children, all children were tested individually at their schools. Three pictures per item were presented to each child on a small table. The first part of the task was comprehension, in which a child was asked 'Where is the cat? Show me the cat' for a noun comprehension item or 'What is this child doing? What is this one doing?' for the PC item. The second part of each subset was production, in which the child was asked 'What is this?' for the NP subset or 'What is he doing?' for the PP subset. The third card was a distractor to eliminate choice by luck. A total of 22 cards were presented for the noun subtest, and another 22 cards for the predicate subtest. The first two sets were pre-test cards to ensure that the child understood what was expected. The data was based on the remaining 20-card set per subtest. For the comprehension task, only one prompt was considered. For the production task, if the child struggled to produce the correct item, a second prompt was used. All elicitations were filmed for later data coding and analysis. Two research assistants and the researcher of this study collected the data. All researchers are first-language, native speakers of isiZulu and linguists with fieldwork experience in collecting data from children.

Coding and transcription of the data

The coding system was adapted from previous studies (Bello et al., 2012; Pettenati et al., 2012). All the children's responses were coded later from the video data with an annotation system designed for coding gestures as well as the wordings of the child and the experimenter. All tasks administered to the children were coded in ELAN, a linguistic annotation tool created by the Max Planck Institute (ELAN, n.d.; Wittenburg, Brugman, Russel, Klassmann & Sloetjies, 2006). For the comprehension subtests (NC and PC), if the child indicated (by pointing, showing, or verbalising) the photograph corresponding to the item named by the adult researcher, the answer was considered correct. If the child selected a non-target photograph or did not respond at all, the response was coded as incorrect or as no response, respectively.
Similarly, in the production subtests, if the child produced the target lexical item, the response was coded as correct. If the child produced a non-target item or did not respond at all, the response was coded as incorrect or as no response, respectively. For some photographs, more than one answer was accepted as correct; for instance, for the 'diaper' item, some children called it ipampers, referring to a brand of disposable nappies, or ikhimbi, also referring to a brand of disposable nappies. For the production subtest, children were prompted twice if their initial response was incorrect; if a correct response was produced after the second prompt, the answer was considered correct. Synonymous items were considered correct, for instance ilorri (a lorry) and itruck (a truck) for the truck item. Responses that had a semantic relationship to the item depicted were coded as NTS (no target, but semantically related) to measure whether the concept was in place even though the production was not successful. Incorrect responses occurred where the production was neither the target response nor semantically related, for instance isidudu (motorbike) for 'truck'.

Validation and reliability

Three trained native isiZulu-speaking research assistants and the researcher independently coded the verbal transcription, that is, the orthographic transcription made directly from the film footage. Two further research assistants, who are also trained linguists, coded the classification of the speech responses as well as those of gesture. Disagreements were resolved through discussion. After the annotation phase was completed, all data was exported to Excel for a final verification (internal consistency of the coding) and statistical analysis.

Ethical considerations

Ethical considerations guided the pilot study as well as the main study. All children who participated were recruited on a voluntary basis after their caregivers signed an informed consent form, which was provided in their language, and after they themselves agreed to participate at the start of the task ('Nouns' or 'Predicates'). Parents or members of the crèche were welcome in the room during the administration of the tool. The tasks were interrupted or ended if a child verbalised a desire to stop, or expressed discomfort by crying and/or withdrawing. Children's identities were kept confidential, and data obtained from this project were not disclosed to any third party. The Wits HREC Non-Medical Ethics Committee granted ethical clearance for the study (protocol number H13/08/43).

Results

Children's responses were analysed according to the coding criteria listed in the method section. For our first objective, we analysed the comprehension and production tasks across the three age groups to test whether the PiNG assessment tool was effective in detecting the development of comprehension and production in isiZulu. Our second objective overlapped with our first, and so our first finding addresses both objectives. For the comprehension and production tasks, an analysis of variance (ANOVA) was run with age group as the independent variable. The correct answers for the comprehension and production task items are presented in Table 2. A significant difference across the age groups emerged for the comprehension task (F(2,69) = 3.143, p < 0.05). Post-hoc Bonferroni analysis showed that the effect of age was significant at the 0.05 level across the entire sample, but was not significant between the 25-month-old and the 30-month-old groups. Similarly, for the production task, there was a significant age effect across the whole sample (F(2,69) = 6.567, p < 0.05). Post-hoc Bonferroni analysis showed that the difference was not significant between the 25-month-old and the 30-month-old groups; however, the 36-month group performed significantly better than the two younger groups, at p < 0.003 relative to the 25-month group and p < 0.035 relative to the 30-month group.
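The analysis just described (a one-way ANOVA by age group followed by corrected pairwise comparisons) can be sketched in a few lines; the sketch below uses Bonferroni-corrected pairwise t-tests, and the correct-response scores are invented placeholders, not the study's data.

```python
# Hedged sketch: one-way ANOVA over three age cohorts with Bonferroni-corrected
# pairwise t-tests, mirroring the analysis described above. Data are invented.
from itertools import combinations
from scipy import stats

scores = {  # correct-response counts per child, hypothetical values
    "25mo": [8, 9, 11, 10, 7, 12, 9, 10, 11, 8, 9, 10],
    "30mo": [10, 11, 9, 12, 11, 13, 10, 12, 11, 10, 12, 11],
    "36mo": [13, 14, 12, 15, 13, 16, 14, 13, 15, 14, 12, 15],
}

f_stat, p_val = stats.f_oneway(*scores.values())
print(f"ANOVA: F = {f_stat:.3f}, p = {p_val:.4f}")

pairs = list(combinations(scores, 2))
alpha_corrected = 0.05 / len(pairs)  # Bonferroni correction
for g1, g2 in pairs:
    t, p = stats.ttest_ind(scores[g1], scores[g2])
    flag = "significant" if p < alpha_corrected else "n.s."
    print(f"{g1} vs {g2}: t = {t:.2f}, p = {p:.4f} ({flag})")
```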
Lexical item composition

In order to test the comprehension and production tasks per lexical category, we looked at the performance on nouns and predicates across the age groups and performed an ANOVA between groups per lexical subset. The comprehension subtests for nouns and predicates were performed better than the production subtests across the age groups. As shown in Table 3 and Table 4, the Zulu children performed better at labelling the correct items for nouns (F(2,33) = 3.70, p < 0.04) than for predicates (F(2,33) = 0.94, p = 0.40), with post-hoc Bonferroni tests confirming a significant difference between the 36-month group and the 25-month group at p < 0.03. There was no significant difference between the 25-month and the 30-month groups, nor between the 30-month and the 36-month groups, for the noun subtest. For the PC subtest, the post-hoc Bonferroni test confirmed that age had no significant effect, although there was a developmental trend across the age groups. For the production of nouns and predicates, we noted a similar pattern in that children performed better at labelling the correct items in the NP subtest (F(2,33) = 7.4, p < 0.002) than in the PP subtest (F(2,33) = 0.96, p = 0.393). Post-hoc Bonferroni tests confirmed a significant difference between the 36-month group and both the 25-month group and the 30-month group, at p < 0.003 and p < 0.02, respectively. For the PP subtest, the post-hoc Bonferroni test confirmed that age had no significant effect across the three age groups: the 36-month children had the same chance of correctly labelling PP items as the children in the 25-month and 30-month groups, despite the developmental trend observed in Table 3 and Table 4.

Percentage of correct responses per item

For our third objective, we present our findings on the performance of the lexical items. In order to better understand performance on the comprehension and production tasks, as well as on the noun and predicate subtests, we ranked the items in terms of correctness for the total sample, as shown in Table 5.

Comprehension task

Under the noun comprehension subtest, three items were perceived correctly by all 36 children: 'doll', 'hat', and 'boots' had a 100% response rate across all ages. Five items had less than 50% success ('mountain', 'snail', 'elephant', 'bib', and 'hammer'), meaning that fewer than 18 children across the three age groups responded correctly to these 'difficult' items. The photograph of the mountain showed a rising peak of the European Alps, a type of geographical feature that is not commonly seen in South Africa. It was interesting to note that the children had difficulty identifying 'snail' and 'elephant' but managed easily to identify domestic animals such as 'cow'.
The 'bib' and 'hammer' were also not easily identified: the children gave answers that focused more on the baby wearing the 'bib', and for the 'hammer' they either did not respond at all or said angazi (I do not know). Some of the responses for 'walk' were uhamba kuphi (walking where?), as the picture showed a young boy walking along the passage of a house. The smaller children would focus either on the clothes the child was wearing or on the distractor card, which showed a young boy playing with toys. This performance did not show a developmental trend, which meant that the 36-month-old children were just as likely not to identify 'the walking' from this picture. The item 'embrace' also produced some interesting comments across the age groups, similar to 'kiss' in the PP subtest. The older the children, the more they avoided discussing intimacy, with responses like bayaganga (they are being naughty), or they would simply avoid looking at the picture and focus on other pictures. The adjectives 'short', 'full', 'outside', and 'behind' were very difficult items that showed a distinct developmental pattern, with responses that were semantically related: for 'short', a child would say yincane (it is small), referring to a short pencil in the picture, even though the prompt only required the child to point to the object, which does not necessitate a verbal response. The percentages of correct answers per item (from Table 5) were as follows:

Predicate comprehension subtest: to pull 94; to sweep 92; to comb 92; to drink 89; to bite 86; dirty 83; to swing 81; to run 81; to scramble up 75; big 69; to build 69; high 56; to greet 56; to walk 47; behind 44; close 39; full 31; outside 31; to embrace 31.

Predicate production subtest: to push 94; to drive 89; to eat 89; to phone 83; to wash 72; to play 69; to kiss 67; to open 64; to fall 64; to laugh 61; to swim 53; small 50; clean 28; to turn 17; empty 6; in front of 6; heavy 3; far 3; inside 0; long 0.

Production task

Overall, production items scored lower than comprehension items, as seen in Table 5. Under the production subtests, for the NP, the items 'comb' and 'socks' had the highest response rate at 92%, with children from all age groups correctly labelling these items. Ten items had a success rate of less than 50%: 'gloves', 'diaper', 'picture', 'truck', 'book', 'beach', 'lion', 'crocodile', 'roof', and 'flags'. The 36-month-old group was more familiar with the leather gloves depicted in the picture, with most children saying izandla/into yezandla zikamama (hands or something for my mum's hands). Glove sizes in South African shops start with children of about 4 to 5 years of age, and mittens are not commonly found. The older children labelled the item 'diaper' correctly, whereas the younger children tended to call it ipenti (a panty). The 'picture' item was also more familiar to the 36-month-old group, even though most children across the age groups identified it as a TV, because the picture looked like a flat-screen TV. The picture for the 'beach' item depicted a lone beach with a blue sky, the ocean, and a strip of sand; it also garnered ambiguous responses from the adults in the pilot study. The item 'truck' produced the semantically related 'a car or a bus or a big car'. This item did not display any developmental trajectory, as 36-month-old children were equally likely to produce 'car' rather than the target word itruck or ilori. The 'book' item mostly produced ibhayibheli (a bible) across age groups as well.
The 'beach' item produced equally random responses, with children focusing either on the water in the photo or on the sky. The wild animals 'lion' and 'crocodile' produced very interesting responses, such as ikgokgo (a monster) or onomatopoeic sounds illustrating that 'this is a scary thing that will eat me'. A few children avoided even looking at the picture or quickly pushed the card back to the researcher. The 'roof' item depicted a European type of roof and a portion of a house with trees surrounding it. Most responses were either indlu (house) or isihlahla (a tree). This picture did not reveal an age pattern because the responses were random. The item 'flags' was extremely difficult, with no child giving a correct response. Most responses were iAfrica (Africa) or yiduku (a head scarf), and these semantically related responses increased with age. In the PP subtest, the following seven items were difficult for the children: 'to turn', 'empty', 'in front of', 'heavy', 'far', 'inside', and 'long'. The item for 'to turn' depicted a group of children on a merry-go-round. Children's responses included bayadlala/bahleli (they are playing or they are sitting). This item did not reveal a developmental trajectory because the responses were random across all age groups. For the adjectives and adverbs 'empty', 'in front of', 'far', 'inside', and 'long', the responses did not show correct labelling at any age. These items nevertheless showed a developmental pattern in that the responses indicated that the children were acquiring these concepts with age. Interestingly, five of these items also appeared in the corresponding set of the PC subtest, which shows consistency in the production of these categories. All the children had difficulty perceiving the item 'heavy'. This item depicted a young child grimacing while carrying a torn brown box; children's responses focused on the child or on the box being torn.

Discussion

Our study began with an Italian picture-naming assessment tool being adapted to a Bantu language, isiZulu. The tool is designed to observe directly the lexical composition of vocabulary in two related tasks: comprehension and production. To our knowledge, most studies on isiZulu or related Bantu languages have directly observed either comprehension or production, but never both at the same time. Several pilot studies with both adult and child populations enabled us to alter obvious elements of cultural bias. Although some other problematic items remained, we decided to preserve our initial goal by keeping as many items as necessary for our systematic comparison with two Romance languages, Italian and French, as well as another South African Bantu language, Sesotho. Certain items, such as the picture of a 'roof', did not depict a 'roof' as seen by many South African children. Some items, such as 'to turn', which depicted a merry-go-round, produced unexpected results: one would expect most children to have been exposed to a merry-go-round, as these are commonly found in parks, but instead the children focused more on the people in the picture than on what they were doing. Despite the cultural differences that may have stemmed from the images of our stimuli, we note that our findings confirm what has long been documented in the literature: comprehension comes before production.
In a related cross-linguistic comparison, Japanese children showed lower lexical production than Italian children, leading the authors to conclude that cultural factors could influence the design of a test originally developed for Italian children. In terms of development, our results show how age affects both comprehension and production: 36-month-old children performed better than 30-month-old children, who in turn performed better than 25-month-old children. Our statistical analysis did not discern a significant difference between the 25-month-old group and the 30-month-old group for comprehension, but it detected a significant difference between the 36-month group and the 25-month and 30-month groups in terms of production. This was not surprising as, in the first study by Bello et al. (2012), noun comprehension was found to increase between 19 and 30 months, followed by a plateau. The Zulu children also showed this plateau, which accounts for the small difference between the 25-month and 30-month groups. PC showed a similar trend, even though the scores were lower than those for noun comprehension. It is, however, very interesting to note that there was no significant difference in PC among the groups, which may mean that the predicates in this task, with the exception of adverbs and adjectives, are mastered earlier. Gxilishe et al. (2007) found that isiXhosa-speaking children between 24 and 30 months correctly employed subject agreement markers on different verb roots. In terms of production, our findings showed an overall effect of age, which means that production does indeed lag behind comprehension. Moreover, the larger number of culturally foreign pictures in the noun comprehension subtest suggests that it may be necessary to adapt the assessment tool further to evaluate the isiZulu lexicon more effectively. Children used a semantic description strategy to try to explain unfamiliar items, which shows that, though they may have the concept, the lexical item is not 'concrete' enough for them to relate it to their physical environment. If perception is difficult, it is harder to retrieve the semantic representation from the lexicon and, as such, retrieval will be impaired (Harley, 2014). The production of the noun category showed an age effect, but the predicate category did not show a similar difference. The higher number of adverbs and adjectives in the PP subtest was difficult for all the age groups, which could explain the lack of a significant difference. Alternatively, if verbs are acquired earlier, this suggests that the children have reached a plateau phase between the ages of 24 and 36 months. Further research on the vocabulary spurt in isiZulu would shed more light on this issue.

Limitations

The lack of a child inventory like the MacArthur-Bates CDI for isiZulu and related languages is a disadvantage. We do not know which items are acquired first, nor do we know the exact age of the vocabulary spurt in isiZulu. Future analyses should look at gender effects; although gender-related data is available in this study, it has not yet been analysed. It would be interesting to see whether girls have an advantage over boys, as has frequently been reported for western languages.

Conclusion

Developing normative lexical data on Zulu-speaking children, or those speaking other related Bantu languages, is important for both acquisition research and clinical purposes.
Finding a standardised assessment tool that can be used for South African Bantu languages is an ongoing challenge. The PiNG assessment tool has proved to be robust and effective in directly assessing a child's lexical vocabulary. For further research into isiZulu, the tool would need further adaptation by replacing some images with items of local content. The literature states that children begin with language comprehension and, as the motor and cognitive apparatus develops, language production follows; this is a universal linguistic pattern. Children start with objects and events in their immediate environment and quickly learn people's names, concrete objects around them, and familiar routines from their home environment. The first words will therefore largely depend on this input, and cultural as well as linguistic constraints may affect this development. This study shows that as children get older, comprehension and production improve. It would therefore be very important for researchers or speech therapists to factor input into their intervention therapies. Children will understand and talk more about what they know and what surrounds them. Some linguistic phenomena, such as adjectives and adverbs, are complex and not yet acquired by the Zulu child at 36 months.
9,942.8
2016-07-28T00:00:00.000
[ "Linguistics" ]
Diffusion equations and different spatial fractional derivatives

We investigate, for the diffusion equation, the differences manifested by the solutions when three different types of spatial differential operators of noninteger (fractional) order are considered, for bounded and unbounded regions. In all cases, we verify an anomalous spreading of the system, which can be connected to a rich class of anomalous diffusion processes.

Introduction

Fractional calculus represents an important tool which has been successfully applied in several contexts (DAS; MAHARATNA, 2013; DEBNATH, 2003; MACHADO et al., 2014; GLÖCKLE; NONNENMACHER, 1995; HILFER, 2000; SHLESINGER et al., 1994), for example, electrical response (LENZI et al., 2013; SANTORO et al., 2011), biological systems (CASPI et al., 2000; PLOTKIN; WOLYNES, 1998), finance, viscoelasticity (GLÖCKLE; NONNENMACHER, 1991), and anomalous diffusion (PEKALSKI; SZNAJD-WERON, 1999). In particular, the last topic has received much attention, since the usual approach (RISKEN, 1989; GARDINER, 2009) does not provide a suitable description of the experimental results and, consequently, requires extensions. In this sense, by using fractional calculus, the diffusion equation (or Fokker-Planck equation) and the Langevin equation have been extended and used to investigate several situations, such as the ones presented in Refs. (HILFER, 2000; METZLER; KLAFTER, 2000; LENZI et al., 2009; METZLER et al., 1994; SOKOLOV, 2012). However, there is more than one definition of fractional (noninteger) differential operators in use for these situations. Our goal is therefore to investigate the differences manifested by the solutions when three representative fractional operators are incorporated into the diffusive term, i.e., when the usual spatial derivative is replaced by a fractional differential operator. The operators analyzed here are the Riemann-Liouville (PODLUBNY, 1999), the Caputo (PODLUBNY, 1999), and the one proposed by Qianqian et al. (2010), which resembles the usual case. For these operators, we consider situations characterized by bounded and unbounded regions in order to establish the differences. The analysis is performed next, followed by discussion and conclusion.
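As context for the "anomalous spreading" referred to above, it is worth recalling the standard scaling result for space-fractional diffusion on the unbounded line (a textbook fact found, e.g., in METZLER; KLAFTER, 2000, not an equation of this paper):

$$ \rho(x,t)=\frac{1}{t^{1/\gamma}}\,\mathcal{L}_{\gamma}\!\left(\frac{x}{t^{1/\gamma}}\right), $$

where $\mathcal{L}_{\gamma}$ denotes a Lévy stable density and $\gamma$ is the order of the spatial operator. The width of the packet grows as $t^{1/\gamma}$, so the usual Gaussian spreading $t^{1/2}$ is recovered for $\gamma = 2$.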
Diffusion equations and different fractional operators

Let us start our analysis of the differences between these operators by investigating the behavior of the solutions when the fractional differential operators are incorporated in the diffusive term. The first spatial differential operator considered here is the Riemann-Liouville one, defined as (PODLUBNY, 1999)

$$ {}_{a}D_{x}^{\gamma}\,\rho(x,t)=\frac{1}{\Gamma(n-\gamma)}\,\frac{\partial^{n}}{\partial x^{n}}\int_{a}^{x}\frac{\rho(\xi,t)}{(x-\xi)^{\gamma-n+1}}\,d\xi, \qquad (1) $$

where n is an integer and n − 1 < γ < n. Using this definition for the spatial derivative, the usual diffusion equation can be extended to

$$ \frac{\partial}{\partial t}\rho(x,t)=\mathcal{K}\;{}_{a}D_{x}^{\gamma}\,\rho(x,t) \qquad (2) $$

to describe, for simplicity, a system defined in the interval [a, b] and subjected to boundary and initial conditions. The usual diffusion equation can also be extended by incorporating the fractional differential operator in the Caputo sense (PODLUBNY, 1999). In this case, the diffusion equation is modified to

$$ \frac{\partial}{\partial t}\rho(x,t)=\mathcal{K}\;{}_{a}^{C}D_{x}^{\gamma}\,\rho(x,t), \qquad (3) $$

which is very similar to Equation (2); however, the differential operator to be considered in this case is given by

$$ {}_{a}^{C}D_{x}^{\gamma}\,\rho(x,t)=\frac{1}{\Gamma(n-\gamma)}\int_{a}^{x}\frac{1}{(x-\xi)^{\gamma-n+1}}\,\frac{\partial^{n}}{\partial\xi^{n}}\rho(\xi,t)\,d\xi. \qquad (4) $$

Both operators can be worked out in the diffusive term in asymmetric or symmetric form. In addition, it is interesting to mention that the presence of these differential operators in the diffusive term can be related to a long-tailed behavior of the jump probabilities which characterize the diffusive process manifested by the system. When the system is defined on the infinite interval (−∞, ∞), both equations lead to the same solution, since the operators are equivalent in this limit, as discussed in Ref. (PODLUBNY, 1999). This feature may also be observed from the numerical point of view: using the discrete forms of these operators known as the L2 method (QIANQIAN et al., 2010), the discrete Grünwald-Letnikov and Caputo operators are found to differ only by the first two terms before the sum, and these terms are inversely proportional to L. Therefore their discrepancies become smaller if we increase the size of the system, as illustrated in Figure 1. Interesting characteristics of Equations (2) and (3) appear when they are considered on a finite interval, where surface effects may play an important role. To address this point, we investigate the differences between Equations (2) and (3) when the system is considered, without loss of generality, on a bounded interval. We also assume that ρ(x, t) is (n − 1) times continuously differentiable, with 1 < γ ≤ 2. The connection between the Riemann-Liouville and Caputo differential operators,

$$ {}_{a}D_{x}^{\gamma}\,\rho(x,t)={}_{a}^{C}D_{x}^{\gamma}\,\rho(x,t)+\sum_{k=0}^{n-1}\frac{(x-a)^{k-\gamma}}{\Gamma(k-\gamma+1)}\left.\frac{\partial^{k}}{\partial x^{k}}\rho(x,t)\right|_{x=a}, \qquad (5) $$

implies the presence of additional terms. These terms, related to the boundary conditions imposed by the problem under consideration, may behave as a source or a sink, introducing or removing particles from the system, as in adsorption and/or desorption processes. Thus, the extensions of the diffusion equation given by Equations (2) and (3) have different solutions and, consequently, are not equivalent. This feature is also verified for the semi-infinite interval [0, ∞). Now, let us consider the fractional differential operator proposed in Ref. [2] (QIANQIAN et al., 2010). Incorporating this operator into the usual diffusion equation, we obtain Equation (6), whose solution can be formally written for a system defined in an interval [a, b] and subjected to boundary conditions.
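For readers who want to experiment with the discrete operators mentioned above, the following minimal Python sketch integrates a space-fractional diffusion equation with an explicit Euler step, using a symmetric (Riesz-type) combination of shifted Grünwald-Letnikov sums. This is one common discretization choice under assumed parameters (γ = 1.8, grid, time step), not necessarily the exact scheme used in the paper.

```python
import numpy as np

# Sketch: explicit Euler for d(rho)/dt = K * d^gamma(rho)/d|x|^gamma on [-L, L]
# with absorbing boundaries, via shifted Grunwald-Letnikov (GL) weights.
gamma, L, N, dt, steps, K = 1.8, 10.0, 101, 1e-4, 200, 1.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# GL weights: g_0 = 1, g_k = g_{k-1} * (k - 1 - gamma) / k
g = np.ones(N + 1)
for k in range(1, N + 1):
    g[k] = g[k - 1] * (k - 1 - gamma) / k

rho = np.cos(np.pi * x / (2 * L))  # smooth initial condition, zero at the walls
c = -K * dt / (2 * np.cos(np.pi * gamma / 2) * h**gamma)  # Riesz prefactor (> 0)

for _ in range(steps):
    frac = np.zeros(N)
    for i in range(1, N - 1):
        left = sum(g[k] * rho[i - k + 1] for k in range(i + 2))   # left-sided GL
        right = sum(g[k] * rho[i + k - 1] for k in range(N - i))  # right-sided GL
        frac[i] = left + right
    rho = rho + c * frac
    rho[0] = rho[-1] = 0.0  # absorbing boundaries

print("peak after integration:", rho.max())
```

The truncation of the sums at the domain edges plays the role of the boundary terms discussed above, whose relative weight decreases as the system size L grows.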
From this previous development, we verify that the definition of the fractional operator in Ref. [2] is very interesting, since it allows us to use some of the properties of the integer-order differential operator while introducing a noninteger index.

To investigate a possible equivalence with the previous fractional diffusion equations, we consider the solutions of Equation (6) in a bounded region. In particular, we analyze the behavior of the solutions of these equations in two situations in which the system is subjected to absorbing boundary conditions (the distribution vanishes at the boundaries). The first is to consider, for simplicity, a sinusoidal initial condition for Equations (2), (3), and (6). For this case, the results shown in Figure 2 illustrate the differences, evidencing the nonequivalence of the fractional diffusion equations. In the second case, we restrict our analysis, without loss of generality, to Equations (3) and (6) with a given initial condition. Using this initial condition, it is possible to show that a solution of Equation (3) satisfying the required boundary condition is given by Equation (9), where γ = 2 recovers the usual solution connected to this initial condition. Figure 3 illustrates Equation (9) for different values of γ, showing the effect of the noninteger index of the differential operator and the time evolution of the solution. The solution of Equation (6), under the same absorbing boundary condition, can be written as Equation (10), which is illustrated in Figure 4 to show the effect of the index γ and the time evolution. From Figures 3 and 4, we observe that the solutions obtained from Equations (3) and (6), subject to the same initial condition, are different when the system is defined in a bounded region. This point is illustrated in Figure 5 and shows the nonequivalence of these spatial differential operators for the conditions considered here.

Results and discussion

We have investigated the solutions of the fractional diffusion Equations (2), (3), and (6) in different situations on a finite interval. The results show that surface effects play an important role in the solutions of these equations. This is shown, for example, by Equation (5), which implies that Equation (2) is equivalent to Equation (3) only if additional terms connected to the surface are taken into account. A similar situation is evidenced in Figure 2, where Equations (2), (3), and (6) are compared for the same initial condition. Equivalence is only found when the system is defined on the interval (−∞, ∞), where surface effects are absent.

Conclusion

The results presented here for a system confined to a finite interval show that the fractional operators are not equivalent. In this sense, it is interesting to note that surface effects play an important role for these operators.

Figure 1. Difference between the solutions of Equations (2) and (3) in the interval [−L, L], as a function of L, for t = 0.1 and x = 0, with γ = 1.8 and absorbing boundary conditions; the difference decreases as L increases.
Figure 4. Panels (a) and (b) illustrate the behavior of Equation (10) for different values of γ and t.
Figure 5. Solutions of Equations (9) and (10) for γ = 1.8 and the same initial condition.
2,045.4
1974-01-01T00:00:00.000
[ "Mathematics", "Physics" ]
Modeling Jupiter with a Multi-layer Spheroidal Liquid Mass Rotating Differentially

With the aim of improving the Jupiter equilibrium liquid model of our last paper, consisting of two distorted spheroids ("spheroidals"), we generalize it here to any number l of layers, demanding that the calculated gravitational moments J_2n (n = 1, ..., 4) agree with those surveyed by the Juno mission, which is fulfilled with a much higher accuracy than for l = 2. The layers are of constant density and concentric (but otherwise free from any specific constraint between their semiaxes), each rotating with its own distribution of differential angular velocity, in accordance with the law derived in our past work. We point out that the angular velocity profiles are a consequence of the equilibrium itself, rather than being imposed ab initio. Although planetary structure aspects are not contemplated in our models, we arrange matters so that they can be compared with Gudkova's and Guillot's, paying attention to the distributions of mass and pressure. Our procedure is exact, in contrast with the self-consistent CMS (Concentric Maclaurin Spheroids) method developed by Hubbard, whose inexactitude resides in maintaining a single constant angular velocity while the spheroids are deformed. Our model predicts a differential rotation for Jupiter, with pole and equator periods of 9h38m and 10h14m, corresponding to a mean period of 9h55m.

Introduction

In a past paper [1], a Jupiter model was proposed consisting of two liquid concentric distorted spheroids ("spheroidals"), core and envelope, rotating differentially; agreement was demanded between the calculated gravity moments J_2n and those surveyed by the Juno mission, together with hydrostatic equilibrium. For the envelope, the mean rotation period found was 9.92 h, the average being taken over the values at the equator and the pole. The core turned out to be highly distorted, too small, and very fast rotating. The calculated J_2n fell slightly off the centers of their uncertainty ranges. The model proved to be stable. In an attempt to improve it, in the current paper the body is generalized to l layers. The minimization procedure used for the calculation of the J_2n achieves a mean accuracy of ∼10⁻¹⁰, falling much nearer to the uncertainty centers than for l = 2, where the accuracy was ∼10⁻¹. Next, the corresponding equilibrium figures are established. The theory of stellar structure has evolved considerably: although the interior of a star cannot be seen, there is little doubt about what is going on there. In the case of giant planets, the tendency is to imitate the stellar theory as much as possible. In recent times there have been some important advances, but the uncertainty in the picture of planetary interiors still persists. Important open problems include the not yet fully understood overabundance of heavy elements, the lack of exact knowledge of equations of state (EOS) for H/He or heavy elements at high pressures, and phase changes. The Juno mission allows Jupiter's gravity to be determined with good precision, opening the opportunity to check the plausibility of the several interior structure models proposed thus far. Efforts to explain the observed gravity are made, for instance, by [2], on the abundance of heavy elements, or by [3], which compares various EOS.
An alternative to EOS is related to a simulation technique known as Density Functional Molecular Dynamics [4], which, at a quantum level, avoids the approximations and uncertainties of EOS. The Jupiter rotation state is also an open field. For example, [5] suggest the possibility that its atmosphere rotates differentially but that the interior rotates as a solid body. Some of the cited references, and others like [6][7][8][9][10][11][12][13], use the CMS (Concentric Maclaurin Spheroids) method, developed by [14,15], to determine the J_2n. This method is not exact because the expression used for equilibrium is not valid (see Sections 4 and 7). The interior structure models are not affected by this imprecision if they are not forced to reproduce the J_2n. However, if they are, as in the simulation procedures of [5], in which the models that reproduce the observed J_2n (as inferred with the CMS procedure) are picked out of all the simulated ones, these structure models might actually not have the correct mass distributions for reproducing the observed J_2n. Our method to establish the J_2n is based on an exact equilibrium equation, described in detail in [16] and applied in [1]. This time, an application of the l-layer generalization based on existing internal structure models is made. For this purpose, we have chosen the models of [17] and [18], although more recent ones are available. Our main interest is to describe how to use the model, without intending to establish the suitability of a particular one; moreover, these models present quantitative results that are adequate for our method. One of our results concerns a differential rotation of the envelope, with pole and equator periods of T_p = 9h38m and T_e = 10h14m. We point out that the angular velocity is a consequence of the theory, and hence it does not require any constraint. Furthermore, the layers are deformed spheroids ('spheroidals'), with size and deformation adjustable by the numerical procedure.

The Figures' Shapes

As in previous works [16,19], the layers' surfaces are analytically proposed from the beginning and given in normalized form by equation (1).

The Gravitational Moments

The gravitational potential of a mass distribution of density ρ(r) (cylindrical symmetry assumed) is established as a series of gravitational moments,

$$ V(r,\vartheta)=-\frac{GM}{r}\left[1-\sum_{n=1}^{\infty}J_{2n}\left(\frac{a}{r}\right)^{2n}P_{2n}(\cos\vartheta)\right],\qquad J_{2n}=-\frac{1}{Ma^{2n}}\int_{\tau}\rho(\mathbf{r})\,r^{2n}P_{2n}(\cos\vartheta)\,d\tau, $$

where a is the body's equatorial radius, τ the volume, P_2n the Legendre polynomial of order 2n, and ϑ the azimuthal angle measured from the pole to the equator. Let τ_i be the total volume limited by the i-th layer's outer surface, and l the number of layers; notice that τ_1 = τ. Since ρ(r) = ρ_i only for points r of the i-th layer, the moment integral (equation (3)) can be decomposed layer by layer into the form of equation (5), in which cylindrical coordinates (R, ϕ, z) are used for convenience. The integrals in equation (5) are known analytically and given in the Appendix of [1] (n ≤ 4).

The Equilibrium State

In papers [16,1], the equilibrium equation for a rotating fluid was stated, and it was specialized to a two-layer model in [1]. The generalization to l layers is immediate. In the steady state, the generalization of the Bernoulli equation (equation (7)) is valid at each point of the medium; here p is the pressure, V the gravitational potential, and f an arbitrary function that is determined by the boundary conditions at the layers' interfaces (equation (8)). The total potential V_i(R, z) at each interface point is a function of R only, since z can be expressed in terms of R by equation (1).
Functions f_i are related to the angular velocity Ω (= ω²/(Gρ_1)) through equation (9) ([16]). Hence, according to equation (8), Ω_i for layer i is given by equation (10); here, r is an abbreviation for R². Finally, from equation (7) the pressure at each point of a layer follows, and in particular the pressure at the center. For the numerical procedures, the gravitational potential is normalized to Gρ_1a².

The Numerical Procedure

The numerical method employed in our approach was already described in [1], so it is only succinctly reproduced here for rapid reference. The method is carried out in two steps: the construction of the mass distribution that reproduces the observed J_2n, and the establishment of the angular velocity profile that sustains equilibrium. It must be emphasized that the latter is not a restriction that needs to be imposed on the model; rather, it obeys a law that we deduced and explained in detail in [16]. Hence, in our model the angular velocity is of a predictive character. For the first step, the inputs are the observed J_2n of Jupiter together with the equatorial and polar radii a, b and the mass M_J of Jupiter. Higher moments were not considered because their error bars increase considerably ([20,21]); besides, they are irrelevant for the procedure. With this input, the method produces as output a mass distribution having the observed parameters. It is characterized by the following variables: the equatorial radii e_i, polar radii z_Mi, distortion parameters d_i, and densities ρ_i of all layers. These result from an optimization procedure, as explained next. Since the theoretical moments J_2n^th are known analytically [1], it is demanded that the variables e_i, z_Mi, d_i, ρ_i on which they depend be such that

$$ \sum_{n=1}^{4}\left(J_{2n}^{\mathrm{th}}-J_{2n}\right)^{2}=\min. \qquad (14) $$

This problem is solved numerically by minimizing the sum in equation (14), from which the model parameters result. Knowing the model parameters, we come to the second step. The unknown here is the angular velocity, which must satisfy a law depending on the gravitational potential (equations (8) to (10), coming from the equilibrium conditions); the potential can be determined for the already known mass distribution. For a specific application of the method, see Sections 5 and 6.
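As a schematic illustration of the first step, the following sketch minimizes the squared moment mismatch of equation (14) with scipy. The function theoretical_moments is a hypothetical stand-in for the closed-form J_2n of [1] (here replaced by a two-parameter toy series), and the target values are rounded Juno-era magnitudes quoted only for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Approximate Juno-era moments (x 10^-6); illustrative values only.
J_OBS = np.array([14696.5, -586.6, 34.2, -2.4])

def theoretical_moments(params):
    # Hypothetical stand-in for the analytic J_2n of the l-layer model.
    # In the real procedure these are closed-form functions of the layers'
    # equatorial/polar radii, distortion parameters, and densities [1].
    q, f = params                       # toy "rotation" and "flattening" knobs
    n = np.arange(1, 5)
    return 1e4 * q * (-f) ** (n - 1) / n  # decaying, alternating toy series

def mismatch(params):
    # Sum of squared residuals, the quantity minimized in equation (14).
    return np.sum((theoretical_moments(params) - J_OBS) ** 2)

best = minimize(mismatch, x0=[1.0, 0.1], method="Nelder-Mead")
print(best.x, mismatch(best.x))
```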
Bizyaev et al. Model

Although this topic is not related specifically to Jupiter, it deals with rotating equilibrium figures of density stratifications of ideal fluids worked out in an exact fashion [22], without generally supposing that the angular velocity is constant. In particular, they consider an application of their method to a stratification consisting of confocal spheroids (all shells having common focal points). The results are already known ([23], [24]) and are presented and applied here because their method is more general and can also be used for continuous stratifications. Constraining the l-layer model to be composed of confocal spheroids, the focal length c is fixed by the Jupiter polar and equatorial radii (equation (15)), where c_n is the focal length normalized to the Jupiter polar radius. It is important to remark that, since all layers are limited by confocal surfaces, the gravitational potential at each surface point is a linear function of r (= R² = x² + y²), and the angular velocity must be constant within each layer (equation (10), dV/dr = const). This is the exceptional case of a fluid in which the layers can rotate as solid bodies; in other instances, a constant angular velocity per layer is not possible and the rotation must be differential. Given the stratification, and hence the potentials, Bizyaev et al. arrive at the rotating state that sustains equilibrium (equation (16)), where z_Mni is the normalized polar radius, ∆_i = ρ_i − ρ_{i−1}, and ∆_0 = ρ_0 is the atmosphere density. Knowing the layers' equatorial radii e_i, the confocality condition fixes the polar ones (equation (17)), and with them all the angular velocities of equation (16) (in terms of the densities ρ_k). Certainly, not all geometrical configurations of the layers are of interest here, but only those reproducing the gravitational moments ascertained by the Juno mission. Hence, an attempt is made to find a configuration that reproduces the observed gravitational moments with the procedure described in Section 3; the result should be the correct layer radii e_i (or z_Mni) and densities ρ_i, and with them the angular velocities (16) that maintain equilibrium. Many efforts to find such a stratification were made, without success. We believe this is due to the requirement that a confocal stratification exist that is able to reproduce the observed moments: only when experimenting with J_2n values (especially J_2) that differ strongly from the observed ones can a configuration be found.

Gudkova Models

For the Gudkova models [17], for example model MJ8, the relevant information is presented in Table 1, which we call model M1. R is the normalized distance from the Jupiter center; ρ and P are the density and pressure at the layers' interfaces. In this model, the density increases rather moderately in the first two layers (1.0-0.9 and 0.9-0.81) but becomes considerably larger in the central layers, reaching 13.020 g cm⁻³ at the center, about 250 times the value in the atmosphere. The pressure behaves similarly to the density: after a slow initial growth it reaches the two-Mbar level at a depth of ∆R = 0.2 and attains values as high as 50 Mbar at the center. We now construct a 5-layer spheroidal model taking the above radii as the equatorial ones, since our model is not spherical. We demand, firstly, that the mass distribution be such that the gravitational moments J_2, J_4, J_6, and J_8 agree with those provided by the Juno mission. Putting aside the restriction of fixed layer radii, we find several models with the exact J_2n; that is, the solution is not unique. Certainly, with the knowledge of only some gravitational moments it is not possible, in general, to find a single mass distribution that reproduces J_2, ..., J_8 and yields the full potential (an infinite series). However, we accept none of these models besides those that are in equilibrium under the action of gravity, pressure, and rotation. In most cases, equilibrium is not satisfied because the centripetal force necessary for rotation cannot exist within some region. Although the error bars are more or less narrow, the observed J_2n are not exactly known. For this reason, we select the moments randomly within the error bars and seek the best mass distribution that reproduces them. We find several models that are somewhat alike. Next, their equilibrium is tested; i.e., we attempt to find an angular velocity profile sustaining equilibrium. A typical equilibrium model is M2, given in Table 2; for this model one obtains, as gravity moments, J_2 = 14696.5 × 10⁻⁶ and corresponding values of the higher moments. The model with the parameters of M2 has low distortion (small d) in the three outer layers, which hence differ little from spheroids (d = 0) and have low flattening. The two innermost layers, on the contrary, are considerably more distorted and flattened.
The density grows toward the center by a factor of about ten: from 0.209 to 2.030 g cm⁻³. The pressure has a remarkable behavior: from the atmosphere inwards, it increases rapidly, reaching the two-Mbar level at the bottom of the first layer and 5 Mbar in the second one; it continues rising at a lower rate, reaches a maximum of 19 Mbar, and then decreases to 17 Mbar at the center. The pressure here is not constrained by a barotropic relation or an equation of state; it only has to satisfy expression (7). The angular velocity of a layer is influenced by that of the layer above it (equation (10)). As R decreases, the angular velocity increases, with a greater increase near the center; this results in a greater centripetal force and hence in a reduction of the pressure gradient. The angular velocities are almost constant, except in the atmosphere (the outermost layer), which clearly rotates differentially, with the pole running faster than the equator. The mean period of the atmosphere is 9.93 h, quite close to the accepted value [25]. Overall, our model reproduces Gudkova's general parameters poorly (or conversely). One source of the discrepancies probably resides in the layer concept. Our model's layer is a bounded region of constant density delimited by non-spherical surfaces. For Gudkova (and some other researchers in the planetary structure field), a layer is a finite region also characterized by certain chemical and physical properties, such as composition, equation of state, entropy, and so on. Their layers are commonly limited by spherical surfaces, and the density changes from point to point. In order that our model comes closer to Gudkova's, from a gravitational viewpoint the structure model will accordingly be considered composed of three (or four) shells: an external shell, including the thin atmosphere, a middle layer, and a central core. In the outer part, the density increases almost linearly towards the center; in the one- or two-shell core, the density is approximately constant. Gudkova's models [17] have densities that increase nearly linearly with the normalized radius R, from the surface to the core (R ≈ 0.1 for a one-layer core or R ≈ 0.15 for a two-layer one); the core has practically constant layer density. Consequently, a spheroidal multilayer body is built here, say of 15 shells, divided into three regions: an outer part consisting of a constant-density thin atmosphere; a twelve- or thirteen-layer region with a nearly linear density profile; and a one- or two-layer constant-density core. A set of four gravitational moments J_2, J_4, J_6, J_8 is randomly generated within the confidence intervals of the observed J_2n. As explained in [1] (see also Section 3), the best mass distribution reproducing these moments is then sought, the theoretical moments being calculated with the functions given in the Appendix of [1]; they depend on the polar and equatorial radii, the distortion parameter, and the relative density difference of neighboring layers. The success of this procedure does not guarantee an equilibrium state for the particular matter distribution, i.e., that a differential angular velocity distribution can be found that balances gravity and pressure. Regularly, the configurations with proper J_2n are not equilibrium figures. To obtain an equilibrium model with good J_2n, a huge number of sets of gravitational moments is randomly generated, and from each one the corresponding mass distribution is obtained.
From these procedures, we found a 15-layer equilibrium model, characterized as model M3 (Table 3). In this model, the layers' surfaces are not severely distorted, although the innermost deviates more from the spheroid's shape than the peripheral one. The density attains higher values than in model M2, beginning at 0.344 and ending at 5.781 g cm⁻³, with a similar behavior for the pressure, which reaches about 30 Mbar at the center. Regarding the rotation that supports the equilibrium of the distorted model, the central part rotates about four times faster than the outer one; moreover, the angular velocity profile is markedly more differential in the former. The model's mean rotation period is 9.63 h, which practically agrees with the observed one. Certainly, models with higher inner densities were found; however, they did not correspond to Gudkova's layers or were non-equilibrium models. Figures with a nearly linear density profile in the outer part and the same layer sizes as Gudkova's seem to have a central density limit of about 6.000 g cm⁻³, a value close to half that of model M1. We presume that models with higher central densities satisfying these particular conditions cannot be found. In our model, the density first increases inward more rapidly than in Gudkova's: for instance, the layer from e_1 = 0.89 to 0.72, with midpoint ≈ 0.81, has density 1.311, whereas for R = 0.81 Gudkova gets 0.920. Near the core, the relative difference between the two models decreases visibly. It is in the nucleus that Gudkova's model density rises sharply, deviating much more from ours. The not too severely different behavior of the two density profiles over most of the model could be attributed to a lack of distortion (and rotation) in Gudkova's model. The excessive density jump in the core, which may be influenced by the remaining layers, is perhaps due to some weakness in the structure considerations; model MJ6 comes closer to model M3 in the sense that its central density is slightly lower and its core mass is small, about 3 M⊕ (about 1.5 M⊕ in our model). Our rotating distorted model reproduces the observed gravitational field well, but it does not yet take structure equations into account; in particular, temperature (which surely affects density and pressure, both low in our model) is absent.

Guillot Models

For the Guillot models [18,5], the relevant data are not given in the form of model M1 but as a relation between density ρ and pressure P, which we write as model M4 in Table 4. These results are to be compared with M3, expressed as a relation ρ-P, where ρ is a mean density ((ρ_{i−1} + ρ_i)/2) taken at each interface of our model and P is the pressure at the pole of the interface; this leads to model M5 of Table 5. On closer inspection, it is found that the pressure, considered as a function of density, is always lower for model M4 than for M5. From the surface to ρ ≈ 4 g cm⁻³ it rises much more slowly than in M5; afterwards, P increases sharply in the last small interval, reaching the values of M5. In other words, there is a gap between the pressures of both models that first widens up to ρ ≈ 4 g cm⁻³ and afterwards diminishes more and more. If we use a mean polytrope P = Kρⁿ to describe the models roughly, then M4 has a higher mean index, n = 3.2, than M5, n = 1.3, but the constant K of the former is smaller than that of the latter. However, we must emphasize that M5 was built supposing a nearly linear density profile in the outer part of the model, and we have insufficient data to know whether M4 fulfills this condition.
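The mean-polytrope comparison above amounts to a straight-line fit in log-log coordinates, since P = Kρⁿ implies log P = log K + n log ρ. A minimal sketch of such a fit follows; the (ρ, P) pairs are invented placeholders, not the tabulated M4/M5 data.

```python
import numpy as np

# Fit a mean polytrope P = K * rho**n by linear regression in log-log space.
# The sample points are invented placeholders, not the Table 4/5 data.
rho = np.array([0.3, 0.8, 1.5, 3.0, 5.0])    # g cm^-3
P = np.array([0.05, 0.6, 2.5, 11.0, 28.0])   # Mbar

n, logK = np.polyfit(np.log(rho), np.log(P), 1)  # slope = polytropic index n
print(f"mean polytropic index n = {n:.2f}, K = {np.exp(logK):.3f}")
```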
Nonetheless, recent similar models [26] show a linear tendency of ρ against R. Although the ρ-P gap between the two models is not narrow, the order of magnitude of the variables is similar, and practically the same near the surface and the core. Discrepancies over the rest of the models could be attributed to a lack of physical and chemical conditions in our model, or to a need for more refinement in the chemical restrictions and in the (differential) rotation and deformation of model M4. Perhaps a more accurate model will lie between M4 and M5.

Discussion

The models used here for comparison date from 1999, although more recent ones are available [5]. They were chosen because they present numerical (graphical) results that are necessary for our model. From the papers, whether recent or not, one cannot extract exact quantities, especially if they are given in graphical form. However, the principal interest here was to show how to apply our model, without yet intending to favor one particular internal structure model; later, we will integrate equations of internal structure into our model. With the data extracted from the models of Gudkova and Guillot, we have seen that our model predicts much lower pressures and densities near the center of Jupiter than Gudkova's, but nearly of the same magnitude as Guillot's. Conversely, over the rest of the models, the agreement is better with the first than with the second. To get a global picture, let us represent the models roughly by average polytropes (logarithmic scale, Figure 6): Gudkova's and our model have practically the same mean tendency, and the pressures P_Ga and P_Ou are only slightly separated from one another; nonetheless, we know that there is a large discrepancy between the two in the central region. On the other hand, Guillot's model pressure grows much more rapidly with density than in the other cases, although it begins at far lower values; the order of magnitude of P_Gt is, however, similar to P_Ou. Why is the rate of increase of P_Gt stronger than that of P_Ga? Surely it resides in the assumptions made in both models; but it cannot be decided here which differences in the models induce, for example, a rapid pressure growth or a very high central density or pressure. The latter behavior is probably not due to a simple versus multilayered core, since Gudkova [17] gets similar ρ in both cases. On the other hand, with the data of the two models and those observed for Jupiter, our model predicts a differential rotation of Jupiter's atmosphere, given by a set of points in which ω is expressed in rad h⁻¹. Our models presently lack the physical and chemical properties of the fluid; the only supposition is that the layers are of constant density ρ_i. Moreover, the stratification is deformed and rotates differentially to sustain equilibrium. For an arbitrary homogeneous fluid with cylindrical symmetry, the angular velocity at the surface is necessarily given by

$$ \omega^{2}=-2\,\frac{dV}{dr}, \qquad (19) $$

where V is the gravitational potential at a surface point at distance R (r = R²) from the rotation axis. In the Maclaurin spheroid case, the slightest deformation [14] causes the potential to differ from the simple quadratic form in (x, y, z) (see Section 4), and the angular velocity will no longer be constant. Nonetheless, Hubbard does not allow ω to change. Certainly, for a multilayered object, ω at each interface point is expressed by an equation like (10).
The models M2 and M3, based on Gudkova's and Guillot's models, foresee a Jupiter surface rotating differentially, with pole and equator periods of T_p = 9h38m and T_e = 10h14m and a mean period T_m = 9h55m. The method used here to predict the angular velocities at Jupiter's surface is similar to that of [25,27] in the sense that they use only the Jupiter gravitational field to deduce a rotation period T = 9h55m; they assume solid-body rotation from the beginning. They, like us, put no constraints on the internal structure, hence the method is not specific to a structure model. Here it was demanded only that the density change approximately linearly with the equatorial radius (inspired by Gudkova's and Guillot's models). In recent times, it has become common to calculate the J_2n moments with the CMS procedure developed by [15]. It is not an exact method, because its basic expression does not correspond to the equilibrium state: U = V + Q = const., where

$$ Q=\frac{1}{3}\,\omega^{2}r^{2}\left[1-P_{2}(\cos\theta)\right]. $$

According to [16], a rotating axially symmetric fluid is generally in equilibrium only if it rotates differentially, independently of its shape. For a rotating stratification, each layer rotates differentially with its own angular velocity. As a special case, when the layers are confocal spheroids, their angular velocities are constant (Section 4). On the other hand, differential rotation cannot be common to all shells. For instance, let us suppose only two spheroidal layers rotating with the same angular velocity. Equation (10) then reduces to (Ω_2 = Ω_1, with Ω_1 the common angular velocity)

$$ \Omega_{1}=-2\,\frac{dV_{2}}{dr}. \qquad (20) $$

Indices 1 and 2 refer to the external and internal surfaces. By means of equations (8) and (9), we conclude that

$$ \frac{dV_{2}}{dr}=\frac{dV_{1}}{dr}. \qquad (21) $$

It is to be noticed that both sides of equation (21) are evaluated at the same distance R (r = R²) from the rotation axis, but at different z (z_1 > z_2): point 1 is farther than point 2 from the center, hence the derivative at point 1 is smaller than at point 2, and the equality (21) cannot hold. According to our procedure, whose theoretical basis is presented in [16], there is no freedom in the choice of ω: it is determined by the gravitational potential at each point of the fluid (equations (9), (10), and (20)), and in general it is of differential type.
6,356
2020-02-13T00:00:00.000
[ "Physics" ]
Risk Assessment of Express Delivery Service Failures in China: An Improved Failure Mode and Effects Analysis Approach

With the rapid growth of the express delivery industry, service failure has become an increasingly pressing issue. However, there is a lack of research on express service failure risk assessment within the Failure Mode and Effects Analysis (FMEA) framework. To address this research gap, we propose an improved FMEA approach that integrates the uncertainty-reasoning cloud model and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The cloud model, which describes randomness and fuzziness in uncertain environments, is adopted to transform the qualitative semantic evaluations of the occurrence (O), severity (S), and detection (D) risk factors of FMEA into quantitative values and to set up the comprehensive cloud of the risk assessment matrix for express delivery service failure (EDSF). The TOPSIS method calculates and ranks the relative closeness coefficients of the EDSF modes. Finally, the rationality and applicability of the proposed method are demonstrated by an empirical study of the express delivery service in China. It is found that, among 18 express delivery service failure modes, the six high-risk modes are mainly located in the processing and delivery stages, while the six modes with relatively low risk are involved in the picking-up and transportation stages. This study provides insight into the risk evaluation of express delivery service failure, and it helps express delivery firms to identify the key service failure points, develop corresponding service remedy measures, reduce losses from service failures, and improve service quality.

Introduction

Thanks to the rapid development of e-commerce, the express delivery service industry has witnessed rapid growth in China. According to the statistics of the State Post Bureau of China, the total volume of packages delivered exceeded 83 billion in 2020, a 30% increase over the previous year. The annual per capita volume of packages sent or received reached 59 items, a growth rate of 31%. Meanwhile, the revenue of the industry reached about $135 billion, a 16.7% increase over the previous year. Nevertheless, this unprecedented growth is accompanied by frequent service failures (SF), such as sluggish websites, payment problems, privacy and security issues, lost packages, delayed delivery, damaged products, and rough sorting and handling. In 2020, a total of 188,326 customer complaints involving express service were formally filed with the State Post Bureau of China alone; complaints received by local governments and express delivery companies are not included in this figure. The finding by Dospinescu et al. [1] indicates that the express delivery option has a significant influence on e-commerce customers' satisfaction. It is thus clear that frequent service failure not only leads to losses of business and damaged reputation for e-commerce and express delivery companies but also causes emotional anxiety in customers and negatively affects their satisfaction, which then leads to negative word of mouth (WOM) as well as complaints and changes in repurchase behavior. Holloway et al.
[2] adopted the Critical Incident Technique (CIT) to analyze online retail service faults and indicated that the main types of service faults are delivery problems, website design problems, payment problems, security problems, product quality problems, and customer service problems. Forbes et al. [3] also carried out a CIT analysis of e-commerce service failures, which involve product packaging errors, slow delivery, system checkout errors, missing information, and website design errors. Using both exploratory and confirmatory factor analysis to compare intangible service quality factors, Subramanian et al. [4] pointed out that, to be competitive, e-retailers in China must pay increasing attention to the express delivery service provided by third-party logistics companies. Zemke et al. [5] collected information from online shoppers and concluded that the SF modes related to express delivery service include delayed delivery, extra transportation costs for punctual arrival, incomplete delivery of orders, and damaged objects. As shown above, early research mainly focused on relatively narrow segments in analyzing the sources of service failures, the types of service failures, remedial measures, and so on. More recently, Chen et al. [6,7] studied the influence of causal attributions on trust conflicts and service failure recovery policies in e-commerce. Kim et al. [8] investigated consumer-perceived attribution of service failures and its influence on negative emotions and post-purchase activities in social commerce. The findings of Vakeel et al. [9] show that there is a three-way interaction among deal proneness, locus of attribution, and past emotions; compared with an external locus of attribution in the context of online flash sales (OFS), service failures attributed to an internal locus also have a negative impact on reparticipation. Saini et al. [10] presented a novel contextual scale to measure OFS e-commerce service failures and studied the impact of recovery-induced justice on customers' loyalty. As such, recent research has shifted attention to customer-perceived service failure attributions, the relationship between service failure and recovery policy, and customer emotions and post-purchase behaviors. However, there has been a lack of research on the occurrence (O), severity (S), and detection (D) of service failures and their risk priority number (RPN), which is related to the determination of recovery strategies in e-commerce. In addition, express service failures are often neglected or overlooked. It should be noted that express delivery service is a complex, multi-link, multi-participant process that crosses organizations, regions, and even borders, and it plays a prominent role in the success of electronic transactions. In this regard, express delivery service failure (EDSF) is a critical issue that should be mitigated by scientific and viable methods. As a result, this research aims to identify the EDSF modes and evaluate their risk levels in a comprehensive way, as well as to reveal the importance of individual service failure modes in the different operational stages of the express delivery process. The work will help express delivery companies to develop proper service remedial measures, reduce waste of resources, lower operational risks, and improve customer satisfaction.
To bridge the above research gaps, this paper attempts to answer the following research problems: (1) What are the critical modes of EDSF, and where are they located in the express delivery process? (2) In a context of randomness and fuzziness of semantic assessment information, how can the risk of EDSF be evaluated more effectively? (3) How serious are the risk factors, namely occurrence (O), severity (S), and detection (D), of the different modes of EDSF? The objective of this study is thus to address these problems by presenting a systematic quantitative approach to the risk assessment of EDSF. Specifically, we present an improved Failure Mode and Effects Analysis (FMEA) approach that integrates the uncertainty-reasoning cloud model and the TOPSIS method. The cloud model for uncertainty reasoning is adopted to quantify the semantic evaluations of the occurrence (O), severity (S), and detection (D) risk factors of EDSF and to set up the comprehensive cloud of risk assessment for EDSF. The Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is then adopted to calculate and rank the relative closeness coefficients of the express delivery service failure modes. Compared with existing multi-criteria decision making (MCDM) techniques, the decision scheme proposed in this paper is unique in constructing decision matrices from the expectation Ex, entropy En, and hyper-entropy He of the cloud model. It therefore not only describes the randomness and fuzziness of uncertain information but also yields a comprehensive closeness coefficient through TOPSIS, which makes the decision results more comprehensive and reasonable. In this way, it contributes to the theoretical basis for the risk detection capability for EDSF. The remainder of this paper is arranged as follows. Section 2 provides a brief literature review. Section 3 proposes the research methodology, including the verification of the express delivery service failure modes, the construction of the semantic evaluation of express delivery service failure, the quantitative transformation of the evaluation variables based on the cloud model, the determination of the weights of the FMEA risk factors, and the calculation of the comprehensive risk assessment cloud and the risk ranking of express delivery service failures. Section 4 presents an empirical study of the express delivery industry in China based on the proposed methodology; the results are analyzed and management implications are discussed. Finally, Section 5 summarizes the major findings and contributions of this study and points out future research directions.

Service Failure

On the general perspectives of service failure there is a wealth of literature; some representative works are briefly summarized in the following. Bitner et al. [11] indicated that service failure occurs when firms provide services that fail to meet the requirements of customers, are inconsistent with standard operating procedures in the service execution, or fall below the acceptable level. Similarly, Maxham et al. [12] suggested that service failures take place when the business service is below the customer's expectation or a customer's request fails to be fulfilled. Michel [13] suggested that service failures occur when customers perceive that the received goods or service do not achieve what they expected, which is consistent with the viewpoint of Voorhees et al. [14]. Tan et al.
[15] categorized e-commerce service failures as functional, information, and system failures, proposed a theoretical model of e-commerce service failure classifications and their consequences, and tested the relationship between the three failure categories and consumers' disconfirmed expectancies. Moreover, it is commonly recognized that a critical factor of service failure is the severity of its occurrence, and that increasing service failure severity leads to increasing customer dissatisfaction [16,17]. Hsieh and Yen [18] indicated that customers are more inclined to blame service failures on service providers, which turns into dissatisfaction with the service and the firm. The consequences of service failure include lower customer satisfaction, distrust, negative evaluations, diminished employee motivation, customer loss, and revenue decrease [19][20][21][22]. In addition, Gelbrich [23] investigated the essential role of helplessness in interpreting atypical coping responses to anger and frustration after service failure. It is clear from the above analysis that the definition of service failure, its categorization, its consequences, and its relationship with customer satisfaction have attracted strong attention from many scholars.

Express Delivery Service Failure (EDSF)

The footprint of EDSF can be found in research on e-commerce SF and logistics service quality; compared with tangible products, it has received less attention. For example, the findings of Durvasula et al. [24] show that the occurrence of SF does not imply that the logistics service provider is insufficient; even the best service supplier makes mistakes, and perfect service is impossible in the context of B-to-B marketing. Zhong et al. [25] considered that the perceived quality of information sharing from express service providers can affect the logistics service performance of online shoppers. Holloway et al. [2] discovered that delivery problems are the main type of service fault, besides payment problems, security problems, and customer service problems. Subramanian et al. [4] pointed out that e-retailers should pay increasing attention to the express delivery service from third-party logistics providers. Zemke et al. [5] collected information from online shoppers and concluded that the SF modes related to express delivery service include delayed delivery, extra transportation costs for punctual arrival, incomplete delivery of orders, and damaged objects. Saura et al. [26] pointed out that the timeliness, personnel quality, information accuracy, and order response speed of logistics service quality all have a clear, positive, and significant impact on customer satisfaction. Therefore, it is expected that the abovementioned logistics service components will lead to the most significant service failures. Importantly, all services must be recovered in time because of the time sensitivity of delivery service. Ping et al. [27] suggested that, for logistics firms, a high level of logistics service can improve customer satisfaction, maintain customer loyalty, gain potential customers, and improve profits and competitiveness. Giovanis et al. [28] analyzed the impact of logistics service quality on customer satisfaction and loyalty. Ma et al. [29] developed a combined SERVQUAL-AHP-TOPSIS method to assess the quality of service (QoS) of the city express service industry.
It was believed that the main dimensions of logistics service quality consist of product availability, order accuracy, timeliness, order condition, ordering procedures, personnel contact quality, information quality, and order discrepancy handling. Furthermore, the consequences of logistics delivery failure have been reinforced in more recent studies [30][31][32]. In addition, by analyzing the main topics of complaints from consumers and suppliers in express delivery, Gyu [33] showed that the parcel delivery industry faces challenges such as delays, losses, wrong deliveries, and fierce competition, with customers demanding a higher quality of express logistics service. In studying the problems faced by strategic distribution and transportation in the e-commerce environment in China, Liu et al. [34] argued that the solution of logistics service problems will be a determining factor in the success or failure of the future development of e-commerce. To summarize, the existing research has recognized the importance of EDSF to the e-commerce market and to logistics service quality, but quantitative studies on how to evaluate the seriousness of the risk factors for different EDSF modes have been lacking.

FMEA

Failure Mode and Effects Analysis (FMEA) is a systematic analysis tool for product function or service quality reliability, first proposed in the 1950s. The method can be used to identify the potential failure modes of a product or service and their risk degree, and to rationally allocate resources for corresponding intervention measures to avoid failures of product or service quality. In the FMEA method, a Risk Priority Number (RPN) is generally calculated to define the risk level, and different control measures are taken according to its ranking, i.e., RPN = O × S × D, where O, S, and D are the risk factors representing occurrence, severity, and detection, respectively. To date, FMEA has been widely used in aerospace, medical, service, and other fields to provide forward-looking and operational decision support for enterprise management [35][36][37]. The traditional FMEA may not be very effective, as it ignores the fuzziness of evaluation information and the inaccuracy of the RPN obtained by multiplying O, S, and D. To address this issue, researchers have made extensive efforts, and improvements have been suggested in the literature. Wang [38] presented fuzzy risk priority numbers (FRPN) to measure risk priority in a more credible way. Gargama [39] constructed a criticality assessment model for FMEA by applying fuzzy logic to convert the randomness of evaluated data into a convex normalized fuzzy number. Pillay et al. [40] introduced a fuzzy rule base and grey relational analysis (GRA) theory in the marine industry and solved the prioritization of potential failure modes in situations with the same RPN value but different actual risk levels. Liu [41] proposed an RPN evaluation method using the evidential reasoning (ER) method and a grey correlation operator, which improved the effectiveness of traditional FMEA. Geum [35] also applied FMEA and GRA to identify and assess potential faults in hospital services. Ahmet et al. [42] determined experts' evaluations of the risk factors O, S, and D through fuzzy sets and realized the calculation of RPN by using the fuzzy analytic hierarchy process (FAHP) and the TOPSIS method. Hyung et al. [43] proposed analyzing the service risk and service reliability of supermarkets based on FMEA and grey correlation theory. Liu et al.
[44] employed the VIKOR method in a fuzzy environment to obtain the priority order of failure modes in the general anesthesia process. Liu et al. [45] developed a risk assessment approach for FMEA based on combining the fuzzy weighted average with the fuzzy decision-making trial and evaluation laboratory (DEMATEL). Liu et al. [46] adopted an intuitionistic fuzzy hybrid TOPSIS approach to improve FMEA. Kok [47] indicated that computing with perceptions can be used to handle the uncertainty of linguistic evaluations in FMEA. Moreover, Vodenicharova [48] discussed the use of the FMEA method in the logistics processes of manufacturing plants and showed that FMEA can maintain the connection between logistics elements for analysis and follow the logical "cause-and-measure" sequence. Zhang et al. [49] suggested a consensus-based group decision-making method for FMEA to classify failure modes into several ordinal risk categories. Alvand et al. [50] presented a combined model based on FMEA, the stepwise weight assessment ratio analysis (SWARA), and the weighted aggregated sum product assessment (WASPAS) approach in a fuzzy environment. Khalilzadeh et al. [51] developed an FMEA approach integrating the SWARA and PROMETHEE techniques with the augmented e-constraint method (AECM) for risk assessment in the planning phase of oil and gas construction projects in Iran. In view of the above analysis, owing to the imperfections of FMEA in uncertain environments, several MCDM methods have been integrated into the FMEA decision process to improve its performance, such as various types of fuzzy sets, GRA, FAHP [52], VIKOR, and DEMATEL [53]. While these efforts decrease the fuzziness of decision information in uncertain environments, they may not be effective in processing randomness, which is an essential component of uncertain cases. In essence, fuzziness and randomness are often equally important in the decision-making process. As such, new FMEA approaches for EDSF risk assessment that can address fuzziness and randomness simultaneously, and thus improve the reliability of FMEA risk priority ranking, are called for.

Research Methodology

To address the research gap, we propose an improved FMEA approach that integrates the uncertainty-reasoning cloud model and the TOPSIS method to investigate the risk assessment of EDSF. The cloud model developed by Li et al. [54] employs the basic principles of probability theory and fuzzy set theory to form a mutual transformation between qualitative linguistic variables and quantitative values through specific algorithms, so that uncertainty is represented with high quality. In addition, the cloud model converts a quantitative random number into an interval number, which decreases the information loss in the transformation process and is convenient for decision-making evaluation. In this paper, the cloud model is selected to achieve the transformation between the qualitative semantic evaluations of the occurrence (O), severity (S), and detection (D) risk factors and their quantitative counterparts. Meanwhile, the TOPSIS model is used to obtain the relative proximities between an evaluation objective and its optimal and worst schemes, respectively; it imposes no tight restrictions on data distribution or sample size and is thus adopted to calculate and rank the relative closeness coefficients of the EDSF modes. It is believed that the proposed approach can assess the risk degree of EDSF more objectively in empirical studies.
The framework of the methodology consists of multiple steps, which are briefly described below and illustrated in Figure 1: Step 1: The semantic evaluation of express delivery service failure is constructed to measure the risk assessment indicators of EDSF based on FMEA; the key process then realizes the transformation between the qualitative FMEA semantic evaluation and the quantitative cloud model. Step 2: The entropy weight method is used to calculate the weights of the risk factors of EDSF, so that the cloud matrix of the risk assessment of express delivery service failure is established. Step 3: According to the comprehensive digital characteristics of the cloud model, the comprehensive risk assessment cloud of the express delivery service failure modes is determined. Step 4: Based on the TOPSIS method, the Hamming distances and closeness degrees to the positive and negative ideal solutions of the comprehensive risk assessment cloud are calculated and ranked. Cloud Model for Uncertainty Reasoning The "cloud model", first proposed by Li et al. [54,55], is a methodology that studies fuzziness and randomness as well as their correlation and constitutes the mapping relationship between qualitative linguistic indicators and quantitative values. The cloud model is not only competent for the modeling and calculation of imprecise, fuzzy, and incomplete information but also has unique advantages in dealing with random information. It has thus become a new uncertain-information processing theory that enjoys high research popularity in many fields [56][57][58][59]. It has been recognized that the cloud model possesses the capability of semantic conversion between quantitative and qualitative representations and increases the accuracy of risk assessment. In the cloud model [60], C is a qualitative linguistic variable defined on U, which represents the universe of discourse. A cloud refers to the distribution over the numerical domain space of the mapping of concept C from U to the interval [0, 1]; x represents a cloud droplet, and the distribution of x over U is a cloud. The cloud model is a normal cloud constructed from the expectation Ex, entropy En, and hyper entropy He, and can be denoted as Ĉ = (Ex, En, He). The expectation Ex is the point that contributes most when describing the qualitative concept and represents the expectation of the spatial distribution of the cloud droplets. The entropy En describes the uncertainty degree of the qualitative concept, which is determined by both randomness and fuzziness; the larger the entropy, the larger the cloud granularity (where cloud granularity reflects the degree of dispersion of the cloud droplets). The hyper entropy He refers to the index changing under the random state and represents the uncertainty of the entropy; the higher the hyper entropy, the thicker the cloud (where cloud thickness reflects the dispersion of the cloud), and the greater the randomness of the membership degree of x belonging to U. Membership is calculated from the definition of the cloud model and its numerical characteristics: a droplet satisfies $x \sim N(Ex, {En'}^2)$ with $En' \sim N(En, He^2)$, and the certainty degree of x belonging to concept C is $\mu(x) = \exp\left(-\frac{(x - Ex)^2}{2 {En'}^2}\right)$. The distribution of x over U is a normal cloud. Figure 2 shows the numerical characteristic cloud of the cloud model; in the figure, Ex = 0.5, En = 0.1, He = 0.01.
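As a concrete illustration, here is a minimal sketch of the forward normal cloud generator defined above, written in Python with NumPy (an assumption; the paper does not specify an implementation). The parameter values reproduce the cloud of Figure 2.

```python
import numpy as np

def forward_normal_cloud(ex, en, he, n_drops=2000, rng=None):
    """Generate cloud droplets (x, certainty degree) for C = (Ex, En, He)."""
    rng = rng or np.random.default_rng(0)
    en_prime = rng.normal(en, he, n_drops)              # En' ~ N(En, He^2)
    x = rng.normal(ex, np.abs(en_prime))                # x  ~ N(Ex, En'^2)
    mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))   # certainty degree
    return x, mu

# Reproduce the cloud of Figure 2: Ex = 0.5, En = 0.1, He = 0.01.
drops, memberships = forward_normal_cloud(0.5, 0.1, 0.01)
```

Plotting the droplets against their certainty degrees yields the familiar bell-shaped "cloud" whose thickness grows with He.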
Semantic Evaluation of FMEA Risk Assessment Indicators For the risk assessment of EDSF, in accordance with the professional knowledge of the expert team, a quantitative cloud evaluation can be conducted to measure the occurrence (O), severity (S), and detection (D) of the FMEA model. This study used a 7-point scale comprising extremely low, very low, low, moderate, high, very high, and extremely high. The semantic evaluation of the FMEA assessment indicators is shown in Table 1. In the risk assessment of express delivery service failure modes, the linguistic variables shown in Table 1 are used for expert assessment. The numerical domain U = (X_min, X_max) is determined by the experts. The mapping relationship between the qualitative linguistic variables and the cloud model is established by the Golden Section, also known as the Golden Ratio, in which a line segment is divided into two parts such that the ratio of the longer part to the whole length equals the ratio of the shorter part to the longer part; this ratio is an irrational number with an approximate value of 0.618. In this paper, U = [0, 1], t = 7, and the corresponding semantic variables are selected as (extremely high, very high, high, moderate, low, very low, extremely low). If the moderate cloud is set as Y₀ = (Ex₀, En₀, He₀), the t normal clouds are arranged from left to right in order and can be denoted as Y₋₃, Y₋₂, Y₋₁, Y₀, Y₊₁, Y₊₂, Y₊₃. The numerical characteristics of the 7 generated clouds are then calculated from the golden-section relations given in [61]. The transformation between the semantic variables of the evaluation levels and the benchmark cloud models is shown in Table 2. The benchmark cloud levels are generated by the forward normal cloud generator, and the digital characteristics of the reference clouds of the evaluation levels listed in Table 2 are shown in Figure 3.
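To illustrate, the sketch below constructs a seven-level benchmark cloud scale on U = [0, 1]. Since the exact golden-section relations of [61] are not reproduced here, the constants are assumptions: the Ex offsets from the middle cloud are taken as golden-ratio fractions of the domain (0.191, 0.309, 0.500), En and He are taken to grow by a factor of 1/0.618 per level outward, and the value of En₀ is illustrative. The paper's actual constants may differ.

```python
import numpy as np

PHI = 0.618  # golden-section ratio

def golden_section_scale(x_min=0.0, x_max=1.0, en0=0.039, he0=0.005):
    """Seven benchmark clouds (Ex, En, He) on [x_min, x_max].

    Assumed scheme (illustrative only): the middle cloud sits at the
    domain centre; Ex offsets follow golden-ratio fractions of the
    domain, while En and He grow by 1/PHI per level outward.
    """
    span = x_max - x_min
    ex0 = x_min + span / 2.0
    clouds = {0: (ex0, en0, he0)}
    for level in (1, 2, 3):
        offset = 0.5 * PHI ** (3 - level) * span  # 0.191, 0.309, 0.500
        en = en0 / PHI ** level                   # entropy grows outward
        he = he0 / PHI ** level
        clouds[-level] = (ex0 - offset, en, he)
        clouds[+level] = (ex0 + offset, en, he)
    return clouds

labels = ["extremely low", "very low", "low", "moderate",
          "high", "very high", "extremely high"]
scale = golden_section_scale()
for lvl, name in zip(range(-3, 4), labels):
    ex, en, he = scale[lvl]
    print(f"{name:>14}: Ex={ex:.3f}, En={en:.4f}, He={he:.4f}")
```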
Weight Determination of Risk Factor for EDSF In this study, experts with managerial roles from the receiving, transportation, customer service, quality, transit, and delivery departments of express delivery companies are invited to evaluate the risk factors in an electronic questionnaire survey, and these semantic evaluation data are then converted into quantitative values through the three parameters of the cloud model. Note that, before the comprehensive expert survey, the risk factors need to be identified by working with a focus group and then finalized through a customer-oriented investigation. Suppose that experts E_k (k = 1, 2, …, t) use semantic evaluation variables to describe the occurrence O, severity S, and detection D of the express delivery service failure modes and quantify the semantic evaluation through the three parameters of the cloud model; the cloud quantitative evaluation values of service failure mode FM_i (i = 1, 2, …, n) are then obtained from the kth expert. The risk assessment clouds of the risk factors O, S, and D are obtained by cloud synthesis of the evaluation values from the t experts: y_iO = (Ex_iO, En_iO, He_iO), y_iS = (Ex_iS, En_iS, He_iS), y_iD = (Ex_iD, En_iD, He_iD), which together form the cloud risk assessment matrix of EDSF under the risk factors of occurrence O, severity S, and detection D. Given these quantitative evaluation values of the O, S, and D risk factors, the weights of the risk factors affect the comprehensive risk assessment value of each individual failure mode. To avoid the influence of subjectivity, this study adopts the entropy weight method to calculate the weights of the risk factors. The basic steps of this method are as follows: (1) Assuming that there are n evaluation indexes and m evaluation objects, the risk factor evaluation matrix $R = (r_{ij})_{n \times m}$ is constructed. (2) The evaluation matrix is normalized so that $r_{ij} \in [0, 1]$, where $r_{ij}$ is the value of the jth measured object on index i. (3) The entropy of index i is $e_i = -\frac{1}{\ln m} \sum_{j=1}^{m} p_{ij} \ln p_{ij}$, with $p_{ij} = r_{ij} / \sum_{j=1}^{m} r_{ij}$. (4) The weight of risk factor i for EDSF is then $w_i = (1 - e_i) / \sum_{i=1}^{n} (1 - e_i)$.
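A minimal sketch of the entropy weight computation as reconstructed above (Python with NumPy assumed); the normalized matrix values are hypothetical and stand in for the Ex values of the expert clouds.

```python
import numpy as np

def entropy_weights(R):
    """Entropy weight method for an (n indexes x m objects) matrix R.

    R is assumed already normalized to [0, 1]; a small epsilon avoids
    log(0). Returns one weight per index (row), summing to 1.
    """
    R = np.asarray(R, dtype=float)
    n, m = R.shape
    p = R / R.sum(axis=1, keepdims=True)        # p_ij = r_ij / sum_j r_ij
    eps = 1e-12
    e = -(p * np.log(p + eps)).sum(axis=1) / np.log(m)  # entropy per index
    return (1.0 - e) / (1.0 - e).sum()          # weight per index

# Hypothetical normalized Ex values of the three risk factors
# (rows: O, S, D) over five failure modes (columns).
R = np.array([
    [0.42, 0.55, 0.38, 0.61, 0.47],   # occurrence
    [0.50, 0.44, 0.58, 0.40, 0.52],   # severity
    [0.35, 0.39, 0.33, 0.37, 0.36],   # detection
])
print(entropy_weights(R))  # three weights summing to 1
```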
Comprehensive Cloud of EDSF Risk Assessment The comprehensive cloud is formed by combining two or more clouds of the same type generated on the same universe of discourse. Qualitative variables are often assigned through experts' linguistic descriptions. The digital characteristics of the comprehensive cloud generated from n cloud models described semantically by experts are calculated as
$$Ex = \frac{w_1 Ex_1 En_1 + w_2 Ex_2 En_2 + \cdots + w_n Ex_n En_n}{w_1 En_1 + w_2 En_2 + \cdots + w_n En_n}, \quad En = w_1 En_1 + w_2 En_2 + \cdots + w_n En_n, \quad He = \frac{w_1 He_1 En_1 + w_2 He_2 En_2 + \cdots + w_n He_n En_n}{w_1 En_1 + w_2 En_2 + \cdots + w_n En_n}. \quad (7)$$
In Equation (7), Ex₁, Ex₂, …, Ex_n are the expectations, En₁, En₂, …, En_n the entropies, and He₁, He₂, …, He_n the hyper entropies of the express delivery service FMEA, and w₁, w₂, …, w_n are the weights obtained from Equation (6). Suppose two clouds are Y_α = (Ex_α, En_α, He_α) and Y_β = (Ex_β, En_β, He_β); the Hamming distance (HMD) between them is then defined on their digital characteristics (Equation (8)). The semantic evaluation values of the risk factors O, S, and D given by the expert group are expressed by basic clouds. Considering the weights of the risk factors, the digital characteristics of the comprehensive cloud assessment of the EDSF risks are calculated as
$$Ex_i = \frac{w_O Ex_{iO} En_{iO} + w_S Ex_{iS} En_{iS} + w_D Ex_{iD} En_{iD}}{w_O En_{iO} + w_S En_{iS} + w_D En_{iD}}, \quad En_i = w_O En_{iO} + w_S En_{iS} + w_D En_{iD}, \quad He_i = \frac{w_O He_{iO} En_{iO} + w_S He_{iS} En_{iS} + w_D He_{iD} En_{iD}}{w_O En_{iO} + w_S En_{iS} + w_D En_{iD}}, \quad (9)$$
where w_O, w_S, and w_D represent the weights of the occurrence (O), severity (S), and detection (D) risk factors in the ith failure mode, respectively. Risk Ranking for EDSF Based on TOPSIS To sort the failure modes, the TOPSIS method is adopted. Often combined with fuzzy theory, it is a multi-objective decision analysis method used in various applications [62][63][64][65]. The risk ranking of the failure modes is determined by the relative closeness coefficient (U_i). The specific steps are as follows [66]: Step 1: Determine the weights of the risk factors of the target attributes according to Equations (4)-(6). Then apply the weights to the risk assessment cloud matrix B to obtain the weighted cloud matrix B'. Step 2: Establish the risk assessment cloud matrix and determine the cloud positive ideal solution (CPIS) and the cloud negative ideal solution (CNIS). The CPIS is the cloud with the least risk, while the CNIS is the cloud with the greatest risk [67]. For an efficiency-oriented index (J⁺), the CPIS selects the cloud with the greatest risk, while the CNIS selects the cloud with the least risk; for a cost-oriented index (J⁻), the CPIS and CNIS are given by Equations (11) and (12), respectively, where max b_ij denotes the b_ij that maximizes Ex (if the Ex values are equal, the b_ij with the lowest En and He is selected), and min b_ij denotes the b_ij that minimizes Ex (again, if the Ex values are equal, the b_ij with the lowest En and He is selected). Step 3: Calculate the distances between the comprehensive risk assessment cloud of Equation (9) and the cloud positive and negative ideal solutions of Equations (11) and (12): the distance to the cloud positive ideal solution is $d_i^+ = HMD(b_i, B^+)$ (13), and the distance to the cloud negative ideal solution is $d_i^- = HMD(b_i, B^-)$ (14). Step 4: Calculate the relative closeness coefficient for EDSF to determine the risk ranking: $U_i = \frac{d_i^-}{d_i^+ + d_i^-}$ (15).
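The following sketch strings the pieces together for one failure mode (Python assumed). Because the paper's Hamming distance, Equation (8), is not reproduced above, the `cloud_distance` helper uses a simple summed absolute difference of the digital characteristics as an assumed stand-in; the O/S/D weights and the ideal solutions are the values reported later in the case study, while the O/S/D clouds themselves are hypothetical.

```python
import numpy as np

def comprehensive_cloud(clouds, weights):
    """Aggregate per-factor clouds (Ex, En, He) into one cloud, Eq. (9)-style."""
    ex = np.array([c[0] for c in clouds])
    en = np.array([c[1] for c in clouds])
    he = np.array([c[2] for c in clouds])
    w = np.asarray(weights)
    den = (w * en).sum()
    return ((w * ex * en).sum() / den, (w * en).sum(), (w * he * en).sum() / den)

def cloud_distance(a, b):
    """Assumed stand-in for the paper's Hamming distance, Eq. (8):
    summed absolute difference of the three digital characteristics."""
    return sum(abs(x - y) for x, y in zip(a, b))

def closeness(cloud, cpis, cnis):
    """Relative closeness coefficient U_i = d- / (d+ + d-), Eq. (15)."""
    d_plus = cloud_distance(cloud, cpis)
    d_minus = cloud_distance(cloud, cnis)
    return d_minus / (d_plus + d_minus)

w_osd = [0.474, 0.313, 0.213]  # weights reported in the case study
fm = comprehensive_cloud(
    [(0.45, 0.08, 0.01), (0.50, 0.07, 0.01), (0.40, 0.09, 0.01)],  # hypothetical O/S/D clouds
    w_osd,
)
cpis, cnis = (0.511, 0.269, 0.025), (0.347, 0.254, 0.026)  # B+ and B- from the case study
print(round(closeness(fm, cpis, cnis), 3))
```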
Risk Assessment Indicators for EDSF To assess the soundness of the methodology for the risk assessment of EDSF, we conducted an empirical study of express delivery service in China. The main reason is that the express service industry has experienced significant growth in China and has become a critical sector for society. The enormous user base makes information collection convenient, while the results could potentially benefit this important industry. The research team carried out a field study with the major express delivery service companies in China, such as SF Express, STO Express, YTO Express, ZTO Express, YunDa Express, and China Post EMS. We interviewed a focus group of about 20 people from the quality management and customer service departments and consulted with them about customers' complaints. As a result, the initial evaluation indices for the risk assessment of EDSF were established and categorized in accordance with the four major stages of the express service operation process: picking-up, processing, transportation, and delivery. The initial evaluation indicators are shown in Table 3; for example, "receiving signature issue" denotes the release of packages without following the operational procedure (such as checking ID), "poor service attitude" denotes impatient or rude service from the delivery people, and "no response to complaints" denotes customer concerns and complaints that are not handled in a timely manner. We further conducted a customer-oriented investigation of the importance of the initial evaluation indicators, which determined the final EDSF risk assessment indicator system. In this regard, the entire list of indicators developed with the focus group was provided to the customers, instead of only those indicators directly connected to their personal experience, because most customers possess a certain knowledge of "what could go wrong" across the entire express delivery process. In addition, when the initial list of indicators was given to the general customers, the meaning and cause of each indicator were explained in the survey. As a result, the customers were empowered, and less important indicators could be filtered out by them. The survey results from the general customers should still be regarded as complementary to the field interviews with the focus group. The importance of each index was scored on a 7-point Likert scale: 1 means extremely unimportant, 2 unimportant, 3 somewhat unimportant, 4 neutral, 5 somewhat important, 6 important, and 7 extremely important. The survey was distributed through phone calls, email, and/or social media platforms to customers in all walks of life who had used or received express services (namely, senders or addressees). A total of 500 questionnaires were collected from May to August 2020; after eliminating questionnaires with invalid entries, 491 questionnaires were kept. To test the quality of the questionnaire and data, reliability and validity tests were performed using SPSS 25.0. The results in Tables 4 and 5 indicate that the reliability of the questionnaire is 0.938 and its validity is 0.965, which reflects a good internal consistency between the variables and the questionnaire items. The cloud model requires the sample data to be normally distributed: the measured skewness and kurtosis should be less than 3 and 10, respectively. The analysis using SPSS 25.0 leads to the descriptive statistics of the survey, as shown in Table 6. The output results show that the largest absolute value of the skewness coefficient of any item is 0.583, far below the standard value of 3, and the largest absolute value of the kurtosis coefficient of any item is 0.762, which clearly meets the requirement that the kurtosis be less than 10. The results therefore fit the requirements of a normal distribution. In addition, the standard deviations of the obtained indicators are all less than 2, showing that the perceived importance of each indicator is relatively consistent. On the other hand, the mean scores of the "Unreasonable route" and "Unexpected charges" indicators are less than 4 points, while all other indicators have mean scores above 4.5. Because the median of the 7-point Likert scale is 4, an indicator with an average score lower than the median is deemed less important; these two indicators were therefore eliminated from further consideration. Finally, 18 EDSF modes were determined as the risk assessment indicator system, expressed as FM_i (i = 1, 2, …, 18), as shown in Table 7.
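A sketch of the screening step just described, assuming the 491 valid responses sit in a hypothetical CSV file with one column per indicator (pandas assumed):

```python
import pandas as pd

# One column per candidate indicator, one row per respondent (hypothetical file).
survey = pd.read_csv("edsf_survey.csv")

stats = pd.DataFrame({
    "mean": survey.mean(),
    "std": survey.std(),
    "skew": survey.skew(),
    "kurtosis": survey.kurt(),
})

# Screening rules used above: |skewness| < 3 and |kurtosis| < 10 for
# approximate normality; indicators whose mean falls below the scale
# median (4) are dropped as less important.
normal_ok = (stats["skew"].abs() < 3) & (stats["kurtosis"].abs() < 10)
kept = stats[normal_ok & (stats["mean"] >= 4)].index.tolist()
print(kept)  # the retained failure-mode indicators
```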
Development of Cloud Charts After the field interview with the focus group and the customer-oriented survey, the semantic evaluation data on the risk factors of the express service FMEA (occurrence (O), severity (S), and detection (D)) were obtained through a comprehensive questionnaire survey with industry experts. The research team again visited the major express companies in China mentioned above. Experts with managerial roles from the receiving, transportation, customer service, quality, transit, and delivery departments of the companies were invited to participate in the electronic questionnaire survey from December 2020 to February 2021; the composition of the invited experts is shown in Figure 4. Eighteen invalid questionnaires among the 118 responses were eliminated, and 100 qualified questionnaires were retained. The results are shown in Tables 8-10. Tables 9-11 are used to achieve the quantitative conversion of the linguistic evaluations of the occurrence O, severity S, and detection D for the express delivery service FMEA, and Equation (2) is used to calculate the risk evaluation clouds from the expert linguistic values. According to Equations (4)-(6), the weights of the risk factors O, S, and D are w_O = 0.474, w_S = 0.313, and w_D = 0.213, respectively. Finally, the comprehensive cloud of the express delivery service FMEA can be obtained by Equation (9); the results are shown in Table 11. Figure 7b shows that Ex_FM9, Ex_FM10 ∈ (0.309, 0.500): the risk of EDSF in the transportation stage is between Very Low (VL) and Moderate (M). Figure 8a shows the risk level of the comprehensive cloud of EDSF in the delivery stage compared with the benchmark clouds, and the close-up view in Figure 8b shows that Ex_FM11, Ex_FM12, Ex_FM13, Ex_FM14, Ex_FM15, Ex_FM16, Ex_FM17, Ex_FM18 ∈ (0.309, 0.500): the risk of EDSF in the delivery stage is likewise between Very Low (VL) and Moderate (M). Ranking of EDSF Risks To obtain the ranking of the EDSF risks more accurately and clearly, this paper uses the TOPSIS method. The CPIS and CNIS are determined according to Equations (11) and (12): B⁺ = (0.511, 0.269, 0.025); B⁻ = (0.347, 0.254, 0.026). According to Equations (13) and (14), the distances of each express delivery service comprehensive cloud to the CPIS and CNIS are calculated, and, finally, the relative closeness coefficient (U_i) of the express delivery service FMEA is calculated according to Equation (15). The results, including the overall ranking of the failure modes, are shown in Table 12. The EDSF risk ranking in the picking-up stage is FM 3 > FM 1 > FM 2 > FM 4. FM 3 has the highest risk among the four service failure modes, indicating that customers are most likely to become dissatisfied and switch to a different express delivery firm if they encounter the inconsistent-charges service failure. The EDSF risk ranking in the processing stage is FM 8 > FM 7 > FM 6 > FM 5. The service failure with the highest risk is FM 8: package loss not only brings losses to customers but also damages a firm's reputation, and if it is not remedied in time, the customer's trust in the service provider declines greatly. The risk ranking of EDSF in the transportation stage is FM 10 > FM 9; compared with FM 9, the service failure caused by a lack of due diligence (FM 10) reduces the customer's satisfaction with the express company more. Compared with the first three stages, more service failures are prone to occur in the delivery stage. According to Table 8, the EDSF risk ranking in the delivery stage is FM 11 > FM 12 > … > FM 16. Among them, FM 11 is the service failure with the highest risk: if express packages cannot arrive as promised, the timeliness of the express delivery service is disrupted, which increases customer dissatisfaction and decreases customers' loyalty to the firm. FM 12 is the second most significant. While leaving packages at a pick-up place does provide great convenience for carriers and customers, the extra charge incurred is usually absorbed by the customers, and such action without the customers' consent indeed leads to dissatisfaction. FM 13 occurs easily owing to the leakage of customer information; under the increasing demand for privacy protection, customers regard this as a major risk. Managerial Implications Based on the above analysis, among the 18 EDSF modes in the four major stages, the service failure modes with high risk in the processing and delivery stages are loss of package, rough handling, sorting error, privacy leakage, unauthorized delivery to a pick-up place, and poor service attitude. At the same time, six service failures with relatively low risk in the picking-up and transportation stages are delayed transportation, lack of due diligence, inconsistent charge rate, service acceptance error, handover omission, and poor network coverage. On the basis of these findings, the following suggestions are developed for express delivery companies to help them identify the key failure points, develop service remedial measures, reduce losses from failures, and improve service quality and customer satisfaction. (1) Management should enhance the sorting and processing operations by standardizing the basic operating procedures and eliminating human errors.
Firms are also advised to increase investment in facilities and equipment to reduce handling errors caused by aging equipment or software. (2) Express delivery firms should establish an effective insurance claim system. To deal with weather and other force majeure factors, firms should strengthen their efforts to guide customers to purchase the necessary insurance to reduce the losses caused by those factors; in this way, the interests of both customers and firms are protected. (3) Firms should provide continuous education and training to employees to improve their work skills, so that employees become better prepared to cope with various emergency scenarios and improve their sense of responsibility, service awareness, and adaptability. (4) Firms should enhance the tracking of service responsibilities. It is essential to know who is responsible when service failures occur and to establish a rewarding system for employees. Meanwhile, the after-sales service of express delivery should not be neglected, so that customer feedback can be properly collected and analyzed. Conclusions In brief, this paper presents an improved Failure Mode and Effects Analysis (FMEA) approach based on the uncertainty reasoning cloud model and the TOPSIS method to evaluate the risk of express delivery service failure (EDSF). The approach is implemented in an empirical study of EDSF in China. The major contributions are summarized as follows: (1) This study addresses the research gap in the risk assessment of express delivery service failure. The risk assessment indicators for EDSF established by the empirical study provide a useful reference for in-depth study and enrich the body of knowledge related to express delivery service failure. (2) Compared with other decision techniques, this paper provides a new perspective on FMEA by constructing decision matrices from the expectation Ex, entropy En, and hyper entropy He of the cloud model, which describe the randomness and fuzziness in uncertain information and decrease the information loss in the transformation process. The integration with the TOPSIS method further generates comprehensive closeness coefficients. The approach provides a comprehensive decision process, makes the results more reasonable, and thus enhances the risk detection ability for EDSF. (3) Based on the empirical study of express delivery service in China, this paper finds that the six service failure modes with the highest risk are mainly located in the processing and delivery stages, while six service failures with relatively low risk are involved in the picking-up and transportation stages. The findings provide a decision-making basis for express firms to mitigate express delivery service failures and take remedial measures. Limitation and Future Research Directions This study has certain limitations. First, although we shared the findings with the field experts and they were generally in agreement, the rankings have not yet been validated against day-to-day operations through long-term data accumulation. Second, the data collection in the empirical study is limited to the well-known major express companies and may not cover all representative customer groups in China.
Third, although this research is intended to identify the critical express service failure modes through a field study with a focus group, a customer survey, and expert questionnaires, regional differences and the correlations among service failure modes are not considered. Last, the research focuses on the risk evaluation of express service failures, while the corresponding recovery and remedial measures for the critical service failures are not addressed. In the future, immediate research extensions are called for to address the above limitations. While this is the first time that EDSF has been classified in this way, the findings need to be further validated through long-term data collection. In addition, the research could be extended by collecting data from small and medium-sized delivery service companies in China and by expanding the customer questionnaires to more user groups; in this way, differences in findings might be observed between the major companies and the smaller companies and/or between customer groups. Similarly, the methodology may be adopted to investigate express service failures in other countries, so that new issues might be identified and regional differences could be revealed. Meanwhile, the remedial measures and recovery issues arising from express delivery service failures should be further studied to effectively prevent and avoid high-risk service failures. In addition, the proposed FMEA approach combining the cloud model with the TOPSIS method deserves further improvement; other MCDM methods [52,53,68] could be attempted and compared with the approach proposed in this study. Lastly, when the dataset from empirical studies is meaningfully large, machine learning approaches could be incorporated to further improve robustness.
10,348.8
2021-09-21T00:00:00.000
[ "Engineering" ]
Attractive Workplaces: What Are Engineers Looking for? Competing for talents requires a conscious effort to offer an attractive workplace, which, until recently, involved increasing employee empowerment and engagement and offering opportunities for bottom-up innovation. Today, this is not sufficient, pushing tech companies to harmonize existing strategies with remote work. We identified three important trends in the literature: agile teamwork (focusing on social ties and collaboration), autonomy and empowerment (organizational democratization), and, most recently, flexibility (the ability to work remotely). Motivated to understand how companies can navigate these transformations and trends to retain talent, we studied the journey of SpareBank 1 Utvikling (SB1U) from having a high resignation rate to becoming one of the most innovative tech companies in Norway. We analyze their current situation as they discover the changing needs of their employees regarding remote work and readjust their strategy. The study began in 2018, when their employee turnover was high, and ended in 2022, when the company implemented policies for hybrid work to accommodate developers' need for flexibility (Figure 2). The Case and the Study SB1U is a Norwegian software company owned by an alliance of banks. SB1U has used agile software development since 2012 and has worked for years on scaling its software development capacity. Therefore, hiring and retaining in-house developers was strategic. The bank has 25 software teams, each of which typically comprises five to six developers, a tester, a user experience designer, a product owner, and a team leader; team size varies from five to 20 members. At the beginning of the study, SB1U had around 550 employees (including consultants), and at the end of the study it had 700 employees. The teams work on digital product development, including security, operation, and administration, for the web and mobile banking domains. We studied SB1U because the company has been dedicated to attracting and retaining talents for many years and has transformed itself into one of the most innovative tech companies in Norway, with a top-rated mobile banking app in Apple's App Store. Our study was longitudinal (2018-2022) and based on qualitative and quantitative data (surveys, interviews, access card records, and documents) obtained from four phases (Figure 2): • phase 1 occurred during the prepandemic period, when most employees were co-located (except for four distributed teams) • phases 2 and 3 occurred in the pandemic period, when employees were forced to shift to working remotely • phase 4 occurred in the intermediate pandemic period, when the offices reopened. Most of the interviewees and survey respondents were developers and testers, but we also interviewed human resources managers, developer managers, and team leads and received feedback on the preliminary findings from the leadership group. Furthermore, we collected documents from the case company. The interviews were coded in NVivo, using thematic analysis with predefined codes.
Our analysis is descriptive and focuses on the challenges related to attracting and retaining talents in the four phases, the actions taken to address these challenges, and the results produced by these actions. We believe that the findings of this case study will be useful for reflection within both tech and nontech companies, and that the strategies implemented by SB1U, which address challenges related to job satisfaction and employee recruitment and retention, will inspire those who face similar issues. SB1U's Journey Toward Offering Attractive Jobs Phase 1: Increase Empowerment and Engagement In 2018, SB1U was blighted by poor job satisfaction and high employee turnover, and in the following two years, the employees worked hard to change its reputation. Practices That Foster Autonomy and Commitment as the Starting Point. Autonomous teams and teamwork were at the heart of the ways of working. Teams had considerable freedom to decide how they worked, and most used a Kanban variant with elements of Scrum and coordination practices, such as backlog meetings, team meetings, and daily stand-ups. They adopted objectives and key results to guide their work as well as "Monday commitments" and "Friday wins" to strengthen teamwork. They also regularly performed team health checks with follow-ups in one-on-one conversations between team leads and individual team members. They used retrospectives to improve work practices and structured problem solving for continuous improvement. Contemporary Architecture That Enables Empowerment as the Next Step. For some years, SB1U worked on moving away from its legacy monolithic technical architecture, which is typical for banks, toward a microservice architecture. A tech lead noted: We broke up one application into several applications. Then you are allowed to have teams around those applications. And then you force an organizational change. The modular architecture, tools, and automation were imperative for teams to have end-to-end responsibility and decision-making authority for their products, avoid handovers between teams, and be able to continuously develop software using DevOps.9 Innovation and Self-Development Time to Fight Turnover. At the beginning of 2018, the company had problems retaining and hiring new, qualified developers because of their high demand and because other companies offered better employment conditions (good salary and regular social and skill-building activities). To address these problems, the management asked employees for suggestions. One key suggestion was scheduling time for building new competencies, similar to the offerings of tech giants (20% time at Google and FedEx Day at Atlassian). After initial skepticism and cost-benefit calculations, SB1U decided to test a "20% policy" (a competence day) for six months. To the company's surprise, employee turnover soon decreased. Since then, developers can spend every Thursday learning, testing new technologies, creating new solutions, or improving common code. This day also became a day for socializing and getting to know new people. One explained: It gives extra motivation. You get a bit of freedom to learn what you think is most important. Now I'm learning a testing tool that we will use here…. And the cool thing is that we gather in a room and learn together. The ability to spend a day on their own projects also made developers feel valued by the company, as one commented: It is unique that we can use one day a week as we want.
I feel my company appreciates my skills and supports my professional development. Not least, it also improved developers' well-being and reduced stress, as one explained: I think more companies should implement this practice. It gives a feeling of freedom; one day a week, you can get your head above water. Unsurprisingly, the new 20% policy became important in attracting new developers, as none of the competitors offered similar opportunities. Networking and Collaborative Culture. In complement to individual learning, the 20% policy increased the activity in guilds, known as communities of practice. Earlier, community members had problems scheduling meetings; now they could meet on Thursdays. Communities of practice, or guilds, are groups of people with similar skills and interests who share knowledge, make joint decisions, solve problems together, and improve a practice. A tech lead explained: We have a security guild that often meets on Thursdays to discuss security-related topics, and we encourage others to stop by these days to learn with us. Phase 2: Preserve Job Satisfaction and Empowerment On Friday, 13 March 2020, all schools and kindergartens in Norway closed because of the COVID-19 virus outbreak. Social distancing was introduced as a national policy. The SB1U teams suddenly transitioned to a completely distributed, digitally mediated setup where all employees worked from home. This change had consequences for individual job experiences and the community feeling, as work became more individual. At the same time, the meaningfulness of the work at the bank remained, as the employees were reaping the benefits of their investments in empowerment and engagement. Employees reported being highly motivated during the first phase of the lockdown. The Work Became Less Social as Collaboration Dropped. In phase 1, co-location was an important enabler for the agile organization to function and for teams to have shared values and a high level of trust. When all employees suddenly began remote work, communication and opportunities to socialize changed. According to the survey in phase 2, 34% experienced that collaboration with other units had worsened, and 78% felt that the work became less social. One example was the competence day (the 20% policy), which was now organized over Microsoft Teams. A tech lead explained: [The competence] day was somewhat dead in the water in the beginning because it became difficult to get together. Some people continued working on their stuff, but it became very individual […]. There was little community around it. But we began restarting it now, even if it is on a video call. Decreased Spontaneity and the Rise of Plan-Driven Interaction. An important observation was that developers previously had more informal knowledge exchanges (coffee machine conversations, over-the-shoulder inquiries, and hallway chatting). Any informal deliberations now had to be formally scheduled, which flooded the calendars. Teams organized social gatherings, such as coffee breaks, on Microsoft Teams and arranged social quizzes using specific tools to preserve the community feeling. We observed a separation between formal knowledge deliberations (such as stand-ups) and more social events (such as digital coffee breaks).
Technical Equipment. Despite the sudden transition to working from home (WFH), the necessary collaboration continued, as evident in SB1U keeping the same pace of deliveries. This was attributable to the teams having the authority to find the best way to operate, the digital production tool chain using GitHub and Maven, and the tools for collaboration, such as Confluence, Trello, Microsoft Teams, and Slack. The first survey in 2020 revealed that the more employees relied on collaboration tools such as Slack, the more they carried out informal and nonwork-related conversations. The downside of computer-mediated communication was that many individuals chose to communicate over private channels or via direct messaging, which meant that they lost some of the informal knowledge sharing that happened when overhearing the chatter in the common areas of offices. Besides, some challenges remained, such as resolving complex issues together, which traditionally involved lengthy discussions and drawing designs and ideas on whiteboards. A developer commented: I miss the whiteboard so much; standing and looking at people when you talk and seeing that they don't understand anything, that I have lost them, and then explaining again. That is so much easier when collocated. Ensuring Well-Being and Ergonomics. SB1U cared for employee well-being despite significant budget cuts in the first months of the pandemic because of uncertainty about the future economic situation. The cuts concentrated on reducing the number of consultants, whereas important practices, such as the 20% policy, were kept. In addition, SB1U launched a reimbursement program for home office equipment (€500) so that everyone could get external monitors and chairs to ensure an ergonomic setup. Managers maintained close contact with employees through regular one-on-ones, which was regarded as an important practice that helped sustain the teams. One core conclusion from these sessions was that some employees did not have suitable conditions for WFH and some felt isolated. Therefore, some employees were encouraged to come to work even though the office was closed. Furthermore, they were allowed to hold physical meetings, if the task required so, in line with the social-distancing safety regulations. A developer stated: We have some settings where we say, "Now we need a face-to-face workshop again." So I think I have, since the COVID outbreak, been at the office four times […]. But it is limited in a sense because we cannot fit that many in a meeting room […]. Nevertheless, it is more effective, quite simply. Phase 3: Develop Healthy Work Practices After dealing with the immediate challenges of setting up full-time remote work, the bank moved to developing healthy work practices. Fewer interruptions were the main WFH benefit, but developers' workday experiences and team disturbances still called for improvements. Challenges With Online Meetings and Digital Interruptions. Many developers felt that too much time was spent in meetings. An increased number of meetings subsequently increased the number of interruptions and reduced the focused work time, which caused stress. Furthermore, a high meeting load reduced employees' and teams' availability.
One team member explained: A problem for me is that many others are in a lot of meetings, so it is difficult to fit into their calendars. A number of key people sit in meetings all day, and when I need a meeting with them, I have to go two weeks ahead in their calendars to find a vacant slot. And then my work also gets very delayed. Because agile processes depend on continuous deliveries and open communication, not being able to have short, spontaneous conversations was perceived as negative. To facilitate more unscheduled meetings and effective decision making, the number of scheduled meetings had to decrease. Healthy Meeting Culture. Researchers, together with an internal group of employees, created a survey in 2021 to better understand the meeting load and its effects. All survey respondents, independent of their meeting load, wanted more consecutive meeting-free hours. Therefore, several teams tested out reserving meeting-free time in their calendars and grouping team meetings. The latter was evidently a tradeoff between interruptions and potentially increased meeting fatigue. Having back-to-back meetings was challenging. One employee said, "I do not have time to take breaks between meetings." One tried-out strategy was to adjust the standard calendar time from autofilled one-hour slots to 50-min slots starting at the hour. However, a shorter meeting duration did not change people's behavior, and most meetings continued over the full hour. Another attempt, setting the standard meeting start time to 10 min into the hour, yielded a significant improvement. Maintaining a Healthy Level of Digital Interruptions. To further reduce the number of interruptions, some employees shielded themselves by disabling Slack notifications for specific channels or over a particular time period. In addition, the teams were encouraged to be more reserved when answering direct messages and to use open channels. One informant said that muting notifications was something they had wanted but seldom did because they would feel guilty. Furthermore, some feared missing out on important information. One stated: We use Slack so much during the day, I'm afraid of shielding myself too much. Increased Pairing. A final action to improve the quality of work was to encourage more pair programming. Pairing resulted in fewer interruptions, a healthier communication pattern because of constant feedback, and reduced use of pull requests. Pair programming became an important strategy for remote collaboration, as a team leader described. Both employees and the organization benefited from flexibility. Because each team had its own home zone, the bank now found a way to balance the individual, team, and organizational needs in the new flexible work life.
Emerging Tradeoffs SB1U's journey to becoming an attractive workplace started by establishing continuous dialogue and conducting surveys to better understand their employees' needs. Mapping the efforts of SB1U to the core value propositions proposed by Mortensen et al.,2 one can see that the largest investment went into establishing and strengthening connections, community, and the social environment, along with increasing the opportunities for growth and development. This is evidenced in the increased teamwork orientation: enabling autonomous teamwork through architectural changes, supporting communities, and offering time to work on activities of free choice, which often happened in collaboration. Our findings show how these efforts increased job satisfaction and employee retention before the pandemic. Next, the forced work in isolation surfaced the employees' needs for individual flexibility (associated with material offerings2). The actions taken to address the challenges faced by SB1U could be described as supportive leadership. Despite the temptation to "handle the crisis" using increased control, management continued to exhibit trust and support for both individual and team well-being. Moreover, despite the difficult economic situation, the company remained committed to the chosen course by retaining the 20% policy and offering material support for home office equipment. However, the social environment suffered because of remote work. SB1U is not the only company that experienced how connections and community suffer when working remotely. A Microsoft study of over 60,000 employees shows that firm-wide remote work made the collaboration network more static and siloed, with fewer ties that cut across formal business units, because of asynchronous communication.11 Santos and Ralph studied coordination in hybrid software teams and found that the feeling of attachment and cohesion in these teams was decreasing.12 Finally, several studies found that interest in collaborative work decreases when it is done remotely.13,14 With the reopening of offices, the social connections and community life were expected to improve. However, the better-than-expected personal experiences and new working habits led many to continue to WFH. So what can we learn from this? Employee willingness to continue to WFH8,15 may indicate that one of the core value propositions emphasized in the past (connection and community)2 has diminishing value in the employees' eyes. Alternatively, one factor may simply be the habit of WFH. We can conclude that SB1U, together with similar companies, has experienced a recent increase in turnover. The reasons for this can be manifold, including decreased collaboration, community feeling, and sense of belonging11,12 because of the inability of recent hires to develop meaningful relationships, delayed decisions on changing jobs during the pandemic, and aggressive actions of job hunters. SB1U and many other companies have to decide whether it is worth attempting to satisfy everybody by allowing full flexibility. Ironically, full flexibility results in many being dissatisfied, as those who prefer to work in a social environment with many others want everybody to be back, whereas those who prefer to WFH would rather collaborate with everybody remotely instead of being second-class citizens in a hybrid setup. SB1U chose to continue their journey focusing on strengthening teams, social connections, and community. This, however, meant introducing mandatory office presence, like some of SB1U's competitors who force people back three to four days a week. A similar strategy was adopted by Amazon and Apple. The definitive destiny of flexibility to WFH as a value proposition of an attractive workplace is yet to be determined and requires additional research.
FIGURE 1. Trends focusing on agile teamwork, autonomy and empowerment, and flexibility (based on the number of articles published in Scopus), and corresponding corporate policies.
4,246.2
2023-09-01T00:00:00.000
[ "Engineering", "Computer Science" ]
MODELING THE PROCESS OF POLYMERS PROCESSING IN TWIN-SCREW EXTRUDERS Manufacturers use single-screw extruders for continuous fabrication of products from thermoplastic polymers. Recently, they have also started to use twin-screw extruders. Twin-screw extruders demonstrate high productivity and mixing ability, as well as stability of the processing parameters when the forming tool (extrusion head) is replaced [1–4]. Currently, producers of equipment for the processing of polymers offer many twin-screw extruders with various geometries, which significantly complicates the selection of the necessary equipment. That is why mathematical modeling of the process of twin-screw extrusion acquires great importance: it makes it possible to select the most efficient equipment quickly and considerably reduces expensive experimental research and development of a polymer technology. Many studies consider the modeling of single-screw extruders in detail (for example, papers [2, 3, 5–7]). However, much less attention has been paid to the modeling of twin-screw extrusion. Many authors have investigated the rather complex process of twin-screw extrusion over recent years, making various assumptions to simplify the mathematical description of the process. Such an approach was acceptable for a long time, but as the productivity of extruders has increased significantly over time, many processing models have become unacceptable for practical use. Therefore, modeling the process of polymer processing in twin-screw extruders, taking into account actual boundary conditions as well as the heat exchange of the polymer with the screws and the extruder barrel, is relevant. Introduction Manufacturers use single-screw extruders for continuous fabrication of products from thermoplastic polymers. Recently, they have also started to use twin-screw extruders. Twin-screw extruders demonstrate high productivity and mixing ability, as well as stability of the processing parameters when the forming tool (extrusion head) is replaced [1][2][3][4]. Currently, producers of equipment for the processing of polymers offer many twin-screw extruders with various geometries, which significantly complicates the selection of the necessary equipment. That is why mathematical modeling of the process of twin-screw extrusion acquires great importance: it makes it possible to select the most efficient equipment quickly and considerably reduces expensive experimental research and development of a polymer technology. Many studies consider the modeling of single-screw extruders in detail (for example, papers [2,3,[5][6][7]). However, much less attention has been paid to the modeling of twin-screw extrusion. Many authors have investigated the rather complex process of twin-screw extrusion over recent years, making various assumptions to simplify the mathematical description of the process. Such an approach was acceptable for a long time, but as the productivity of extruders has increased significantly over time, many processing models have become unacceptable for practical use. Therefore, modeling the process of polymer processing in twin-screw extruders, taking into account actual boundary conditions as well as the heat exchange of the polymer with the screws and the extruder barrel, is relevant.
Literature review and problem statement Any thermoplastic material has certain properties; therefore, a corresponding geometry of the operating bodies (primarily the screws) and a processing mode are necessary for processing the material in a twin-screw extruder. One of the ways to choose the most effective extruders, which provide the required product quality, is mathematical modeling. Processing of polymers in twin-screw extruders is much more difficult than in single-screw ones. This is explained by the presence of two screws rotating in the same or opposite directions, and by the shape of the channel of the body (barrel) of the extruder [8][9][10]. Researchers also point out that a lack of reliable software constrains the mathematical modeling of twin-screw extruders, so the foundations for design remain experimental and practical data [9]. Due to the above, studies of twin-screw extrusion were for a long time devoted to the analysis of the mixing process (mainly for screws that rotate in the same direction). For example, paper [11] considers designs of mixing elements of screws with or without interlocking engagement. Paper [12] investigates the effectiveness of screws with neutral and reversible pin mixing cams, and papers [13,14] investigate screws with oval cams. Work [15] presents results of similar studies on screws with oval cams. Paper [16] presents a classification and considers constructions of mixing-dispersing elements of screws in detail. The authors of paper [17] studied the hydrodynamics and mixing effect of a co-rotating twin-screw extruder in detail. However, the analysis of the polymer melt velocity is performed in the isothermal approximation, which can lead to a significant error in the processing of high-viscosity polymers. Work [18] investigates the velocity field and pressure in the channel of a co-rotating twin-screw extruder; there is no analysis of the temperature field of the polymer. Papers [19,20] present results of a study of the dependence of the productivity of a co-rotating twin-screw extruder on the pressure along its operating channel. Screws with channels of different geometry and different rotational velocities are investigated. However, the mentioned studies are relevant only in the absence of a dispenser at the inlet to the extruder, which in practice is extremely rare. Work [21] considers the analysis of the mixing process and the distribution of pressure along the operating channel of a co-rotating twin-screw extruder. Paper [22] provides similar studies (regarding screws with mixing cams). The mentioned works do not consider the influence of the heat supply systems of the screws and the barrel on the temperature field of the processed material, which significantly affects the quality of the obtained products. Work [23] investigates the melting velocity of polymer granules in a twin-screw extruder with counter-rotating screws; specifically, it determines the length of the melting zone as a function of the rotational velocity of the screws. The work considers the processing of the polymer in the gap between the screws by analogy with the processing on roller machines [24]. In addition, it does not show the dependence of the temperature of the processed material on the length of the extruder channel, which makes it difficult to analyze the influence of process parameters on the quality of the obtained products.
The aim of most experimental and practical studies is the analysis of the mixing ability of twin-screw extruders and the determination of their productivity. At the same time, studies rarely consider the analysis of the temperature field of the processed material or the determination of the dissipation power. However, exactly these parameters determine the quality of the polymer melt and of the products obtained in an extruder. Due to the peculiarities of the structural design of the operating channel of twin-screw extruders, the heat supply systems of the screws and the barrel, as well as the real boundary conditions, significantly influence the process of polymer melting and the provision of the necessary temperature mode of processing. Given the above, a new approach is necessary for modeling the process of twin-screw extrusion. The aim and objectives of the study The objective of the study is the mathematical modeling of the processing of polymeric materials in the operating channel of twin-screw extruders. The model should take into account the dosed feeding of extruders with processed material, the availability of heat supply systems for the operating elements, and also the real boundary conditions (geometric, velocity, and temperature conditions). This makes it possible to determine rational parameters of the twin-screw extrusion process that ensure the required temperature distribution of the polymer at the outlet from the extruder at its given productivity. The mentioned parameters include the method of heating or cooling of the screws and the extruder body, the type of heat-transfer agent, its temperature and volume flow, the geometry of the operating channel of the screws, and the frequency of their rotation. It is necessary to solve the following tasks to achieve the objective: - consideration of the features of full and partial filling of the operating channel of an extruder with processed material and of the heat transfer of the polymer with the rotating screws and the stationary barrel; - theoretical investigation of the process of polymer processing in the operating channel of an extruder; - experimental verification of the adequacy of the developed mathematical model. 1. Modeling of the process of polymer processing in a twin-screw extruder with counter-rotating screws Among twin-screw extruders, some of the most common are extruders with counter-rotating, fully intermeshing screws, closed both in the longitudinal and in the transverse direction. A feature of such extruders is that the spiral channels of the screws form a series of C-shaped sections almost isolated from each other, each of which holds a certain volume of the processed thermoplastic material (TpM) (Fig. 1). Taking the above into account, we proposed a model of the separated volume limited by one turn of a screw [5,25] for the analysis of the process of twin-screw extrusion.
Material that enters a C-shaped section of the operating channel of the extruder from the feeding box moves, as the screws rotate, in the direction of the molding head, and two C-shaped volumes of TpM leave the extruder per screw revolution. The productivity of the extruder is almost independent of the resistance of the molding head [5,26]. The depth of the screw cutting can be relatively large, which reduces the deformation velocity of the TpM and the intensity of dissipation, and leads to an increase in the fraction of heat supplied to the TpM from the wall of the barrel. Such a scheme of TpM motion provides equal residence times in the channels of the screws, which is especially important for the processing of heat-sensitive materials. Gaps between the screws and between a screw and the barrel lead to some reduction in productivity, but they improve the mixing of the TpM. The transfer of material through the mentioned gaps from one C-shaped volume to another depends on the pressure drop between the volumes and on the resistance of the molding head. Such gaps exist between the combs of one screw and the core of the neighboring screw (roller gap) δ_rs-cs, between the lateral surfaces of the combs δ_rs-rs, and between the combs of the windings and the wall of the barrel δ_rs-b (Fig. 2). The relative velocity of the screws in the δ_rs-rs gaps is approximately equal to zero for counter-rotating screws, and therefore their influence on the intensity of dissipation is insignificant. Fig. 2. The nature of the engagement of the screws of a twin-screw extruder: a, counter-rotating screws; b, co-rotating screws. Extruders with counter-rotating, mutually engaged screws provide a high mixing effect at considerable productivity. However, in this case, radial (spreading) forces arise in twin-screw extruders, which lead to increased wear of the operating parts (primarily the barrel). Compaction of the TpM in the extruder occurs due to the reduction of the volume of a closed C-shaped section in the direction of the molding head. We should note that overconsolidation and jamming of the material are possible at certain values of the degree of compression of the screws; therefore, dosed feeding is necessary for the operation of twin-screw extruders. Theoretically, we can define the volumetric productivity of twin-screw extruders, without considering overflows, as the product of two C-shaped volumes at the extruder's outlet and the rotational frequency of the screws [5,26]. However, the theoretical dependences give significantly higher values of productivity compared with practical data, which can be explained by the presence of dispensers and the need to achieve a high quality of processing. Thus, the theoretical data give only an upper limit on productivity. On the other hand, the productivity of the dispenser determines the productivity of the extruder at dosed feeding, and therefore the determination of the extruder's own productivity is of no significant value for its further calculation, provided the dispenser's productivity does not exceed the maximum possible for the given geometry and rotational frequency of the screws. We determine the maximum possible productivity of an extruder with dosed feeding and a rotational frequency of the screws n_s. In this case, only the last C-shaped sections are completely filled, and their volume, as a rule, is the smallest. For this purpose, we consider a cross section of the extruder in a plane perpendicular to the longitudinal axes of the screws (Fig. 3) [26].
3) [26]. The area of segment ABCF equals the area of sector OAFC minus the area of triangle OAC. With flight diameter D (radius D/2) and conjugation angle β, these areas are

S_OAFC = (β/2)(D/2)^2 = β D^2/8,   S_OAC = (1/2)(D/2)^2 sin β = D^2 sin β/8,

so that

S_ABCF = (D^2/8)(β − sin β).

The cross-sectional area of the channels of the two screws is the area of two annular rings between the flight diameter D and the root diameter (D − 2h), less the intermeshing segments:

S = 2 [(π/4)(D^2 − (D − 2h)^2) − S_ABCF],

and the volume of two C-shaped sections (m^3) is this area multiplied by the open axial length per turn, reconstructed here as (s − k e):

V_2C = S (s − k e),   (1)

where s is the cutting pitch, e the flight width, and k the number of starts. Taking (1) into account, the mass productivity of the extruder is

G_M = ρ(T) V_2C n_s,   (2)

where G_M is the mass productivity of the extruder, kg/s; ρ is the density (kg/m^3) of the TpM as a function of temperature T, °C; and n_s is the screw rotational frequency, r/s. The angle β of sector OAFC follows from the intermeshing geometry (Fig. 3); for fully intermeshing screws whose centre distance is D − h it satisfies cos(β/2) = (D − h)/D.

For C-shaped sections that are not completely filled with material, the degree of fill can be defined as the ratio of the productivity of the last, completely filled section to that of the section in question, determined from formula (2). In reality, owing to the counter-rotation of the screws, the channel at the inlet to the roller gap is filled completely, and only the opposite part of the C-shaped section (at the outlet of the roller gap) remains unfilled.

If the mass productivity provided by the dispenser is known, the cross-sectional area of the TpM volume in incompletely filled sections follows from (2):

S_s = G_M / (ρ (s − k e) n_s).   (3)

On the other hand, S_s equals the difference between the areas of sectors of radii R_3 and R_2 with central angle (2π − β_1); for the two screws,

S_s = (2π − β_1)(R_3^2 − R_2^2),   (4)

where β_1 is the central angle of the annular space not filled with TpM, rad, and R_2 and R_3 are the radii of the screw core and of the inner barrel surface, respectively, m. Equating dependences (3) and (4) and solving for β_1 (with R_3 = D/2 and R_2 = D/2 − h, so that R_3^2 − R_2^2 = h(D − h)) gives

β_1 = 2π − G_M / (ρ n_s (s − k e) h (D − h)).

Paper [5] considers a general approach to calculating the hydrodynamics and temperature field of TpM in the working channels of a twin-screw extruder.

We then determine the total dissipation power ΔQ_dis in the volume of two C-shaped sections, taking into account that, under the accepted assumptions, the dissipation intensity q_dis (W/m^3) varies only along the radius. Applying the composite Simpson rule, with the volume element written here as dV ≈ Φ r Δr (Φ being the effective angular-axial extent of the filled channel), gives

ΔQ_dis ≈ (Φ Δr/3) [q_dis,0 r_0 + 4 Σ_(i=1,3,…,m−1) q_dis,i r_i + 2 Σ_(j=2,4,…,m−2) q_dis,j r_j + q_dis,m r_m],   (5)

where r_i and r_j are the radii of the corresponding elements Δr, m, and m is the number of nodes in Simpson's formula. The values q_dis,i and q_dis,j entering (5) are found from

q_dis = μ γ̇^2,   γ̇^2 = γ̇_r^2 + γ̇_z^2,

where μ is the dynamic viscosity of the TpM, Pa·s, and γ̇_r and γ̇_z are the shear-rate components along the r and z axes, respectively, s^−1. In addition, drive power is expended deforming the melt in the δ_rs-b, δ_rs-cs and δ_rs-rs gaps (Fig.
2, a). Since the screws rotate in opposite directions, there is almost no shear deformation in the lateral engagement gaps δ_rs-rs, so we do not consider them. Power expended in the roller gap δ_rs-cs also has no significant effect on the results of engineering calculations, as shown by calculations performed using the design method for roll machines [24]. The greatest power expenditure occurs in the radial gap δ_rs-b between a flight land and the barrel wall. As already noted, the influence of the pressure gradient on the velocity profile is insignificant because the gap is small, so the flow in the gap can be treated as flow between a stationary surface and a moving surface with a linear velocity distribution. The shear rate in the gap is then

γ̇ = W_s / δ_rs-b,

and the dissipation power in the gap follows (in a form reconstructed from the surrounding definitions) from

ΔQ_rs-b = μ(T_rs-b) (W_s / δ_rs-b)^2 V_gap,

where T_rs-b is the average TpM temperature in the gap between a screw flight and the barrel, taken equal to the TpM temperature near the barrel surface T_b, °C; W_s is the circumferential velocity of the flight of each screw (W_s = π D n_s), m/s; and V_gap is the volume of melt in the gap. The power expended in the gaps of two adjacent C-shaped sections of both screws is then twice this value.

2. Modeling the process of polymer processing in a co-rotating twin-screw extruder

Rotating the screws of a twin-screw extruder in the same direction allows a higher rotational frequency and excludes possible jamming of the screws. Productivity accordingly increases, with high-quality mixing of the TpM and self-wiping of the screws. In many cases the screws have an opposite-hand profile with a semicircular cut, which promotes better mixing of the melt.

Consider a three-start screw (Fig. 4). As for the counter-rotating extruder, we use a model of the allocated volume bounded by one turn of each screw (in Fig. 4, φ_s is the helix angle of the screw cutting, rad; w_x and w_z are velocity components along the x and z axes, m/s; W_s is the circumferential velocity of each screw, m/s). Fig. 4 shows that the TpM flow is redistributed between the adjacent channels at the contact point of the screws, while over the remaining channel length the motion of the TpM is similar to that in a single-screw extruder. Since twin-screw extruders operate with dosed feeding, only the last few turns of the screw cutting are completely filled; these provide the necessary pressure at the extruder outlet. The channel volume of each screw decreases along its axis both through a decrease in depth and through a reduction in cutting pitch. Such reductions usually occur stepwise along the screw, at the junctions between its structural zones. Since the TpM flow is continuously redistributed and melt in the melting zone is mixed with solid material, we assume that there is no clearly defined melting zone, unlike in a counter-rotating twin-screw extruder, and that shear deformation takes place throughout the TpM volume along the entire length of the screws.

Unlike in counter-rotating extruders, the volume of the C-shaped section is not isolated in this case (Fig.
4, a). However, material fills the last turns of the channel completely, so the axial velocity w_L of the material in these sections follows from the mass-flow equation (written here in reconstructed form)

G_M = ρ S_s w_L,   (6)

where w_L is the axial velocity of the TpM, m/s. For two screws with a k-start cutting and a semicircular channel profile of radius h_max, the cross-sectional area of the channels of the two screws is

S_s = 2 k (π h_max^2 / 2) = π k h_max^2.   (7)

Substituting (7) into (6) and solving for w_L gives

w_L = G_M / (ρ π k h_max^2),   (8)

with the helix angle of the screw cutting calculated from the standard relation tan φ_s = s/(π D). The average material velocity w_L calculated from (8) for the last screw sections is refined for the remaining sections by multiplying it by the ratio of the cutting pitch of the section to that of the last section.

The equivalent depth h of a semicircular channel is determined by replacing it with a rectangular channel of equal cross-sectional area (Fig. 4, b): equating the semicircle area π h_max^2/2 to the rectangle area 2 h_max h gives the equivalent channel depth (m)

h = (π/4) h_max.

The equivalent depth can be determined similarly at the locations of mixing cams, if present.

In cross section, the equivalent cutting forms an annular space not completely filled with material; its area in the diametral section for the two screws is, by analogy with (4), approximately

S_1 ≈ β_1 h (D − h),   (9)

where β_1 is the central angle of the cross-section of a C-shaped section not filled with TpM, rad. Substituting (9) into the mass-productivity relation (6) and solving for β_1 yields the unfilled angle.

At dosed feeding, the productivity of the extruder equals that of the dispenser, but it cannot exceed the maximum productivity G_M,max, which depends on the screw rotational frequency n_s and the geometry of the completely filled last sections. Since with co-rotation the C-shaped sections are not completely isolated from one another, single-screw extrusion theory is used to determine the productivity of the last sections. For a plane-parallel screw model with a pressure gradient close to zero, the productivity is given by the pure drag flow (reconstructed form)

G_M,max = (1/2) ρ S_s w_z,   (10)

where w_z is the velocity component along the channel of the screw unwrapped onto a plane, m/s. Substituting the drag-flow expression for w_z into (10) gives the maximum productivity explicitly for the given screw geometry and rotational frequency.

To determine the dissipation intensity q_dis in the TpM volume, the deformation rate must be found at the nodal points, by analogy with the counter-rotating twin-screw extruder. It must be taken into account that the C-shaped sections are not closed, so, unlike with counter-rotating screws, there is no longitudinal circulation in them. For unfilled sections the pressure gradient in the z direction is close to zero, so the velocity can be assumed to vary linearly over the channel height, and the corresponding deformation-rate component is

γ̇_z = w_z / h.

The dissipation power is determined by integrating the dissipation intensity q_dis over the TpM volume in the section under consideration:

ΔQ_dis = ∫∫∫ q_dis r dr dθ dz,   (11)

where r, θ, z are the cylindrical coordinates.
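To make the chain of relations above concrete, the following Python sketch strings together the reconstructed formulas: the maximum productivity from the volume of two C-shaped sections, (1)-(2), the unfilled angle β_1 of a partially filled section, and a Simpson-rule evaluation of the dissipation power of the kind used in (5) above and (12) below. All numerical inputs (geometry, density, viscosity, and the crude uniform shear-rate estimate) are illustrative assumptions, not values from the paper.

import numpy as np

# Illustrative geometry and operating data (assumed values, not from the paper)
D      = 0.125          # flight diameter, m
h      = 0.015          # channel (cutting) depth, m
s      = 0.10           # cutting pitch, m
e      = 0.012          # flight width, m
k      = 1              # number of starts
n_s    = 50.0 / 60      # screw speed, r/s (50 rpm)
rho    = 750.0          # melt density, kg/m^3 (taken constant here)
G_disp = 500.0 / 3600   # dispenser (dosed) feed rate, kg/s

# Maximum mass productivity from the volume of two C-shaped sections, eqs. (1)-(2)
beta   = 2 * np.arccos((D - h) / D)          # conjugation angle, rad
S_seg  = (D**2 / 8) * (beta - np.sin(beta))  # segment ABCF area
S_ring = np.pi / 4 * (D**2 - (D - 2*h)**2)   # annular channel area, one screw
S      = 2 * (S_ring - S_seg)                # two screws, less intermeshing
V_2C   = S * (s - k * e)                     # volume of two C-shaped sections
G_max  = rho * V_2C * n_s                    # kg/s
print(f"G_max = {G_max*3600:.0f} kg/h, degree of fill = {G_disp/G_max:.2f}")

# Unfilled central angle beta_1 of a partially filled section (R3 = D/2, R2 = D/2 - h)
beta1 = max(2*np.pi - G_disp / (rho * n_s * (s - k*e) * h * (D - h)), 0.0)

# Dissipation power over the filled part of the channel by Simpson's rule, eq. (5)
mu   = 1.0e3                       # dynamic viscosity, Pa*s (assumed)
L_f  = s - k * e                   # filled axial length per turn (as reconstructed)
R2, R3 = D/2 - h, D/2
m    = 50                          # even number of radial intervals
r    = np.linspace(R2, R3, m + 1)
gdot = np.pi * D * n_s / h         # crude uniform shear-rate estimate (assumption)
q    = mu * gdot**2 * np.ones_like(r)                  # q_dis(r) = mu * gdot^2, W/m^3
w    = np.ones(m + 1); w[1:-1:2] = 4; w[2:-1:2] = 2    # composite Simpson weights
dQ   = (2*np.pi - beta1) * L_f * (r[1] - r[0]) / 3 * np.sum(w * q * r)
print(f"Dissipation power per turn ~ {dQ:.0f} W")

In a full calculation, q_dis(r) would of course come from the computed velocity and temperature fields rather than from a uniform shear-rate estimate; the sketch only illustrates how the reconstructed relations fit together.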
Since q_dis depends only on the radius, expression (11) takes the form (reconstructed here)

ΔQ_dis = (2π − β_1) L_f ∫ q_dis(r) r dr,   (12)

where L_f is the filled channel length. The integral in (12) is calculated by Simpson's numerical integration formula, by analogy with expression (5), and the values q_dis,i and q_dis,j at the nodal points are determined by the same technique as for the counter-rotating twin-screw extruder.

Unlike in the counter-rotating extruder, the cutting profile has the shape of a segment, so TpM deformation in the δ_rs-cs, δ_rs-rs and δ_rs-b gaps (Fig. 2, b) matters little for the calculation of drive power. The error caused by neglecting deformation in these gaps can be reduced by integrating the dissipation intensity over the entire volume of the C-shaped section filled with processed material (that is, under the condition β_1 = 0).

1. Results of modeling a counter-rotating twin-screw extruder

Let us analyze the calculated results for processing TpM in a counter-rotating twin-screw extruder with a screw diameter of 125 mm and a working length of 30D, at a productivity of 500 kg/h and a screw speed of 50 rpm. Calculations were performed for two options: at a given temperature of the screw and barrel surfaces, and in the absence of heat exchange with the screw and barrel surfaces (the adiabatic mode). The exponent n of the power-law rheological equation was taken as 0.3 and 1.0.

Fig. 5 shows the temperature versus the dimensionless channel height y/h for three cross sections, at the first, fifteenth and thirtieth turns of the screws, for n = 1 and n = 0.3 in the adiabatic mode. Fig. 5 also shows that the TpM heats more intensely near the surface of each screw, and the temperature inhomogeneity of the TpM can reach 30...50 °C. At the same time, the processing temperature level decreases as the deviation from Newtonian behavior increases, owing to a decrease in the intensity of melt circulation.

Fig. 6 shows the temperature change along the dimensionless channel height y/h under boundary conditions of the first kind (given temperatures of the working surfaces of the screws and barrel) for the same screw sections. It follows from Fig. 6 that, to maintain the given temperature over the first turns, the TpM must first be heated and the screw and barrel surfaces subsequently cooled.

The need for such a thermal regime follows from Figs. 7 and 8, which show the heat flow q_b that must be supplied to or removed from the surfaces of the barrel and of each screw along the length of the extruder. These curves can be used to determine and select the required temperature regime for processing. The screw geometry causes jumps in the heat flows at the 5th, 13th and 19th turns.

Fig. 9 shows the TpM temperature along the dimensionless channel height y/h for three cross sections at the beginning, middle and end of the screws. The analysis was performed for a productivity of 200 kg/h, the adiabatic processing mode, and two values of the equivalent thermal conductivity [5], normal and doubled, that is, for different mixing regimes. Fig.
9 shows that the temperature values are rather close to one another at certain places along the extruder length for the different values of the equivalent thermal conductivity. Thus, the temperature fields are determined by the intensity of dissipation at particular points in the TpM volume. The similarity of the temperature fields calculated for the adiabatic mode confirms this as well (Fig. 10). Note that a considerable amount of heat must be supplied to the screw and barrel over the first turns to maintain a given temperature regime, which is not always possible, since the heat-exchange area of the working bodies is limited. It is therefore expedient first to perform calculations with boundary conditions of the second kind (known heat flows on the working surfaces of the screws and barrel) and then to select the surface temperatures of the working bodies from the results. Mixing elements should be used to reduce the temperature inhomogeneity of the TpM melt in the working channel of an extruder [16].

2. Results of modeling a co-rotating twin-screw extruder

The developed model was verified during the selection and refinement of processing regimes for a TpM based on high-pressure (low-density) polyethylene with a filler (aluminium hydroxide) and additives, used as a self-extinguishing electrical insulation for cable products. The numerical results were compared with experimental data obtained by granulating the composition at JSC Kyivkhimvolokno (Kiev, Ukraine) on a twin-screw extruder with screws of 83 mm diameter and a working length of 30D. The screws consisted of three-start sections with cutting pitches of 120, 90 and 60 mm and a semicircular cutting profile of radius 12, 8 and 6 mm, respectively, as well as triangular mixing cams. The experiments were carried out at a productivity of 50 kg/h and screw speeds of 40 and 25 rpm. The barrel-wall temperature was maintained by a liquid thermal-stabilization system with four autonomous zones along its length. During the experiments, the barrel wall temperature T_b, the TpM temperature at the extruder outlet, the productivity, and the screw speed were recorded continuously.

The barrel-wall temperature was measured with TKhK-259 and TKhK-539 thermoelectric converters (NSKh L, measuring range 0...400 °C); the measuring kit also included automatic potentiometers of type A-565-001-01 (accuracy class 0.15/0.05, measuring range from -50 to 800 °C, digital resolution 0.1 °C). The melt temperature at the extruder exit was measured with needle-type K1 thermoelectric converters (analogue TP174, NSKh L, measuring range from -40 to 200 °C); the measuring kit also included automatic potentiometers of type A100-N-1 (accuracy class 0.5, measuring range 0...200 °C, digital resolution 0.1 °C).

As an example, Fig. 11 shows the calculated temperature profiles along the dimensionless channel height y/h at the 7th, 14th, 21st and 28th turns for a productivity of 50 kg/h and a screw speed of 40 rpm. The thermal-stabilization system maintained the barrel-wall temperature in the zones at 160, 160, 180 and 180 °C, and these values were taken as given in the calculations. Fig.
11 shows that the temperature near the barrel surface drops sharply to the set value owing to the large heat flow removed by the thermal-stabilization system. The calculated material temperature at the extruder outlet was 220 °C. A comparison of the calculation results with the experimental data showed that the measured temperature (approximately 210 °C) was close to the average TpM temperature and exceeded the set value. This indicates that the thermal-stabilization system could not effectively remove the heat of dissipation in this processing regime.

Fig. 12 shows similar dependences obtained for the same productivity but with the screw speed reduced to 25 rpm and set barrel-wall temperatures of 140, 140, 140 and 160 °C in the respective zones. Fig. 13 shows the TpM temperature field for boundary conditions of the first kind at barrel-wall temperatures of 140, 140, 160 and 160 °C by zones. The system removes the heat needed to maintain the set wall temperature in the first three zones; the significant control amplitude indicates that the TpM temperature is generally higher than the set wall temperature. In the fourth zone, where the wall temperature was 160 °C, the amplitude of the control oscillations was smaller, since the wall temperature was closer to the TpM temperature. The calculated material temperature of 180 °C was close to the experimentally measured wall temperature in this zone.

Discussion of the results of numerical modeling of the process in a twin-screw extruder

Analysis of the numerical results shows that they agree satisfactorily with the experimental data. The discrepancy between the calculated and measured temperatures at the outlet of the co-rotating twin-screw extruder with Ø83×30D screws does not exceed 10 %, which is acceptable for engineering calculations. The measured temperatures were slightly higher than the set values because the thermal-stabilization systems of the barrel and screws could not efficiently remove the evolving heat of dissipation in the regimes studied.

The good agreement between the numerical results and the experimental data is explained by taking into account the partial filling of the initial section of the working channel with processed material, as well as by the correctness of the accepted boundary conditions (velocity and temperature). Neglecting the partial filling of the initial section would increase the calculated dissipation power and the calculated temperature of the processed material.

We also took into account the real velocities on the surfaces of the working bodies of the extruder: zero velocity on the surface of the stationary barrel and the corresponding velocities on the surfaces of the rotating screws. Together with the heat-exchange conditions on the outer surface of the barrel and the inner surfaces of the screws, this made it possible to refine the polymer temperature both at the barrel and screw surfaces and in the volume of the working channel as a whole.

We showed that mixing elements should be used in the working channel of a twin-screw extruder when processing materials whose behavior deviates markedly from Newtonian, and that the temperature fields in the volume of the processed material are determined by the intensity of energy dissipation at particular locations within it.
A limitation of the study is that the experimental verification of the adequacy of the developed models was carried out for only one extruder size. The lack of complete experimental data for other extruders did not allow a more detailed analysis of the effectiveness of the developed calculation methodology.

In addition, the dependences obtained are valid only for the analysis of twin-screw extrusion of a power-law ("exponential") liquid. Nevertheless, the proposed approach makes it possible to obtain similar dependences for polymer melts whose behavior under load is described by other rheological equations.

The developed methodology was successfully tested in the design of industrial extruders at PJSC "NPP Bolshevik" (Kiev, Ukraine) (JSSPC "Bolshevik").

Further research is planned on the grinding of polymer and elastomer wastes in screw machines [27], as well as on the processing of polymeric materials in disk, combined and cascade extruders with working bodies of various designs.

Conclusions

1. We developed models of polymer processing in co- and counter-rotating twin-screw extruders based on a generalized mathematical model of screw extrusion. The proposed models are based on the analysis of an allocated C-shaped volume bounded by one turn of the cutting of each screw and containing a certain volume of processed polymer. Such a model can describe processing with both complete and partial filling of the working channel, which is especially important for dose-fed extruders, typical of modern processing equipment. In addition, the proposed models take into account the real boundary conditions on the working surfaces of the rotating screws and the stationary barrel, which makes it possible to choose the parameters of the thermal-stabilization systems of the extruder's working elements unambiguously.

2. We studied the melting of polymer granules in the working channel of an extruder screw. The results show that mixing elements become necessary as the behavior of the processed material deviates further from Newtonian. We established that the temperature fields in the volume of the processed material are determined by the intensity of energy dissipation at particular points in the polymer volume. We also substantiated that, in contrast to single-screw extrusion, polymer processing in twin-screw extruders first requires intensive external energy supply to the screws and barrel, followed by their gradual cooling.

3. We verified the adequacy of the developed model by comparing the numerical results with experimental data on the processing of a composition based on high-pressure (low-density) polyethylene filled with aluminium hydroxide in a Ø83×30D co-rotating twin-screw extruder. The studies were carried out at an industrial productivity of 50 kg/h and screw speeds of 25 and 40 rpm.
The proposed model of twin-screw extrusion makes it possible, at the design-calculation stage, to determine the main parameters of the equipment and of the process for a given productivity and required final melt temperature, and to estimate the temperature heterogeneity of the melt. These parameters include the geometry of the working elements, the screw rotational frequency, and the required minimum drive power. At the verification-calculation stage, for a given screw geometry, the model allows the screw rotational frequency and the thermal regimes of the extruder's working elements to be determined.

Fig. 1. Scheme of the volume of processed TpM within one turn of cutting of a screw of a twin-screw extruder: 1, 5 - surfaces of TpM bounded by the corresponding volume of the neighboring screw; 2 - surface bounded by the barrel; 3, 6 - surfaces bounded by the flanks of the flights of the screw cutting; 4 - surface bounded by the screw core; 7 - longitudinal axis of the screw.

Fig. 3. Cross-section of a twin-screw extruder: D - diameter of the screw flight, m; h - channel depth (cutting depth) of each screw, m; β - conjugation angle of the screws, rad. For each turn of the screws, a doubled C-shaped volume of TpM leaves the extruder, with a cross-section in the form of a ring of area (π/4)(D^2 − (D − 2h)^2) less the area shaded in Fig. 3, and an axial length reconstructed as (s − k e) (here e and s are the flight width and the cutting pitch of a screw, m, and k is the number of starts of the screw cutting).

Fig. 4. Design model of a co-rotating twin-screw extruder: a - nature of the engagement of screws with a semicircular cutting profile; b - velocity components in a screw channel (D - flight diameter, m; s - cutting pitch, m; h_max - maximum depth of the working channel (cutting radius of each screw), m; r, L - coordinates along the cutting height and along the axis of each screw, respectively, m; φ_s - helix angle of the screw cutting, rad; w_x and w_z - velocity components along the x and z axes, m/s; W_s - circumferential velocity of each screw, m/s).

Fig. 5. Temperature change along the channel height of the screws for the adiabatic mode at n = 0.3 (solid lines) and n = 1 (dashed lines) along the length of the screws: 1 - at the beginning; 2 - in the middle; 3 - at the end of the screws; y - coordinate along the height of the working gap (y = 0...h).

Fig. 9. Melt temperature along the height of the screw channel at the beginning (1, 4), in the middle (2, 5) and at the end (3, 6) of the screw for the ordinary (1-3) and doubled (4-6) values of the equivalent thermal conductivity.

Fig. 10. Temperature field for the adiabatic mode of the extrusion process: a - for the ordinary equivalent thermal conductivity of the melt; b - for the doubled equivalent thermal conductivity.

Fig. 11. Temperature profiles of TpM along the cutting height of the screws for n_s = 40 rpm (curves 1-4 correspond to cutting turns Nos. 7, 14, 21 and 28).
The Automation of Jobs: A Threat for Employment or a Source of New Entrepreneurial Opportunities?

New and emerging technologies pose a serious challenge for the future of employment. As machines learn to accomplish increasingly complex production tasks, the concern arises that automation will wipe out a great number of jobs. This paper investigates the relationship between the risk posed by the automation of jobs and individual-level occupational mobility using a representative German household survey. It provides an overview of current trends and developments on the labor markets due to the automation of jobs. It also describes the most recent dynamics of self-employment and relates them to the risk of the automation of jobs. The results suggest that expected occupational changes such as losing a job, demotion at one's current place of employment, or starting a job in a new field are positively related to the occupation-specific risk of automation.

Recent technological progress, particularly in the field of ICT, has led to the emergence of Industry 4.0, the fourth industrial revolution, and has given rise to a debate about the future of employment. There is great concern that as technology develops further, it will become possible for machines to perform tasks at least as efficiently as the humans who are currently performing them. As a consequence, it is feared that automation will lead to a massive wipe-out of jobs. Researchers from the University of Oxford, Carl Benedikt Frey and Michael Osborne, recently arrived at the conclusion that, given the current state of technology, about 47 percent of the US labor force are in jobs that are highly likely to be replaced by machines in the next 10-20 years [Frey, Osborne, 2013, 2017]. Numerous follow-up studies generally confirmed this scenario for other countries, though they report great variations in automation risk across countries.
The aforementioned studies, however, provide estimates based on aggregate employment data, and it therefore remains unclear whether, and how far, the predicted risk of job automation is associated with occupational mobility at the individual level. Hence, the present paper aims to shed more light on this relationship by investigating whether working in an occupation with a high risk of automation affects job changes, such as the risk of losing one's job, demotion at one's current place of employment, or starting a job in a new field, among others. This paper then investigates the impact of the automation of jobs on the probability of becoming an entrepreneur. The rise of entrepreneurship observed in many developed countries over the last two decades raises many questions with regard to the drivers of this development and the quality of those start-ups. In particular, the technological progress that leads to the automation of jobs may foster start-ups created out of necessity by those people whose jobs are likely to be replaced by machines. At the same time, technological progress may lead to an increase in the level of opportunity-driven and growth-oriented start-ups. For instance, Shane [Shane, 2000] discusses how the introduction of a radical innovation such as 3D printing technology has led to the emergence of entrepreneurial opportunities in very different fields, ranging from the creation of personalized sculptures from a photo, to constructing prototype models in industrial and architectural design, to printing artificial bones and creating three-dimensional human brain models for surgical planning. Notably, some of those opportunities emerge in fields (e.g., surgery) in which machines complement rather than substitute human labor. The present empirical analysis is based on German Socio-Economic Panel data, an annual representative survey of German households containing rich information on individuals' socioeconomic backgrounds. The results suggest that the risk of the automation of jobs increases the risk of occupational changes such as losing a job, demotion at one's current place of employment, or starting a job in a new field within the next two years. At the same time, the risk of automation is negatively correlated with the probability of becoming an entrepreneur, both with and without employees. This suggests that entrepreneurs are less likely to be driven by necessity arising from the risk of job automation. Thus, the rising level of entrepreneurship in Germany is more likely to be driven by new technology creating new entrepreneurial opportunities than by technology destroying jobs.

The Labor Market in the Digital Age: Trends and Developments

This section reviews trends and developments that are currently present on the labor markets of many developed countries and are related to the current progress in the automation of tasks. In particular, we describe the phenomenon of the polarization of labor markets. We then discuss the consequences of automation for the future of employment by considering current trends in the rate of entrepreneurial activities.

Which jobs are at risk of automation?
In order to understand which jobs are at high risk of automation, it is necessary to analyze what types of tasks can be effectively and efficiently performed by computers, and in which tasks computers merely supplement human labor. The authors of [Autor et al., 2003] distinguish between two broad sets of tasks according to the extent of their vulnerability to computerization, namely routine and non-routine tasks. The group of non-routine tasks can be further divided into manual and abstract tasks. Because routine tasks, whether cognitive (e.g., performing calculations) or physical (e.g., repetitive operations in a stable environment), can be fully codified, jobs that mainly comprise routine tasks are highly susceptible to computerization. While machines outperform humans in many routine tasks, they have not yet achieved high performance levels in non-routine tasks, that is, manual and abstract tasks. Manual tasks are activities that can be easily performed by humans but require enormous computing power from machines. Examples of such tasks are manual operations in unstable, changing environments that require high adaptability and manual dexterity, as well as visual and language recognition. One should note, however, that current progress in artificial intelligence (AI) is quite impressive, and it can be expected that machines will learn to perform these tasks even better in the near future (see [Brynjolfsson, McAfee, 2014] for further examples). Still, humans currently perform these tasks at a much lower cost, which is why the risk of computerization for such jobs is relatively low. Last but not least, abstract tasks require creativity, persuasion, and problem-solving skills; in such capacities, computers tend to complement highly educated workers. Given this state of technology, the major trend currently observed in various developed countries is the polarization of labor markets [Autor, 2015a; Autor, Dorn, 2013; Goos et al., 2014]. Job polarization refers to the growth of employment at opposite ends of the occupational skill spectrum: on the one hand, an increase in highly paid jobs that require high levels of education and mostly comprise abstract tasks; on the other, growth in low-pay jobs that comprise manual tasks performed by people with lower levels of education. Recently, in a study of the future of employment in the US, Frey and Osborne [Frey, Osborne, 2013, 2017] arrived at the conclusion that about 47 percent of the US labor force is currently working in occupations with a particularly high risk of being computerized in the next 10-20 years. Those high-risk occupations mainly comprise transportation and logistics occupations, office and administrative support workers, and production occupations. The OECD's Directorate for Employment, Labour and Social Affairs commissioned a similar study for OECD countries [Arntz et al., 2016], whose authors concluded that the risk of automation might have been overestimated. By contrast, they find that, on average across the 21 OECD countries, only about 9% of jobs are automatable, although there is great variation between countries. The highest risk of automation was found for Germany and Austria (12%) and the lowest for Korea and Estonia (6%). The study by Chang and Huynh [Chang, Huynh, 2016] of ASEAN countries, however, reports that about 56 percent of employment is at high risk of displacement
over the next decade or two. The study by Sorgner, Bode, and Krieger-Boden [Sorgner et al., 2017a] of G20 countries provides evidence of gender-specific differences in the effects of automation on employment, which also vary strongly across countries. This strong cross-country variation may reflect general differences in the industrial structures of these economies. For instance, knowledge-intensive sectors have jobs that rely heavily on abstract tasks, while many jobs in manufacturing sectors are routine-based and thus susceptible to automation. While the study by Frey and Osborne [Frey, Osborne, 2013, 2017] for the US and similar studies for other countries focused on aggregate employment data, it is not clear how the risk of automation and computerization affects occupational changes at the level of individuals, such as transitions from paid employment into unemployment or self-employment. Hence, this paper's aim is to shed more light on this issue by investigating micro-level data.

The automation of jobs and the rise of entrepreneurship

A pronounced development that many developed countries are currently experiencing is a fundamental shift from a managed economy to an entrepreneurial economy. The term 'managed economy' refers to the organization of market economies after WWII, characterized by the prevalence of economies of scale, standard production routines, high levels of specialization, and relatively low levels of uncertainty in the manufacturing process. In contrast, the entrepreneurial economy is predominantly based on pronounced start-up activity, innovation that occurs in entrepreneurial organizations, flexible production and flexible labor markets, and relatively high levels of uncertainty [Audretsch, Thurik, 2000, 2001]. Moreover, new business formation, while largely neglected by policy makers in the managed economy, starts to play an increasingly important role in the entrepreneurial economy with regard to its direct effects, such as job creation [Acs, 2011], and, more importantly, its indirect effects. Concerning the latter, start-ups represent an important challenge for incumbent firms and thus force them to perform more efficiently [Fritsch, 2011]. Last but not least, new entrants may create new markets by introducing radical innovations [Baumol, 2004]. This shift towards a more entrepreneurial economy is well reflected in Figure 1, which shows the dynamics of self-employment rates in Germany between 1991 and 2012. The self-employment rate grew steadily from about 8 percent at the beginning of the observation period to about 11.5 percent at its end. It is worth noting that this development cannot be entirely attributed to the former GDR's transition to a market economy, although German reunification contributed significantly to the rise of the overall self-employment rate in Germany. In particular, the self-employment rate in East Germany converged with the level of self-employment activities in West Germany around 2004 and even exceeded it thereafter. Nevertheless, the rise of entrepreneurial activities can also be observed in West German regions. This evidence raises the question of what drives this fundamental move towards a more entrepreneurial society. The rising level of entrepreneurial activities may simply reflect various structural changes in society. For instance, changes in the socio-demographic characteristics of the population, such as the age
structure, the increased rate of female participation in the labor market, and a higher average level of education, might have led to a rise in entrepreneurship [Fritsch et al., 2015]. Moreover, the incentives to become an entrepreneur may have changed in response to a variety of policy measures designed to promote entrepreneurship that have been implemented over approximately the last two decades. For example, in Germany, a whole range of public policy measures has been implemented promoting, for instance, start-ups by unemployed persons [Caliendo, Kritikos, 2010] and by students as well as highly educated staff at universities and other public research institutes ('EXIST'). Some of those programs aim to reduce start-up barriers for women related to human and financial capital [Welter, 2006]. Those policy measures may have at least partially shaped a more pro-entrepreneurial attitude among the population and a stronger awareness of entrepreneurship as an alternative career option, thus contributing to the growth of entrepreneurship. Last but not least, technological progress, in particular achievements in ICT that gave rise to the fourth industrial revolution, may be responsible for a great number of entrepreneurial opportunities and the shift towards an entrepreneurial economy in many developed countries [Audretsch, Thurik, 2000]. Remarkably, Figure 2 demonstrates that the rise of self-employment rates in Germany was predominantly due to the rise in self-employment without employees (solo self-employment). While the solo self-employment rate increased from about 3.5 percent in 1991 to 6 percent in 2012, the level of self-employment with employees increased only negligibly. The businesses of the solo self-employed have often been regarded as secondary (low-quality) start-ups, as they generally do not create much value in terms of innovation, employment growth, and wealth generation [Shane, 2009]. This recent rise in the level of self-employment is quite remarkable and requires investigation, as there is not much evidence concerning the reasons behind this development or the quality of such start-ups. Fritsch et al.
[Fritsch et al., 2015] conducted a decomposition analysis of self-employment rates in Germany over time in order to determine the major drivers of such a pronounced, radical change. They provide evidence that demographic developments, such as the shift towards employment in the service sector and a larger share of the population holding tertiary degrees, are the major drivers of the increase in the overall level of self-employment. While these factors explain most of the development in self-employment with employees, they could explain less than half of the much more dramatic increase in self-employment without employees. In particular, it remains unclear whether, and how far, the rise of solo self-employment was triggered by the technological progress that has led to the automation of jobs. It can be assumed that people in jobs with a relatively high risk of automation may be more likely to set up businesses out of necessity. Furthermore, it can be expected that such businesses are less likely to be growth-oriented, as they have been created with the primary aim of employing the business owner [Shane, 2009]. Thus, the automation of human labor may indeed drive the levels of solo self-employment "out of necessity". On the contrary, technological progress may lead to a rise in opportunity-driven, growth-oriented entrepreneurship in fields that are less susceptible to automation and are characterized by creative and abstract tasks. Hence, another aim of this paper is to investigate the relationship between the risk of job automation and an individual's likelihood of becoming an entrepreneur.

Data source

The empirical analysis is based on German Socio-Economic Panel (SOEP) data, an annual representative household survey conducted by the German Institute for Economic Research (DIW). It includes detailed information about the socioeconomic situations of approximately 22,000 individuals living in Germany [Wagner et al., 2007]. For the purposes of the present analysis, data for the period from 2005 to 2013 were used.

Dependent variables

Currently employed individuals are asked in the SOEP to assess the probability of occupational changes over the next two years on a scale from 0 to 100 in steps of 10. The questions comprise various types of occupational changes related to losing or switching a job, occupational promotion or demotion, expected wage or salary increases, and so on. Many of those changes might occur as a consequence of the progress of computerization and/or the automation of jobs. The precise wording of the questions used in the present study is as follows: "How likely is it that you will experience the following career changes within the next two years? Please estimate the probability on a scale of 0 to 100, with 0 meaning that such a change definitely will not take place, and 100 meaning that such a change definitely will take place."
• "Will you lose your job?"
• "Will you stop working in your current field and start working in a different one?"
• "Will you be demoted at your current place of employment?"
• "Will you attend courses or seminars to obtain additional training or qualifications?"
• "Will you receive a salary or wage increase beyond the collectively negotiated wage increases?"
• "Will you start working on a self-employed and/or freelance basis?"In addition to the anticipated occupational changes, in our further analysis we study the actual transitions from paid employment to unemployment and self-employment.To this end, two dependent variables were constructed as binary variables that equal to one if a respondent's observed employment status in time period t is paid employment and his or her employment status two years later is unemployment (self- People's Entrepreneurial Activity: Sources and Patterns employment).In addition, for respondents who switched from paid employment to self-employment, we distinguish between those who did so on a solo basis and those who are self-employed with employees, in order to partially account for necessity-and opportunity-based motives.5 Independent variable The variable of interest is the occupation-specific probability of automation, which indicates the level of risk for a particular occupation to be automated or computerized in the next one to two decades.This variable is based upon and was adapted from the study by Frey and Osborne [Frey, Osborne, 2013, 2017], who estimated the automation/computerization probabilities for 702 occupations according to the US occupational classification system, O*Net.Together with a group of experts in machine learning and robotics, Frey and Osborne were able to identify a set of occupations that they labeled with 1, meaning a 100 percent probability of the occupation being computerized in the next one to two decades, or 0 if the risk of computerization was considered absent. 6In the next step, they identified technological bottlenecks for computerization, that is, occupation-specific tasks that represent a challenge for machines.In particular, they identified three types of such bottlenecks, namely, social intelligence, creativity, and manipulation (perception).While social intelligence and creativity require high abilities and represent abstract tasks, manipulation and perception are mostly manual tasks (such as manual dexterity or tasks that are related to an unstructured work environment) that can be easily performed by humans, but represent a significant challenge for robots and machines. 7Finally, on the basis of the labeled data set, the researchers developed the optimal predicting algorithm, which they then used to estimate the probabilities of computerization for the remaining occupations based on the extent to which they are composed of tasks that pose bottlenecks to computerization. Frey and Osborne [Frey, Osborne, 2013, 2017] provide the estimated probabilities of computerization for the 6-digit U.S. System of Occupational Classification (2010 SOC).Hence, they need to be converted to the 4-digit ISCO88 occupations that are available in SOEP in order to match them with other individual-level information.For this purpose, the algorithm created by the US Bureau of Labor Statistics was used. Control variables A wide set of control variables is considered, which can affect an individual's occupational mobility.In particular, information is available about the number of years that a respondent spent in formal education, tenure with his or her current employer, experienced years of unemployment, socio-demographic characteristics including age, gender, nationality, and children in the household.Moreover, SOEP data include short item scales that measure the Big Five dimensions of the personality [Costa, McCrae, 1992]. 
In particular, the survey includes 15 items, three for each of the five traits, which have been shown to accurately replicate the results of the more extensive 25-item Big Five inventory [Gerlitz, Schupp, 2005]. Psychological personality characteristics may affect an individual's willingness to change occupations in general, for instance if a person has a strong preference for variety in his or her occupational environment [Åstebro, Thompson, 2011]. Another reason for including personality traits in the model is that they may, to a certain extent, capture unobserved abilities beyond those measured by the level of formal education, which may affect both the choice of a certain occupation and the probability of occupational changes. Table 1 defines the variables used in the analysis, describes how certain traits were measured, and provides descriptive statistics for the independent variables. According to this table, the average respondent in the sample is about 42 years old and has received about 12.2 years of formal education. The average respondent has been at his or her current job for about 11 years and has experienced 0.9 years of unemployment in the past.

Method

The regression method used in the empirical analysis of expected occupational changes accounts for the peculiarities of the dependent variable, which is the probability of an occupational change occurring within the next two years. Since the dependent variable is bounded between zero and one, the model can be estimated by means of the fractional response model (FRM) proposed by Papke and Wooldridge [Papke, Wooldridge, 1996]. The analysis of transitions from paid employment to unemployment and self-employment is conducted by means of probit regressions, which account for the binary nature of the dependent variable that takes a value of 1 if an occupational change has occurred and 0 otherwise.

Descriptive results

According to Table 2, the highest average probability of an occupational change within the next two years reported by currently employed respondents related to the risk of being demoted at one's current place of employment (about 46 percent), followed by the probability of acquiring additional qualifications (about 39 percent) and of losing a job (about 21 percent). Interestingly, the lowest average probability of occupational change is that of a shift to self-employment (about 8 percent). Nevertheless, it should be noted that this number is about eight times higher than the yearly start-up rate in Germany, which is only about 1 percent [Fritsch et al., 2012]. This indicates a rather high willingness among the German population to set up a business, a potential that apparently cannot be realized to its full extent. Table 3 presents the probability of occupational changes for respondents in occupations that are differently affected by the risk of automation. In particular, we distinguish between three groups of automation risk: low (less than 30 percent), medium (30 to 70 percent), and high (more than 70 percent).
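Before turning to the results, a minimal sketch of the estimation approach described under Method may be useful. The Papke-Wooldridge fractional response model can be estimated as a binomial quasi-likelihood GLM with a logit link and robust standard errors. The sketch below uses the Python statsmodels package on synthetic stand-in data; the variable names and the data-generating process are illustrative assumptions, not the SOEP variables themselves.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the SOEP variables used in the paper (illustrative only)
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "auto_risk": rng.uniform(0, 1, n),        # occupation-specific automation risk
    "educ_years": rng.normal(12.2, 2.5, n),   # years of formal education
    "tenure": rng.exponential(11, n),         # years with current employer
})
# Self-reported probability of losing one's job, rescaled from 0..100 to 0..1
df["p_job_loss"] = np.clip(
    0.15 + 0.4 * df["auto_risk"] - 0.2 * df["auto_risk"]**2
    + rng.normal(0, 0.1, n), 0, 1)

# The risk enters together with its squared term to allow a non-linear effect
df["auto_risk_sq"] = df["auto_risk"]**2
X = sm.add_constant(df[["auto_risk", "auto_risk_sq", "educ_years", "tenure"]])

# Papke-Wooldridge fractional logit: binomial GLM quasi-MLE on a fractional
# outcome with heteroskedasticity-robust (HC1) standard errors
# (statsmodels may warn about non-integer binomial outcomes; the QMLE is valid)
frm = sm.GLM(df["p_job_loss"], X, family=sm.families.Binomial()).fit(cov_type="HC1")
print(frm.summary())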
The descriptive evidence in Table 3 suggests that the probability of occupational changes increases with the rising risk of computerization for the categories of losing a job, starting work in a new field, and demotion at a current job. Moreover, a higher risk of computerization of an occupation is associated with, on average, a lower probability of acquiring additional qualifications. The same pattern is observed for the probability of becoming self-employed. Last but not least, there seems to be a non-linear relationship between the risk of automation and the probability of an increase in wages. However, this relationship may be driven by other factors, such as previous labor market experience or the level of formal education, among others. Hence, it is investigated in the next section in a multivariate analysis, in which we control for a wide set of socio-demographic characteristics that may influence the result. (Table 1 reports the definitions, measurement, means, and standard deviations of the independent variables, including the occupation-specific probability of automation, defined as the risk of a certain occupation being computerized in the next 10-20 years and adapted from [Frey, Osborne, 2013, 2017], as well as the locus-of-control items.)

The risk of computerization and perceived occupational changes

The results of a multivariate analysis of the relationship between the occupation-specific probability of computerization and the self-reported probability of occupational changes within the next two years are reported in Table 4. In order to test for a possible non-linear relationship, the occupation-specific probability of computerization enters the model together with its squared term. The results suggest a statistically significant reversed U-shaped relationship between the risk of computerization and the self-reported probability of losing one's current paid employment (Column I), as well as the likelihood of giving up one's current occupational field and starting a job in a completely new occupation (Column II). This result is quite surprising, because it means that the risk of occupational change increases with the rising risk of automation only up to a certain threshold level. People in occupations with a very high risk of automation are less likely than people in occupations with a medium risk to expect occupational changes related to losing their job or starting a job in a completely new field within the next two years. One possible reason for this finding may be existing labor market regulations protecting employees against dismissal. Alternatively, employees in occupations with a high risk of automation may be over-optimistic about the future of their occupations and thus underestimate the risk of losing their jobs.
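A small computational aside on the reversed U-shape: when the automation risk enters the index function with a positive linear coefficient and a negative squared-term coefficient, the expected probability peaks at the standard quadratic turning point. The coefficients below are purely hypothetical placeholders, not estimates from Table 4.

# Turning point of a reversed U-shaped effect: hypothetical coefficients, not estimates
b1, b2 = 2.1, -1.8          # linear and squared terms on automation risk (assumed)
turning_point = -b1 / (2 * b2)
print(f"Expected job-loss risk peaks at an automation risk of {turning_point:.2f}")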
Next, labor market polarization implies that the highest risk of automation applies to middle-skill workers in routine jobs. Moreover, these workers will be prone to downward occupational mobility when they are displaced by machines, unless they possess or acquire skills that are not susceptible to computerization, such as creativity or social intelligence. Hence, it can be expected that the higher the risk of automation of a job, the higher the likelihood of being demoted at one's current place of employment. The results in Column IV of Table 4 support this hypothesis: there is a statistically significant (albeit at the 10% level) effect of occupational automation risk on the probability of demotion at one's current place of employment within the next two years. At the same time, people in occupations with the highest and the lowest risks of automation are significantly more likely than people in jobs with a medium risk of automation to expect to gain additional qualifications in the near future (Column V). This observation may indicate a moderate risk of downward occupational mobility in the future, as people who urgently require additional skills in order to protect themselves from the negative effects of automation appear likely to acquire them. Similarly, gaining additional qualifications may also be of high importance for people in occupations with a low risk of automation, in which computers strongly complement human labor. Moreover, a higher risk of automation is associated with a lower probability of wage increases for individuals in those occupations (Column VI).

Finally, an interesting result was obtained with regard to the probability of setting up one's own business: respondents in occupations with a low risk of automation are significantly more likely than people in jobs with a high risk of automation to see themselves moving toward self-employment in the near future (Column VII). This is an important result that points toward opportunity-driven nascent entrepreneurship, since the willingness to set up one's own business is more pronounced among workers in jobs that are relatively secure in terms of their susceptibility to automation. Moreover, jobs with a low risk of automation include tasks requiring creativity, social interaction, and abstract thinking, which are critically important for entrepreneurs.

As regards the effects of the control variables, people with high and low levels of education are significantly less likely to expect occupational changes compared to individuals with middling levels of education. People with longer tenure, males, and those with children have a lower probability of occupational changes. With regard to personality traits, people with higher levels of conscientiousness, agreeableness, and an internal locus of control are less likely to expect occupational changes in the near future. The results for other personality traits are mixed. For instance, people with a greater willingness to take risks report a higher probability of starting a job in a new field, obtaining additional qualifications, expecting a wage increase, and becoming self-employed. On the contrary, less risk-averse people report a lower probability of demotion at their current place of employment.
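The transition analysis in the next subsection relies on probit models for binary two-year transition indicators. A minimal statsmodels sketch on synthetic data follows; the outcome construction and covariates are illustrative assumptions rather than the actual SOEP variables.

import numpy as np
import statsmodels.api as sm

# Synthetic binary outcome: 1 if a paid employee in year t is unemployed in t+2
rng = np.random.default_rng(1)
n = 2000
auto_risk = rng.uniform(0, 1, n)
tenure = rng.exponential(11, n)
latent = -1.5 + 1.0 * auto_risk - 0.03 * tenure + rng.normal(0, 1, n)
to_unemployment = (latent > 0).astype(int)

X = sm.add_constant(np.column_stack([auto_risk, tenure]))
probit = sm.Probit(to_unemployment, X).fit(disp=False)
print(probit.summary())
# Average marginal effect of automation risk on the transition probability
print(probit.get_margeff().summary())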
Transitions to self-employment and unemployment

While the previous section presented results for expected occupational changes, this section focuses on actual occupational transitions from paid employment to unemployment and self-employment within the next two years. The first column of Table 5 suggests that a high risk of automation in an occupation makes unemployment within the next two years more likely. This relationship is linear, meaning that there is no decrease in the probability of unemployment for workers in jobs with the highest risk of automation. Hence, it is likely that those workers underestimate the risk of losing their jobs, as shown in Model I of Table 4. Moreover, there is a statistically significant negative relationship between the risk of automation and the transition from paid employment to self-employment. In order to provide a more differentiated picture of the relationship between automation risk and switches to self-employment, we additionally distinguish between self-employment with and without employees. The group of the solo self-employed may contain necessity-driven entrepreneurs, that is, individuals who were at high risk of losing their jobs due to, for example, automation. By contrast, the group of the self-employed with employees is likely to contain opportunity-driven and growth-oriented entrepreneurs. Hence, one could expect automation risk to affect the two types of entrepreneurship differently. However, the results in Columns III and IV of Table 5 suggest that both types of entrepreneurs are likely to come from low-risk occupations, although the effect is smaller (and statistically significant only at the 10% level) for the solo self-employed.

Conclusions

New and emerging technologies will lead to radical transformations of labor markets. As machines learn to accomplish not only routine tasks but also activities that require abstract skills and the ability to work in an unstructured environment, concerns arise that automation will wipe out a great number of jobs. This paper provided new evidence on the impact of the automation of jobs on individual-level occupational mobility.

In particular, it shows that the expected probability of occupational changes rises with the occupation-specific risk of automation. This is particularly pronounced for occupational changes such as losing a job, demotion at one's current place of employment, or starting a new job in a different field. According to the respondents' own assessments, these changes are likely to occur within the next two years. This is quite in line with the prediction by Frey and Osborne [Frey, Osborne, 2013, 2017], who concluded that the current state of technology is such that it will be possible in the next 10-20 years to replace about half of the jobs in the US with machines.

An important question then concerns which additional skills workers in jobs at high risk of automation should acquire in order to make themselves less susceptible to the negative consequences of automation.
The empirical results in this paper provide evidence that workers in high-risk occupations do indeed intend to gain additional qualifications and training in the near future. However, no information was available with regard to the type of training they were more likely to choose. Hence, more research is needed in order to develop educational strategies that make workers less susceptible to automation. Moreover, this paper investigated the relationship between the automation of jobs and an individual's propensity to become an entrepreneur. The causes of the recent rise of entrepreneurial activities in many innovation-driven economies, including Germany, and in particular the rise of self-employment without employees, are still not entirely clear. There is also concern regarding the quality of such businesses without employees. For instance, a high risk of automation for certain jobs may drive entrepreneurial activities out of necessity. However, the result of this paper is that people in occupations at high risk of automation are significantly less likely than people in low-risk occupations to become nascent entrepreneurs and make the transition from paid employment to self-employment. This result holds for the self-employed both with and without employees. It indicates that new technologies are likely to create new entrepreneurial opportunities in occupations that consist of tasks that are less likely to be computerized in the near future. Thus, the rising level of entrepreneurial activities is more likely to be driven by new technologies creating many new opportunities than by technology making jobs obsolete over the course of automation.

Figure 1. Dynamics of self-employment rates in Germany, 1991-2012. Source: own calculations based on the German Micro-Census.
Figure 2. Dynamics of self-employment with and without employees in Germany, 1991-2012 (in thousands).
Table 2. Descriptive statistics. Source: author's calculations.
Table 3. Probability of occupational changes in the next 2 years by the level of occupation-specific automation risk. Source: author's calculations.
Fluorescently Labelled Silica Coated Gold Nanoparticles as Fiducial Markers for Correlative Light and Electron Microscopy

In this work, gold nanoparticles coated with a fluorescently labelled (rhodamine B) silica shell are presented as fiducial markers for correlative light and electron microscopy (CLEM). The synthesis of the particles is optimized to obtain homogeneous, spherical core-shell particles of arbitrary size. Next, particles labelled with different fluorophore densities are characterized to determine under which conditions bright and (photo)stable particles can be obtained. 2D and 3D CLEM examples are presented where optimized particles are used for correlation. In the 2D example, fiducials are added to a cryosection of cells, whereas in the 3D example cells are imaged after endocytosis of the fiducials. Both examples demonstrate that the particles are clearly visible in both modalities and can be used for correlation. Additionally, the recognizable core-shell structure of the fiducials proves to be very powerful in electron microscopy: it makes it possible to irrefutably identify the particles and makes it easy to accurately determine the center of the fiducials.

The field of correlative light and electron microscopy, or CLEM, has expanded rapidly during the last decade. Especially in biology it turns out to be very useful to combine these two techniques. Light microscopy or fluorescence microscopy (FM) is used to visualize, localize and track specific fluorescent molecules in cells over large areas with high sensitivity, while electron microscopy (EM) provides high resolution ultrastructural information of cells and materials 1,2 . This opens up the possibility to visualize rare transient events or specific cells within complex tissues 3,4 . For the best results in CLEM experiments, data from the different modalities should be registered with the highest possible precision. This is complicated by the vastly different fields of view of FM and EM, as well as the different contrast mechanisms of these techniques. FM requires bright and stable fluorophores, while EM relies on differences in electron density for contrast, and frequently requires heavy metal staining to visualize biological structures. Since fluorescent probes (i.e. molecules or proteins) are typically not electron dense, fluorescent labels can generally not be used for correlation. Particles visible in both modalities (fiducial markers) can be used to overcome this problem. The viability of this approach has been demonstrated in literature by using fluorescent latex beads [5][6][7] or quantum dots [8][9][10] . However, a shared problem of these candidate particles is their relatively low EM contrast, making visualization and localization in heavily EM stained samples difficult or even impossible. An alternative approach to register data between modalities is via a double labelling procedure. Here, proteins of interest are labelled with a fluorescent probe, followed by labelling with antibodies or protein A conjugated with colloidal gold 11 . A disadvantage of this approach is that correlation is indirect and based on the assumption that both labels fully colocalise. Despite great successes achieved by this approach, Miles et al. 12 recently demonstrated that this assumption does not always hold true, thereby stressing the importance of finding a more direct way of registering FM and EM data. In this work, nanocomposite core-shell particles based on a gold core and a fluorescently labelled silica shell (Fig.
1) are deployed as fiducial markers. The gold core provides contrast for EM, and fluorophores covalently incorporated in the silica shell provide contrast for FM. Rhodamine B is chosen as fluorophore because it was demonstrated by Karreman et al. 13 that rhodamine-like fluorophores behave well under the dry and vacuum conditions encountered in EM. Using a red emitting fluorophore is also advantageous because excitation at longer wavelengths results in reduced autofluorescence 14 . To obtain the fiducials, first, an optimized synthesis of the nanocomposite particles to obtain spherical and highly monodisperse particles of arbitrary size is presented. Next, a thorough study is performed to optimize the fluorophore labelling density within the silica shell to obtain bright and (photo)stable particles. Finally, the particles are tested as fiducials in a 2D and a 3D CLEM experiment.

Results

Optimization of fluorophore density. A series of ~90 nm diameter particles with relative dye concentrations of 0 to 30 were synthesized, see Table 1 and SI-1. Transmission electron microscopy (TEM) measurements were carried out to determine particle sizes at different stages of the synthesis. Representative images of particles are shown in Fig. 2. The images illustrate that the particles are successfully coated with a very thin silica layer after the first growth step (a). The silica layer becomes thicker and more homogeneous after growth of the rhodamine B labelled silica layer (b) and the second stabilization layer (c). It is important to note that silica grows selectively onto the existing particles, i.e. no secondary nucleation takes place. Average particle diameters and standard deviations at the different stages of the synthesis were determined from TEM images. Average particle sizes are almost identical for all samples at the different stages of the reaction, justifying the assumption that the volume of rhodamine B labelled silica per particle is similar for all samples. By ensuring that the number of particles is the same in all reactions, it is ensured that we are truly studying fluorophore labelling density effects. To calculate the average fluorophore separations, fluorophore labelling efficiencies were determined. This was performed via absorption measurements after dissolution of the silica shell of the particles, as described by Imhof et al. 15 . Next, the average separation between fluorophores was calculated as the cube root of the volume available per fluorophore. This value increases from 7.7 nm for the highest to 28.5 nm for the lowest labelling density (see Table 1). The estimated average number of fluorophores per particle (and its corresponding error) ranges from 12 for the lowest to 558 for the highest labelling density and is also included in this table.
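The cube-root estimate above is easy to reproduce. The sketch below uses assumed inner and outer radii for the labelled shell (the exact per-sample values live in Table 1, which is not reproduced here); with these illustrative radii, the quoted 7.7-28.5 nm separation range comes out approximately right.

```python
# Back-of-the-envelope check of the average fluorophore separation:
# the cube root of the labelled-shell volume available per fluorophore.
# Shell radii are assumed for illustration, not taken from Table 1.
import math

def shell_volume(r_outer_nm, r_inner_nm):
    """Volume (nm^3) of the rhodamine-B-labelled silica shell."""
    return 4.0 / 3.0 * math.pi * (r_outer_nm**3 - r_inner_nm**3)

v_shell = shell_volume(r_outer_nm=40.0, r_inner_nm=12.0)  # assumed radii

for n_fluorophores in (12, 558):  # lowest and highest densities quoted above
    separation = (v_shell / n_fluorophores) ** (1.0 / 3.0)
    print(f"N = {n_fluorophores:4d} -> average separation ~ {separation:.1f} nm")
# Prints roughly 27.9 nm and 7.8 nm, close to the quoted 28.5 nm and 7.7 nm.
```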
Spectral measurements and radiative decay curves. To study fluorophore density effects, excitation and emission spectra and radiative decay curves were recorded (see Fig. 3a,b). Both spectra exhibit a small blue shift with increasing fluorophore density that is accompanied by an increase in the height of the shoulder around 520 nm in the excitation spectrum. Radiative (or fluorescence) decay curves were measured using the time-correlated single-photon counting technique after excitation with a pulsed laser 16 . These curves reveal faster decays with increasing labelling density, which is indicative of fluorescence quenching. The radiative decay of particles labelled with the lowest dye labelling density, [Dye] = 2.5, is already slightly faster than the decay of the fluorophore, rhodamine B isothiocyanate (RITC), in ethanol. This can be attributed to a change in the local medium of the fluorophores, silica versus ethanol, and to the APTES-dye coupling. Similar observations at increasing fluorophore densities in solid matrices were made by Genovese et al. 17 (rhodamine B in pluronic silica) and Imhof et al. 15 (fluorescein in Stöber silica), and for fluorophores on antibodies by Szabó et al. 18 . An explanation for these observations can be found by taking into account concentration effects, including self-quenching. Increasing the fluorophore density results in the formation of an increased number of dimers 19,20 or other species acting as quenching centers. Additionally, resonance energy transfer (homo-FRET) between fluorophores becomes more efficient because the average separation between fluorophores shortens 16 . This combination of energy transfer and quenching successfully explains the effects of fluorophore concentration on the fluorescence quantum yield of fluorophores in solution 21,22 and can also be applied to the work presented here. When the fluorophore density in the particles is increased, energy transfer between fluorophores becomes possible and the number of quenching centers within the particles increases. Energy transfer allows the excited state to migrate within the particle, making it possible for the excitation to move from an unquenched fluorophore to a nearby quenching center, thereby contributing to quenching of the fluorescence.

Single particle intensity and (photo)chemical stability. Widefield fluorescence microscopy measurements were performed to determine the single particle intensity and photostability of the particles. In Fig. 4a the average single particle intensity is plotted as a function of the relative dye concentration. It can be seen that initially the single particle intensity increases with increasing dye concentration. After this first increase, the single particle intensity remains roughly constant, and eventually a small drop in intensity is observed. This optimum can be explained by the counterbalance between the increasing number of fluorophores and the increase in self-quenching, which becomes more evident when the single particle intensity is plotted as a function of the average fluorophore separation, see Fig. 4b. At large separations (>15 nm) the single particle intensity increases when the number of fluorophores per particle increases (i.e. the average separation shortens) because the fluorophores do not sense each other. Next, a roughly constant regime between 10-15 nm with an optimum around 12 nm is observed, reflecting the same counterbalance between fluorophore number and self-quenching. Finally, when average separations drop below 10 nm, self-quenching becomes dominant, which results in the aforementioned drop in intensity. The photostability of the particles was studied by measuring their photobleaching behavior. From the photobleaching curves in Fig. 3c it becomes clear that, generally speaking, the intensity loss is reduced when the fluorophore density increases. This can be explained by the shortening of the radiative decay time at higher fluorophore densities: the fluorophores spend less time in the excited state, thereby decreasing the bleach rate. In addition, photobleaching reduces the effective dye concentration, which decreases the effect of concentration quenching, thereby counterbalancing bleaching.
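The separation dependence invoked in the quenching discussion above is the standard Förster picture. As a reminder of the textbook form (not something derived in this paper), the rate of resonance energy transfer from an excited donor to an acceptor at distance $r$ is

$$k_{\mathrm{FRET}}(r) = \frac{1}{\tau_D}\left(\frac{R_0}{r}\right)^{6},$$

where $\tau_D$ is the donor excited-state lifetime in the absence of transfer and $R_0$ is the Förster radius, typically a few nanometres for rhodamine-type dyes. This sixth-power fall-off is consistent with the observation that quenching-related effects only become prominent once the average fluorophore separation approaches roughly 10 nm.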
Another observation made throughout the experiments was that particles with a relative dye concentration above 15 tend to cluster over time (gray area, Fig. 4). This clustering can result from a reduction of the negative zeta potential caused by the incorporation of positively charged fluorophores and amine groups. For CLEM applications non-clustered particles are preferred; therefore these high concentrations should be avoided. Based on the results included in this section, a relative fluorophore concentration of 10 (separation 11.7 nm) was chosen as the optimum labelling density. The average single particle intensity is at its maximum around this concentration. In terms of bleaching, it might be desirable to go for a higher labelling density. However, because of particle stability, a relative fluorophore concentration well below 15 (separation 9.2 nm) is desired.

2D CLEM experiment: Widefield and TEM imaging of fiducials on thin cryosections. The nanocomposite particles with the optimum dye concentration were first tested as fiducials in a 2D CLEM experiment. This experiment was performed by adding the particles to a cryosection of cells on a TEM grid. Fig. 5f clearly shows the core-shell structure of the fiducials. This core-shell structure was also apparent in the lower magnification TEM image in Fig. 5 and proved to be very useful to identify the fiducials; additional images are included in the supplementary information (SI-3). The core-shell structure also proved to be very useful to accurately determine the center of the fiducials, and having a distinguishable, well-defined structure opens up possibilities for automatic registration of the particles. It should be noted that the EM magnification should be high enough to observe the core-shell structure of the fiducials. At too low magnifications, the core-shell structure is no longer visible, which complicates discriminating fiducials from dirt and automatic registration of the fiducials.

3D CLEM experiment: Endocytosed nanoparticles as fiducials for correlative confocal fluorescence and 3D electron microscopy. In biological samples, correlating regions of interest (ROIs) between fluorescence and electron imaging can prove challenging due to the heterogeneous content of the cell and the limited resolution of fluorescence imaging. Several organelles can be located within the same fluorescent spot, causing the risk of misidentification. The use of fiducials improves the registration accuracy between FM and EM and can help alleviate this issue, especially when imaging in 3D. Due to their unique gold core silica shell architecture, we hypothesized that the nanoparticles could function as well-defined fiducials to correlate fluorescence and 3D electron imaging data. Previous research on silica particles has demonstrated that, under the right conditions, cells readily take up silica particles through endocytosis without cytotoxic effects [23][24][25][26], indicating that endocytosed particles could serve as a useful and functional fiducial. To examine the viability of the nanoparticles as 3D fiducials, we incubated HeLa cells with the nanoparticles diluted in medium, allowing uptake of the particles into the cells. After three hours the samples were fixed and imaged using confocal fluorescence microscopy. Endocytosed nanoparticles were detected throughout the cells (Fig.
6a), indicating successful endocytosis. Following fluorescence imaging, we selected a region of a cell containing both large, bright spots and smaller, dimly fluorescent spots for FIB-SEM imaging (Fig. 6a, inset). Samples were postfixed, stained and embedded for imaging by focused ion beam scanning electron microscopy (FIB-SEM). In FIB-SEM, samples are imaged by scanning the surface of a ROI using the electron beam, after which a thin layer is ablated from the surface using the FIB. This cycle is repeated until the ROI has been imaged, allowing 3D reconstruction of a sample. FIB-SEM on biological samples requires relatively severe staining with heavy metals to obtain sufficient detail of cellular structures, which comes at the risk of obscuring fiducials and exaggerating biological features that may be mistaken for fiducials. In our FIB-SEM data, we found that the combination of the electron-dense gold core and the electron-lucent silica shell made for easy, unequivocal identification of the compartments containing nanoparticles, even in heavily stained samples, allowing easy correlation of fluorescence and FIB-SEM data. The nanoparticles were found in endocytic compartments (Fig. 6b) and could be resolved at an individual particle level. The unique structure of the particles proved helpful in identification, meaning that any fluorescent spot seen in the confocal data could be linked to corresponding particles detected using FIB-SEM (Fig. 5b). Interestingly, brightly fluorescent spots were correlated to compartments containing up to 40 particles (Fig. 6c, blue outline), whereas only 1 or 2 nanoparticles could be found in compartments corresponding to dimly fluorescent spots (Fig. 6c, purple outline), indicating a high level of sensitivity. Thanks to the high resolution and small sectioning distance employed by FIB-SEM, even single particles could be detected and fitted to the fluorescent data. In the FM (Fig. 6a) and FIB-SEM data, particles and clusters of particles were also observed outside the cells. Thorough testing, including dynamic light scattering measurements, showed that initially non-clustered particles were presented to the cells. The observation of clusters outside the cells can be explained by taking into account that these data were recorded after 3 hours of uptake, after the addition of fixative, and after overnight fixation. Within this time frame, particles still present in the medium can aggregate, in particular after the addition of fixative. This aggregation is not expected to affect the initial uptake of the particles, but it could complicate automatic registration strategies. Combined, our data show that the nanoparticles are taken up by HeLa cells and are usable as 3D fiducials, owing to their bright fluorescence and ease of identification in FIB-SEM data.

Conclusions and Outlook

15 nm gold particles coated with a fluorescently labelled silica shell (rhodamine B isothiocyanate, RITC) were successfully synthesized. A relative dye concentration of 10, corresponding to an average fluorophore-fluorophore separation of 11.7 nm, yielded optimum brightness and (photo)stability. Particles labelled with this optimum dye concentration were successfully used as fiducials in a 2D CLEM experiment to correlate widefield FM and TEM images by addition of the fiducials to a cryosection of cells on a TEM grid, demonstrating high registration accuracy in both FM and EM.
After endocytosis of the fiducials by HeLa cells, the particles could also be used as well-defined fiducials to correlate confocal FM and FIB-SEM. In both experiments, the unique core-shell signature of the fiducials proved very useful to identify the fiducials and to accurately determine their centers. This was especially evident in the FIB-SEM data, where a fiducial with only an electron-lucent shell or only a small electron-dense core would be at risk of being misidentified as a cellular structure. In future research, automatic registration procedures will be explored where the distinct core-shell structure of the particles presented here can be used to detect the fiducials in EM. Furthermore, we plan to use the offset between EM and FM positions of the fiducials to correct for FM/EM sample distortions. This opens up the possibility to use the fiducials to test and quantify the accuracy of different data correlation methods. Such a method can, for example, include nonlinear effects such as sample deformation caused by shrinkage of the sample in EM. Furthermore, the unique architecture of the nanoparticles can aid in devising automated correlation strategies based on accurate localization of the nanoparticles within complex biological specimens. Finally, we note that due to the silica shell the particles are non-toxic and compatible with live cell imaging experiments, opening up strategies for live-cell correlative imaging.

Materials and Methods

Synthesis of the fiducial markers. The fiducial markers used in this study were synthesized via a multistep procedure; experimental details are included in the supplementary information (SI-1 and 2). Briefly, gold particles with a diameter around 15 nm were synthesized via sodium citrate reduction 27,28 . After polyvinylpyrrolidone (PVP) functionalization 29 , the particles were transferred to ethanol and coated with a very thin non-fluorescent silica layer using a seeded growth procedure based on the traditional Stöber method [30][31][32] . This layer stabilizes the particles and acts as a spacer between the gold and the fluorophores. Next, the particles were coated with a rhodamine B labelled silica layer. By coupling the fluorophores to (3-aminopropyl)triethoxysilane (APTES) molecules prior to the synthesis, fluorophores were covalently incorporated within the silica matrix 15,33 . Finally, to keep the particles stable, they were coated with a second thin silica layer. To optimize the fluorophore labelling density, particles labelled with different fluorophore densities were synthesized by varying the amount of APTES-fluorophore complex added during growth of the fluorescent silica layer. The particles were characterized to find the optimum labelling density in terms of particle brightness and (photo)chemical stability.

Determination of the fluorophore incorporation efficiency. Immediately after synthesis of the particles, 3 mL of the reaction mixture was transferred to a 5 mL Eppendorf tube. This solution was centrifuged for 15 minutes at 15,000 rcf to separate the particles from the reaction mixture. The supernatant was collected and stored. The particles were redispersed in 3 mL absolute ethanol, and centrifugation and redispersion in ethanol were repeated two more times. 1.5 mL of particle solution and 1.5 mL of a 0.4 M sodium hydroxide solution in water were transferred to a clean 5 mL Eppendorf tube.
After homogenization, the solutions were stored for 48 hours to ensure complete dissolution of the silica shell of the particles. Next, the solutions were centrifuged for 30 minutes at 20,000 rcf to remove the non-dissolved gold cores from the solution. Supernatants, ranging from transparent and colorless for the blank ([Dye] = 0) to transparent pink for high labelling densities ([Dye] = 30), were separated from the red to black pellets and stored. Absorption spectra of all solutions were recorded on a HP8953A spectrophotometer in 1 cm quartz cuvettes. If necessary, samples were diluted with a 1:1 (volume) mixture of ethanol and 0.4 M sodium hydroxide solution.

Spectral and radiative decay measurements. Bulk excitation and emission spectra and radiative decay measurements of the particles suspended in ethanol were recorded in 1 cm quartz cuvettes using an Edinburgh Instruments FLS920 fluorescence spectrometer. In all measurements, fluorescence was detected at an angle of 90° to the exciting beam. Furthermore, a 530 nm longpass filter was placed between the sample and the detector in all measurements to remove residual excitation light. To record excitation and emission spectra, a 450 W xenon lamp and a double excitation monochromator with a grating blazed at 500 nm were used for excitation. Spectra were recorded with a Hamamatsu H74220-60 photosensor module with a grating blazed at 500 nm. For the radiative decay measurements, a picosecond pulsed diode laser (EPL-515) emitting at 509.8 nm with a 50 ns pulse period and a 204.4 ps pulse width was used for excitation. Radiative decay curves were recorded with a Hamamatsu R928 PMT detector with a grating blazed at 500 nm.

Single particle measurements. Measurements were carried out on a Nikon Eclipse Ti widefield microscope equipped with a 40×/0.75 NA Nikon air objective. A Nikon TI-ND6-PFS perfect focus unit was used to retain sample focus during the measurements. A mercury arc lamp in conjunction with a 510-560 nm excitation filter, a 565 nm long pass dichroic mirror and a 590 nm long pass emission filter ensured proper illumination and detection wavelengths. An excitation intensity of 6.0 W/cm² was used in all experiments. Finally, an Andor NEO sCMOS camera was used to record images. Single particle intensities were determined using ThunderSTORM 34 . For the single particle intensity measurements, the obtained data were directly analyzed in Mathematica. To obtain the average single particle intensities, for every sample, data obtained from at least 20 images were plotted in a histogram. The first peak in this histogram was attributed to the single particle intensity, and a normal distribution was fitted to this peak to obtain the mean intensity and the standard deviation. For the bleaching measurements, a second analysis was performed in MATLAB to trace the intensity of single particles from frame to frame. Furthermore, the data were filtered based on the single particle intensities determined from the first frame to remove clusters from the data set. The filtered data were averaged per frame to obtain bleaching curves.

2D CLEM: Widefield and TEM imaging of fiducials on thin cryosections. HT1080 cells stably expressing LAMP-1-GFP were incubated in complete DMEM containing 5 nm diameter colloidal gold particles conjugated to bovine serum albumin (BSA-Au5) for 3 hours. Following incubation, cells were processed for cryosectioning according to a previous protocol 35 .
Briefly, cells were chemically fixed using formaldehyde and glutaraldehyde, scraped from the culture substrate and pelleted in 12% gelatin. Samples were infiltrated overnight in 2.3 M sucrose for cryoprotection and plunge frozen in liquid nitrogen. 70 nm thick cryosections were cut and picked up on copper support grids coated with Formvar and carbon. Sections were treated with DAPI (4 μg/mL) diluted in PBS to label nuclei. After labelling, sections were washed with PBS, incubated with a diluted solution (1/500) of the fiducial markers in water, followed by rinses with PBS and dH2O. The grids were sandwiched between a microscope slide and a #1.5 coverslip in a drop of 50% glycerol in dH2O. Fluorescence imaging for DAPI, GFP and the fluorescent nanoparticles was performed with a DeltaVision RT Core widefield microscope (GE Healthcare) equipped with a Cascade II EM-CCD camera (Photometrics), using a 100x/1.4 NA objective. Following fluorescence imaging, the sections were washed in dH2O, stained with uranyl acetate and embedded in methylcellulose as previously described 35 . ROIs determined in FM were retraced and imaged in a Tecnai T12 TEM (Thermo Scientific). Following imaging, the x and y positions of the fiducials in the fluorescence data were registered using ThunderSTORM 34 . Fiducials not properly resolved in ThunderSTORM were not considered as reference points for registration of the data. In the TEM data, positions of the fiducials were registered manually using the center of the gold core. Correlation of fluorescence and TEM data based on the positions of the particles was performed using eC-CLEM 36 .

3D CLEM: Confocal and FIB-SEM imaging of endocytosed nanoparticles. HeLa cells were grown on gridded glass coverslips, prepared as described by Fermie et al. 37 . Cells were incubated with fiducial markers at a concentration of 1 μg/mL dissolved in complete DMEM for 3 hours, and fixed overnight in 1x PHEM buffer (60 mM PIPES, 25 mM HEPES, 10 mM EGTA, 2 mM MgCl2, pH = 6.9) containing 4% paraformaldehyde (Sigma) and 0.1% glutaraldehyde (Merck) at 4 °C. Following fixation, coverslips with cells were washed in 1x PHEM buffer and mounted in live-cell coverslip holders filled with 1x PHEM buffer to prevent dehydration of the samples. Fluorescence imaging was performed using a Zeiss LSM700 CLSM equipped with a 63x/1.4 NA oil immersion objective. Nanoparticles were excited using the 555 nm laser line at 2% power. Z-stacks were collected with a 200 nm step size. The position of cells relative to the grid of the coverslips was recorded using polarized light. Cells were prepared for electron microscopy according to a protocol described earlier 38 , with minor modifications. Briefly, samples were postfixed using 1% osmium tetroxide (w/v) with 1.5% potassium ferrocyanide (w/v) for 1 h on ice, incubated with 1% thiocarbohydrazide in dH2O (w/v) for 15 min, followed by 1% osmium tetroxide in dH2O for 30 min. Samples were en-bloc stained with 2% uranyl acetate in dH2O for 30 minutes and stained with Walton's lead aspartate for 30 min at 60 °C. Dehydration was performed using a graded ethanol series. Samples were embedded in Epon resin and polymerized for 48-60 h at 65 °C. Polymerized resin blocks were removed from the glass coverslips using liquid nitrogen, mounted on aluminum stubs and rendered conductive using conductive carbon paint and a sputter coated layer of 5 nm Pt.
Following sample preparation, automated serial imaging was performed using a Scios FIB-SEM (Thermo Scientific), according to a previously described workflow 37 . Briefly, trenches were prepared surrounding the region of interest using the FIB, after which automated serial imaging was performed using 5 nm isotropic voxels. Electron microscopy images were collected at an acceleration voltage of 2 kV and a current of 0.2 nA, using the T1 backscattered electron detector. Following imaging, correlation of fluorescence and FIB-SEM data was achieved by manual registration using Fiji and eC-CLEM 36 . FIB-SEM images are presented with inverted contrast to resemble TEM contrast.

Data Availability

The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
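To make the fiducial-based correlation step concrete, the sketch below estimates a 2D affine transform from matched FM and EM fiducial positions by linear least squares. This illustrates the principle behind registering the two coordinate systems; it is not the algorithm implemented in eC-CLEM, and all coordinates are made up.

```python
# Minimal sketch of fiducial-based registration between FM and EM coordinates:
# a least-squares affine transform estimated from matched fiducial positions.
import numpy as np

def fit_affine(fm_xy, em_xy):
    """Fit em ~ fm @ A.T + t from matched (N, 2) coordinate arrays (N >= 3)."""
    fm_h = np.hstack([fm_xy, np.ones((len(fm_xy), 1))])  # homogeneous coords
    params, *_ = np.linalg.lstsq(fm_h, em_xy, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t

def apply_affine(A, t, xy):
    return xy @ A.T + t

# Hypothetical matched fiducial positions (same particles in both modalities)
fm = np.array([[10.2, 5.1], [40.7, 8.3], [22.5, 30.0], [55.1, 44.2]])
em = np.array([[1020.0, 515.0], [4090.0, 790.0], [2270.0, 3010.0], [5520.0, 4400.0]])

A, t = fit_affine(fm, em)
residuals = np.linalg.norm(apply_affine(A, t, fm) - em, axis=1)
print("registration error per fiducial (EM units):", residuals.round(2))
```

The per-fiducial residuals give a direct handle on registration accuracy; in practice one would also hold out fiducials to estimate the error on unseen points.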
PiRNA hsa_piR_019949 promotes chondrocyte anabolic metabolism by inhibiting the expression of lncRNA NEAT1

Background: Osteoarthritis is a prevalent degenerative joint condition typically found in individuals who are aged 50 years or older. In this study, the focus is on PIWI-interacting RNA (piRNA), which belongs to a category of small non-coding RNAs. These piRNAs play a role in the regulation of gene expression and the preservation of genomic stability. The main objective of this research is to examine the expression of a specific piRNA called hsa_piR_019949 in individuals with osteoarthritis, to understand its impact on chondrocyte metabolism within this condition.

Methods: We analyzed piRNA expression in osteoarthritis cartilage using the GEO database. To understand the impact of inflammatory factors on piRNA expression in chondrocytes, we conducted RT-qPCR experiments. We also investigated the effect of piRNA hsa_piR_019949 on chondrocyte proliferation using CCK-8 and clone formation assays. Furthermore, we assessed the influence of piRNA hsa_piR_019949 on chondrocyte apoptosis by conducting flow cytometry analysis. Additionally, we examined the differences in cartilage matrix composition through Safranin O staining and explored the downstream regulatory mechanisms of piRNA using transcriptome sequencing. Lentiviral transfection of NEAT1 and NLRP3 was performed to regulate the metabolism of chondrocytes.

Results: Using RNA sequencing technology, we compared the gene expression profiles of 5 patients with osteoarthritis to 3 normal controls. We found a gene called hsa_piR_019949 that showed differential expression between the two groups. Specifically, hsa_piR_019949 was downregulated in chondrocytes when stimulated by IL-1β, an inflammatory molecule. In further investigations, we discovered that overexpression of hsa_piR_019949 in vitro led to increased proliferation and synthesis of the extracellular matrix in chondrocytes, which are cells responsible for cartilage formation. Conversely, suppressing hsa_piR_019949 expression resulted in increased apoptosis (cell death) and degradation of the extracellular matrix in chondrocytes. Additionally, we found that the NOD-like receptor signaling pathway is linked to the low expression of hsa_piR_019949 in a specific chondrocyte cell line called C28/I2. Furthermore, we observed that hsa_piR_019949 can inhibit the expression of a long non-coding RNA called NEAT1 in chondrocytes. We hypothesize that NEAT1 may serve as a downstream target gene regulated by hsa_piR_019949, potentially influencing chondrocyte metabolism and function in the context of osteoarthritis.

Conclusions: PiRNA hsa_piR_019949 has shown potential in promoting the proliferation of chondrocytes and facilitating the synthesis of extracellular matrix in individuals with osteoarthritis. This is achieved by inhibiting the expression of a long non-coding RNA called NEAT1. The implication is that by using hsa_piR_019949 mimics, which are synthetic versions of the piRNA, as a therapeutic approach, it may be possible to effectively treat osteoarthritis.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13018-023-04511-z.
Introduction

Osteoarthritis (OA) is a prevalent, long-lasting condition that affects the joints. It is characterized by the gradual deterioration and loss of articular cartilage, which results in pain, joint stiffness, and reduced functionality. This degenerative process can significantly impact a patient's overall quality of life [1]. Currently, the available treatment options for osteoarthritis (OA) are typically limited to invasive procedures such as joint replacement surgery and methods aimed at providing symptomatic relief [2]. Hence, it is crucial to explore novel disease mechanisms to enable the development of more precise therapies. Chondrocytes, the only type of cells present in articular cartilage, play key roles in responding to injuries, maintaining tissue stability, and participating in the process of cartilage reconstruction in osteoarthritis (OA). Under normal physiological conditions, chondrocytes strike a dynamic equilibrium between differentiation and apoptosis, as well as between the synthesis and degradation of the cartilage extracellular matrix. However, in the presence of OA, this equilibrium is disrupted, leading to the breakdown of the cartilage matrix and excessive chondrocyte apoptosis. Ultimately, this results in the degeneration of articular cartilage [3]. The regulation of extracellular matrix synthesis and degradation in chondrocytes is of utmost importance in maintaining the metabolic processes of articular cartilage. The molecular mechanisms that control this delicate balance play a critical role in ensuring the overall health and functionality of the cartilage [4]. Identifying and targeting this particular aspect could prove crucial in therapeutic interventions aimed at treating osteoarthritis and effectively delaying the degeneration of cartilage.

PIWI-interacting RNA (piRNA) is a group of small RNA molecules that do not code for proteins. They typically have lengths ranging from 24 to 35 nucleotides. One of the distinguishing features of piRNA is the presence of a 2′-O-methylation modification at the 3′ end. These piRNAs are specifically recognized and bound by PIWI proteins, which explains the name "PIWI-interacting RNA" [5,6]. Early research indicated that piRNAs were predominantly found in the reproductive system. Their primary roles were believed to include safeguarding the genome by facilitating the breakdown of transcripts and regulating the structure of chromatin to inhibit the activity of transposons [7]. Advancements in high-throughput sequencing technologies have led to the discovery of approximately 20,000 piRNA genes within the human genome [8]. There is a growing body of evidence indicating that piRNAs, previously thought to be exclusive to germline cells, are also present in somatic cells. The disruption of piRNA expression and function has been linked to several health conditions, including cancer, reproductive disorders, neurodegenerative diseases, and the process of aging [9][10][11].
The current focus of research on piRNA revolves around its role in regulating cancer occurrence and development. Specific types of cancer, including colorectal cancer, breast cancer, and lung cancer, are particularly studied in this context. Moreover, recent studies also suggest that small interfering RNAs contribute to various human diseases such as osteoporosis, rheumatoid arthritis, tendon injuries, tendon homeostasis, and osteoarthritis [12][13][14][15][16]. To date, there have been no reported studies examining the relationship between piRNAs (PIWI-interacting RNAs) and osteoarthritis or cartilage. Additionally, the exact role of piRNAs in chondrocytes remains unclear. Further research is required to investigate the impact of piRNAs on cartilage metabolism in osteoarthritis and to elucidate the mechanisms through which piRNAs regulate long non-coding RNAs (lncRNAs) and activate inflammatory processes.

Bioinformatics analysis

Bioinformatics analysis was conducted using OmicStudio software (https://www.omicstudio.cn/tool). Volcano plots and other graphics were generated using R on the OmicStudio platform (https://www.r-project.org/; https://www.omicstudio.cn/tool). A small RNA sequencing dataset (GSE143514) containing samples from 3 osteoarthritis (OA) patients and 5 normal controls was downloaded from the GEO database for piRNA expression analysis. The Cutadapt program was utilized to remove adapter sequences from the raw sequencing data. The Trimmomatic program was employed to discard low-quality sequences and obtain clean data. The FastQC program was used to assess the quality of the clean data, and fragments larger than 15 nucleotides were retained for subsequent analysis. The clean data were aligned to the piRNA database and the genome using the bowtie program. Differential expression analysis of piRNAs was performed using edgeR, and piRNAs with significant differences between the two comparison groups were selected based on a P value < 0.05 and a fold change (FC) > 2 or FC < 0.5 (a sketch of this filtering step is given below).

Cell transfection

The hsa_piR_019949 mimic, hsa_piR_019949 inhibitor, and their negative controls were supplied by General Biol (Anhui, China). Cell transfection was conducted using the Lipofectamine 2000 reagent kit (Thermo Fisher, California, USA). The transfection dose of the hsa_piR_019949 mimic and the hsa_piR_019949 inhibitor was 50 nM each. The subsequent experiments were conducted 48 h after transfection.
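A minimal sketch of the significance filter described in the bioinformatics section above, applied to a hypothetical results table of the kind edgeR exports (the differential-expression test itself runs in R; only the downstream P-value/fold-change filtering is illustrated here, in Python):

```python
# Apply the selection criteria P < 0.05 and FC > 2 or FC < 0.5 to a
# hypothetical edgeR-style results table (columns: piRNA, logFC, PValue).
import pandas as pd

results = pd.DataFrame({
    "piRNA":  ["hsa_piR_019949", "hsa_piR_000001", "hsa_piR_000002"],
    "logFC":  [-2.3, 0.4, 1.6],      # log2 fold change, OA vs normal (made up)
    "PValue": [0.004, 0.62, 0.03],
})

fc = 2.0 ** results["logFC"]          # convert log2 fold change back to linear FC
significant = results[(results["PValue"] < 0.05) & ((fc > 2.0) | (fc < 0.5))]
print(significant)   # keeps hsa_piR_019949 (FC ~ 0.2) and hsa_piR_000002 (FC ~ 3.0)
```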
Real-time qPCR

Total RNA was extracted from cells using TRIzol reagent (Invitrogen, California, USA). For piRNA analysis, the piRNA was reverse transcribed into cDNA using the EZB-miRT2-plus-L kit (EZBioscience, Roseville, USA), and the relative expression level of piRNA was normalized to the U6 control. For mRNA analysis, the mRNA was reverse transcribed into cDNA using the HiScript III 1st Strand cDNA Synthesis Kit (Vazyme, Nanjing, China), and the relative expression levels of genes were normalized to the internal control GAPDH. Real-time quantitative polymerase chain reaction (qPCR) was performed on a LightCycler 480 instrument using the ChamQ Universal SYBR qPCR Master Mix kit (Vazyme, Nanjing, China), following the manufacturer's instructions. The 2^(−ΔΔCt) method was used to calculate the relative expression of piRNA and mRNA (a worked example is given below). All qPCR experiments were conducted in triplicate using a LightCycler 480 System (Roche, Basel, Switzerland), and the primers were obtained from Sangon Biotech (Shanghai, China) Co., Ltd. The sequence of hsa_piR_019949 is as follows: 5′-GCC UGG AUA GCU CAG UUG GUA GAG CAU CAG A-3′. The sequences of the primers are listed in Additional file 1: Table S1.

Flow cytometry

Apoptosis of C28/I2 cells was analyzed using the Annexin V-FITC Apoptosis Detection kit (Beyotime, Shanghai, China). C28/I2 cells were digested with trypsin without EDTA, and 3 mL of cell suspension was transferred into 10 mL centrifuge tubes and centrifuged at 500 rpm for 5 min. After removing the culture medium, the cells were washed with PBS and centrifuged again at 500 rpm for 5 min. The supernatant was discarded, and the cells were resuspended in 100 μL of binding buffer. Then, 5 μL of Annexin V-FITC and 5 μL of propidium iodide (PI) were added and gently mixed. The mixture was incubated in the dark at room temperature for 15 min. Flow cytometry analysis was performed using a BD Aria III flow cytometer (BD, New Jersey, USA) to detect the fluorescence of FITC and PI and calculate the apoptotic rate. The flow cytometry results were analyzed using FlowJo software v10.

Clone formation analysis

The clone formation assay was used to assess the ability of cells to form colonies. Briefly, C28/I2 cells were seeded in 6-well plates at a density of 200 cells per well. After 24 h of incubation, the cells were treated with the experimental conditions. The medium was changed every three days to maintain cell viability. After 14 days of incubation, the cells were fixed with 4% paraformaldehyde for 15 min and stained with crystal violet for 30 min. Excess stain was washed off with water, and the plates were air-dried. The number of colonies containing at least 50 cells was counted under a microscope. The clone formation efficiency (CFE) was calculated as the ratio of the number of colonies formed to the number of cells seeded, multiplied by 100%. This analysis was performed in triplicate for each experimental condition. Statistical analysis was carried out using appropriate tests to determine significant differences between groups.
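As a worked illustration of the 2^(−ΔΔCt) calculation referenced in the qPCR section above, with made-up triplicate Ct values (piRNA normalized to U6, with the control group as calibrator):

```python
# 2^(-ΔΔCt) relative expression with hypothetical Ct values.
import numpy as np

ct_target_control = np.array([26.1, 26.3, 26.0])   # hsa_piR_019949, control
ct_u6_control     = np.array([18.0, 18.2, 18.1])   # U6 reference, control
ct_target_treated = np.array([28.4, 28.6, 28.3])   # hsa_piR_019949, IL-1β treated
ct_u6_treated     = np.array([18.1, 18.0, 18.2])   # U6 reference, treated

dct_control = ct_target_control.mean() - ct_u6_control.mean()   # ΔCt, control
dct_treated = ct_target_treated.mean() - ct_u6_treated.mean()   # ΔCt, treated
ddct = dct_treated - dct_control                                 # ΔΔCt
fold_change = 2.0 ** (-ddct)

print(f"ΔΔCt = {ddct:.2f}, relative expression = {fold_change:.2f}")
# ~0.20 here, i.e. roughly 5-fold downregulation relative to control
```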
CCK-8 analysis

The Cell Counting Kit-8 (CCK-8) assay was used to evaluate cell viability and proliferation. C28/I2 cells were seeded in a 96-well plate at a density of 2,000 cells per well. After 0, 12, 24, 48, 72, and 96 h, 10 μL of CCK-8 solution was added to each well, and the plate was incubated for an additional 2 h at 37 °C. The absorbance was measured at 450 nm using a microplate reader. Cell viability was calculated by comparing the absorbance of the treated cells to that of the control cells (a sketch of this calculation is given below). The assay was performed in triplicate for each experimental condition, and the results are expressed as the mean ± standard deviation. Statistical analysis was conducted to determine significant differences between groups using appropriate tests.

Immunofluorescence staining

Immunofluorescence staining was used to investigate the expression of Collagen II and MMP13 in chondrocytes. Samples were first fixed with a fixative solution and then permeabilized with Triton X-100. A blocking buffer containing bovine serum albumin (BSA) was employed to prevent nonspecific binding. Primary antibodies specific to Collagen II and MMP13 were then added to the sample and incubated overnight at 4 °C. Subsequently, the sample was washed multiple times with phosphate-buffered saline (PBS) to remove any unbound antibodies. The Cy3-conjugated secondary antibody was then applied and incubated at room temperature for 1 h. After thorough washing, the sample was mounted on a glass slide using anti-fade mounting medium containing 4′,6-diamidino-2-phenylindole (DAPI) for nuclear staining. Finally, the glass slide was examined under a fluorescence microscope, and images were captured using a filter set suitable for each fluorophore. The fluorescence signal was quantified and analyzed using image analysis software.

Safranin O staining

C28/I2 cells were seeded in 6-well plates at a density of 20,000 cells per well. After reaching approximately 70-80% confluency, the cells were fixed with 4% paraformaldehyde for 15 min at room temperature. Cells were then washed with phosphate-buffered saline (PBS) and stained with Safranin O solution (0.1% Safranin O in distilled water) for 10 min at room temperature. Excess stain was removed by washing with distilled water, and the stained cells were visualized using a brightfield microscope. Images were captured, and the intensity of Safranin O staining was quantified using image analysis software.

Virus transduction

The target gene or RNA sequence was inserted into the appropriate site of the lentiviral vector. The C28/I2 cell line was cultured and expanded, and once the cell density reached an appropriate level, polyethyleneimine or polyvinyl alcohol was added to transduce the lentiviral vector into the cells. After transduction, the cells were treated appropriately to facilitate integration and expression of the lentiviral construct.
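A small sketch of the relative-viability calculation from the CCK-8 section above. The subtraction of a medium-only blank is an assumption on our part; the text only states that treated absorbance is compared to control absorbance. All numbers are hypothetical.

```python
# Relative viability from CCK-8 absorbance readings at 450 nm.
import numpy as np

a450_treated = np.array([0.82, 0.85, 0.80])   # hypothetical OD450, triplicate
a450_control = np.array([0.61, 0.63, 0.60])
a450_blank   = 0.08                           # medium + CCK-8, no cells (assumed)

viability = ((a450_treated.mean() - a450_blank)
             / (a450_control.mean() - a450_blank) * 100)
print(f"relative viability: {viability:.0f}% of control")  # ~139% here
```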
Statistical analysis

All experiments in this study were performed in triplicate and showed consistent results. The data are presented as mean ± standard deviation (SD). Statistical analysis was performed using the unpaired two-tailed Student's t-test for comparisons between two groups, and one-way analysis of variance (ANOVA) or two-way ANOVA followed by Tukey's post hoc test for comparisons among multiple groups. A P value less than 0.05 was considered statistically significant. GraphPad Prism software (Version 6.01) was used for all statistical analyses. Differentially expressed genes were identified based on the criteria of fold change ≥ 2 and P value ≤ 0.05, as determined by bioinformatics analysis and mRNA sequencing results.

PiRNA hsa_piR_019949 is downregulated in chondrocytes stimulated by inflammation

By comparing the RNA expression profiles of knee chondrocytes in patients with osteoarthritis (OA) and normal controls, we identified 214 genes that are differentially expressed in the OA environment (Fig. 1A). Of these genes, 25 were significantly upregulated and 189 were significantly downregulated. Among the differentially expressed piRNAs, we discovered a small RNA called piRNA hsa_piR_019949, which exhibited the greatest reduction in expression in C28/I2 cells under IL-1β stimulation (Fig. 1B and Additional file 2: Fig. S1). To confirm our findings, we analyzed the C28/I2 cell line's RNA after stimulating it with the inflammatory factor IL-1β at varying concentrations (10 ng/mL, 20 ng/mL, 30 ng/mL). Following stimulation, we observed a notable downregulation of piRNA hsa_piR_019949, and its expression level exhibited an inverse relationship with the inflammatory factor concentration (Fig. 1C). Additionally, we collected cartilage samples from 10 patients undergoing knee replacement surgery. For the osteoarthritis group, the cartilage tissue was obtained from weight-bearing areas of the joints, while for the normal control group, it was taken from non-worn areas. Upon extracting total RNA, we found a significant downregulation of hsa_piR_019949 expression in human knee arthritis tissues (Fig. 1D). These results provide further evidence for the potential contribution of piRNA hsa_piR_019949 to the occurrence of osteoarthritis and its role in inflammatory regulation.

Overexpression of hsa_piR_019949 can promote the anabolism and inhibit catabolism of chondrocytes in vitro

To validate the function of piRNA hsa_piR_019949, we carried out a number of experiments. First, we created a cellular model that overexpressed piRNA hsa_piR_019949 by designing a mimic of the piRNA and transfecting it into the C28/I2 chondrocyte cell line (Fig. 2A). Our CCK-8 assay results indicated that the overexpression of piRNA hsa_piR_019949 significantly enhanced cell proliferation in the chondrocyte cell line after 48 h (Fig. 2B). Similarly, the colony formation assay demonstrated consistent findings (Fig. 2C, D). We labeled cells with apoptosis-specific fluorescent markers and employed flow cytometry to assess the cell apoptosis rate. Our analysis revealed that the chondrocyte cell line transfected with the piRNA hsa_piR_019949 mimic exhibited a significant reduction in apoptosis compared to the control (Fig. 2E), thus indicating an increase in viable cells (Fig.
2F). Furthermore, Safranin O staining was employed to examine the impact of piRNA hsa_piR_019949 on chondrocytes. The results indicated that chondrocytes transfected with the piRNA hsa_piR_019949 mimic exhibited darker staining, which suggests the presence of a greater amount of cartilage matrix (Fig. 2G and Additional file 3: Fig. S2A). By utilizing RT-qPCR and immunofluorescence protein verification, we found that transfection with the piRNA hsa_piR_019949 mimic resulted in a significant increase in the transcription and protein levels of Collagen II (Fig. 2H, I). Additionally, it was found that the expression of MMP13 had decreased (Fig. 2J, K).

Knockdown of hsa_piR_019949 can inhibit the anabolism and promote catabolism of chondrocytes in vitro

In our study, we created a model of the human normal chondrocyte cell line with suppressed expression of hsa_piR_019949 by developing an inhibitor for hsa_piR_019949 (Fig. 3A). The results obtained from CCK-8 and clone proliferation experiments demonstrated that the inhibition of hsa_piR_019949 expression led to a decrease in the proliferation of the chondrocyte cell line (Fig. 3B-D). Furthermore, the analysis of apoptosis using flow cytometry revealed that the suppression of hsa_piR_019949 expression promoted cell death in an inflammatory environment (Fig. 3E, F). Staining with Safranin O showed a lighter color after transfecting the chondrocyte cell line with the hsa_piR_019949 inhibitor, indicating a reduction in the cartilage matrix (Fig. 3G and Additional file 3: Fig. S2B). Furthermore, the expression of Collagen II decreased upon inhibition of hsa_piR_019949 (Fig. 3H, I), whereas the expression of MMP-13 increased (Fig. 3J, K).

Fig. 1 illustrates the downregulation of the piRNA hsa_piR_019949 in osteoarthritis. A The heatmap depicts the variation in expression levels of piRNAs between cartilage affected by osteoarthritis (OA) and healthy, normal cartilage. B The volcano map illustrates the differential expression patterns of piRNAs in cartilage affected by osteoarthritis (OA) compared to normal cartilage. C The expression of hsa_piR_019949 in C28/I2 cells, stimulated with IL-1β at concentrations of 10 ng/ml, 20 ng/ml, and 30 ng/ml, was detected using RT-qPCR. D The expression of hsa_piR_019949 was analyzed using RT-qPCR in both OA (osteoarthritis) and normal tissues. Statistically significant differences are indicated by *P < 0.05, **P < 0.01. n = 3/group; piRNAs = piwi-interacting RNAs; OA = osteoarthritis; NC = negative control; RT-qPCR = real-time quantitative PCR.

Fig. 3 Knockdown of piRNA hsa_piR_019949 leads to a decrease in the anabolic metabolism and an increase in the catabolic metabolism of chondrocytes. A The expression of hsa_piR_019949 in chondrocytes was examined after transfection with an inhibitor. B Cell proliferation was assessed using CCK-8 assays. C The ability of chondrocytes to form clones was evaluated through crystal violet staining. D The number of clones formed by chondrocytes was quantitatively analyzed. E The apoptosis of chondrocytes transfected with the inhibitor was measured using flow cytometry, F specifically looking at the percentage of normal cells. G The quantification of cartilage extracellular matrix was determined using Safranin O staining. The protein expression of collagen II (H) and MMP13 (J) in chondrocytes transfected with the hsa_piR_019949 inhibitor was detected through immunofluorescence staining. The mRNA expression of collagen II (I) and MMP13 (K) in chondrocytes transfected with the hsa_piR_019949 inhibitor was detected using RT-qPCR. Statistically significant differences are indicated by *P < 0.05, **P < 0.01. n = 3/group.

NOD-like receptor signaling pathway is correlated with high expression of hsa_piR_019949 in C28/I2

To investigate the regulatory mechanism of hsa_piR_019949 in C28/I2 metabolism, we conducted RNA sequencing analysis on three samples transfected with the hsa_piR_019949 mimic and three controls. Through differential gene expression analysis, we generated volcano plots and heatmaps to visualize the differences in gene expression. Notably, we found that 19,754 genes were upregulated, while 19,128 genes, including the lncRNA NEAT1 and NLRP3, were downregulated (Fig. 4A, B). Subsequent GO (Gene Ontology) and KEGG (Kyoto Encyclopedia of Genes and Genomes) analyses enabled the identification of potential downstream signaling pathways influenced by hsa_piR_019949. Several biological functions related to defense against viruses, immune system processes, and signaling pathways such as Human papillomavirus infection, the RIG-I-like receptor signaling pathway, and the NOD-like receptor signaling pathway were found to be associated with the expression of hsa_piR_019949. Previous studies have suggested that the NOD-like receptor signaling pathway is linked to the activation of the inflammasome protein NLRP3 [17] (Fig. 4C, D). Hence, it is plausible that hsa_piR_019949 may impact C28/I2 metabolism by modulating the NOD-like receptor signaling pathway. This novel finding provides valuable insights for exploring the functions and mechanisms of hsa_piR_019949 in the future.

LncRNA NEAT1 serves as a downstream target gene regulated by hsa_piR_019949 in chondrocyte metabolism

The expression levels of lncRNA NEAT1 were validated to be downregulated following transfection with the hsa_piR_019949 mimic, whereas the inhibitor group showed upregulated expression, as determined using RT-qPCR (Fig. 5A). Additionally, the downregulation of NLRP3, a key gene in the NOD-like receptor signaling pathway, was confirmed with elevated expression of hsa_piR_019949 (Fig. 5B). Following transfection with the hsa_piR_019949 mimic, lentivirus-mediated transfection of NEAT1 (Additional file 4: Fig. S3) and NLRP3 was performed separately in both control and experimental groups. By comparing the cartilage composition using Safranin O staining, it was observed that the promoting effect of hsa_piR_019949 on chondrocyte synthetic metabolism was suppressed by NEAT1 and NLRP3. This suggests that NEAT1 and NLRP3 may function as downstream targets of hsa_piR_019949 (Fig. 5C, D and Additional file 5: Fig. S4).
Discussion

In summary, the piRNA hsa_piR_019949 is downregulated in chondrocytes when inflammation occurs. A comparison of RNA expression profiles in knee chondrocytes between patients with osteoarthritis (OA) and normal controls identified 214 genes that showed changes. Among these, 25 genes were significantly upregulated and 189 genes were significantly downregulated. One of the small RNAs found was hsa_piR_019949. Further validation demonstrated that the expression level of hsa_piR_019949 decreased significantly under the stimulation of inflammatory factors and showed an inverse correlation with the concentration of these factors. Moreover, hsa_piR_019949 was also found to be significantly downregulated in the articular cartilage of OA patients, reinforcing its potential role in the development of OA and the regulation of inflammation.

Further research revealed that hsa_piR_019949 is involved in the functional regulation of chondrocytes in vitro by influencing chondrocyte synthesis and degradation. Overexpression of hsa_piR_019949 promoted chondrocyte proliferation, reduced apoptosis, and increased cartilage matrix synthesis. Conversely, inhibiting the expression of hsa_piR_019949 suppressed chondrocyte proliferation, promoted apoptosis, and facilitated degradation of the cartilage matrix. Additionally, hsa_piR_019949 is associated with the NOD-like receptor signaling pathway, impacting chondrocyte metabolism through the regulation of this signaling pathway.

Currently, research on non-coding RNA in osteoarthritis focuses primarily on microRNAs (miRNAs). Studies by Ito et al. [18] have shown that microRNA-455 (miR-455), specifically miR-455-5p and miR-455-3p, is highly expressed in human and mouse primary chondrocytes. These miRNAs up-regulate the key transcription factor Sox9. Another study, by Yen-You Lin et al. [19], uncovered that miR-144-3p can directly bind to IL-1β, reducing its expression level and thus alleviating the damage it causes to cartilage. Additionally, miR-144-3p was found to slow down the progression of osteoarthritis in a rat model of anterior cruciate ligament and meniscus surgery.
NEAT1 is an RNA molecule that interacts with various RNA-binding proteins and plays a role in important biological processes such as RNA metabolism, transcriptional regulation, and cellular stress responses [20,21]. It has also been found to be highly expressed in various tumors and is involved in tumor initiation and progression [22,23]. NEAT1 regulates the expression and activity of NLRP3, an important component of the inflammasome involved in inflammation and immune regulation [24]. Overexpression of NEAT1 leads to excessive activation of NLRP3 and a heightened inflammatory response [25]. NEAT1 interacts with NLRP3 mRNA, controlling its stability and translation. It also forms complexes with other RNA-binding proteins to regulate NLRP3 inflammasome formation and activation. The NEAT1-NLRP3 axis is implicated in various diseases, including inflammatory conditions such as rheumatoid arthritis and inflammatory bowel disease, as well as neurodegenerative disorders such as Alzheimer's and Parkinson's diseases [26-29]. Understanding the relationship between NEAT1 and NLRP3 is therefore crucial for understanding inflammatory responses and immune regulation. In osteoarthritis, suppression of NEAT1 and NLRP3 appears to have a relieving effect. This study therefore used hsa_piR_019949 to regulate the expression of NEAT1 and NLRP3 in order to reverse chondrocyte apoptosis and extracellular matrix degradation, and investigated whether lncRNA NEAT1 is a downstream target gene of hsa_piR_019949 in chondrocyte metabolism. Following transfection with the hsa_piR_019949 mimic, the expression levels of NEAT1 and NLRP3 were downregulated. Further experiments confirmed that overexpression of NEAT1 and NLRP3 suppresses the promoting effect of hsa_piR_019949 on chondrocyte anabolism. These findings suggest that NEAT1 and NLRP3 may indeed be downstream target genes of hsa_piR_019949.

According to Liu et al. [30], the NLRP3 pathway plays a significant role in promoting oxidative stress and apoptosis under synovitis-induced stimulation. Specifically, the ALPK1/NF-κB signaling pathway disrupts the redox balance and increases the production of ROS by inducing cytokine secretion, including IL-1β and TNF-α. This, in turn, promotes the NLRP3/Caspase-1/GSDMD signaling pathway, leading to apoptosis. The study also suggests that intervention in the NLRP3 pathway can reduce oxidative stress and inflammatory damage. Li et al. [31] demonstrated that iron overload leads to the formation of NLRP3 inflammasomes, resulting in chondrocyte apoptosis and arthritis. Their study showed that cardamonin (CAR) can inhibit NLRP3 by activating SIRT1 and suppressing the p38 MAPK signaling pathway. This intervention reduces the chondrocyte apoptosis and arthritis caused by iron overload. Therefore, the NLRP3 pathway plays a role in inducing inflammatory reactions and chondrocyte apoptosis, and inhibiting NLRP3 with CAR can treat iron-overload-induced arthritis. Zheng et al.
[32] believe that NLRP3 is involved in the occurrence and development of inflammatory reactions and oxidative damage in articular cartilage. NLRP3-mediated pyroptosis plays a crucial role in the pathological process of osteoarthritis, and activation of the NF-κB pathway exacerbates the inflammatory response. Their study investigated the effects of paroxetine, an antidepressant, on chondrocytes in articular cartilage. They measured the expression levels of NLRP3, IL-1β, CASP1, and other related proteins in ATDC5 cells and mouse models. The results showed that paroxetine treatment reduced the inflammatory response in articular cartilage by inhibiting the activation of the NF-κB signaling pathway and suppressing pyroptosis.

In summary, this study revealed that the piRNA hsa_piR_019949 may play a significant role in the development of osteoarthritis (OA) and the regulation of inflammation. It also demonstrated the functional regulation by this specific piRNA of cartilage matrix synthesis and degradation in chondrocytes. Furthermore, the study examined the relationship between hsa_piR_019949 and the NOD-like receptor signaling pathway, as well as NEAT1/NLRP3. These findings offer valuable insights for further investigation into the function and mechanism of hsa_piR_019949 in OA.

Conclusion

In conclusion, our study indicates that targeting the piRNA hsa_piR_019949 and suppressing the expression of lncRNA NEAT1 holds promise as a therapeutic strategy for osteoarthritis. This approach has the potential to stimulate the growth of chondrocytes and improve the production of extracellular matrix, which are crucial for the repair and regeneration of cartilage. Further investigation is necessary to fully understand the underlying mechanisms and to establish the effectiveness and safety of hsa_piR_019949 mimics as a treatment for osteoarthritis.

Fig. 2 The overexpression of piRNA hsa_piR_019949 promotes the anabolic metabolism and inhibits the catabolic metabolism of chondrocytes. A The expression of hsa_piR_019949 in chondrocytes transfected with hsa_piR_019949 mimics. B The detection of chondrocytes transfected with hsa_piR_019949 mimics was accomplished using CCK-8 assays. C The ability of chondrocytes transfected with hsa_piR_019949 mimics to form clones was detected using crystal violet staining. D Quantitative analysis of the clone formation number of chondrocytes. E Flow cytometry was employed to detect the apoptosis of chondrocytes transfected with hsa_piR_019949 mimics. F Quantitative analysis of the normal chondrocytes detected by flow cytometry. G Safranine O was used to quantify the cartilage extracellular matrix. The protein expression of collagen II (H) and MMP13 (J) in chondrocytes transfected with hsa_piR_019949 mimics was detected through immunofluorescence staining. The mRNA expression of collagen II (I) and MMP13 (K) in chondrocytes transfected with hsa_piR_019949 mimics was detected using RT-qPCR. Statistically significant differences are indicated by *P < 0.05, **P < 0.01. n = 3/group
Fig. 4 The NOD-like receptor signaling pathway is associated with high expression levels of hsa_piR_019949 in C28/I2 cells. A The heatmap displays the contrasting levels of piRNAs expressed in C28/I2 cells transfected with hsa_piR_019949 mimics in comparison to the normal control. B The volcano map illustrates the differential expression of piRNAs in C28/I2 cells transfected with hsa_piR_019949 mimics compared to the normal control. C The GO enrichment analysis highlights the enrichment of differentially expressed mRNA in Gene Ontology categories. D The KEGG enrichment analysis reveals the enrichment of differentially expressed mRNA in Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways.

Fig. 5 Long non-coding RNA NEAT1 acts as a gene that is influenced by hsa_piR_019949 in the regulation of chondrocyte metabolism. A The mRNA expression of lncRNA NEAT1 in chondrocytes transfected with hsa_piR_019949 mimics and inhibitor was detected by RT-qPCR. B The expression of NLRP3 in chondrocytes transfected with hsa_piR_019949 mimics and inhibitor was detected by RT-qPCR. C The quantification of cartilage extracellular matrix after transfection with NEAT1 was detected by Safranine O staining. D The quantification of cartilage extracellular matrix after transfection with NLRP3 was detected by Safranine O staining.
6,560.4
2024-01-04T00:00:00.000
[ "Medicine", "Biology" ]
Exploring Household Food Dynamics During the COVID-19 Pandemic in Morocco

Alongside the dramatic impact on health systems, eating, shopping, and other food-related habits may have been affected by the COVID-19 crisis. This paper analyses the impacts of the COVID-19 pandemic on food shopping habits and food-related activities of a diverse sample of 340 adult consumers in Morocco. The study is based on an online survey conducted in Morocco from September 15 to November 5, 2020, utilizing a standardized questionnaire delivered in French and Arabic via Survey Monkey. The findings show that consumers' diet, shopping behavior, and food interactions have changed significantly. Indeed, the survey outcomes indicated (i) an increase in the consumption of local items owing to food safety concerns; (ii) an increase in online grocery shopping; (iii) a rise in panic buying and food hoarding; and (iv) an increase in culinary capabilities. The findings are expected to help guide Morocco's current emergency measures as well as long-term food-related policies.

INTRODUCTION

The Coronavirus Disease 2019 (COVID-19), discovered in Wuhan (China) toward the end of 2019 (1), is now one of the most critical issues confronting humanity (2). Alongside the dramatic impact on health systems, the COVID-19 pandemic is expected to have dire effects on societies' socioeconomic development and people's livelihoods worldwide (2). The pandemic is even considered a severe threat to achieving the Sustainable Development Goals (SDGs) encompassed in the 2030 Agenda for Sustainable Development (3). Further, a growing body of studies indicates that COVID-19 has altered food systems (4, 5) with consequences on food and nutrition security (6-9). The COVID-19 epidemic created socio-economic shocks that impacted the global functioning of agricultural and food systems, as well as the food security situation of millions of people worldwide (10). Indeed, measures taken by governments to reduce and slow down the spread of the virus (e.g., lockdowns, mobility restrictions, shop closures) have affected several production sectors (e.g., agriculture) and value chains and disrupted international trade (11, 12). As a result, food production has generally declined during the pandemic, and all food chain stages have been disrupted (6, 13-17). Furthermore, survival psychology recognizes that individuals can experience behavioral adjustments due to specific circumstances such as natural disasters and health emergencies. These behavioral shifts may affect attitudes and behaviors related to food consumption (18). In this context, many articles show that the COVID-19 pandemic induced changes in food-related behaviors (19-22). Indeed, the pandemic has influenced food access and shopping behavior (23), food consumption habits and diets (24-28), as well as food wastage behavior (29). However, the effects of the COVID-19 pandemic are not alike across countries (11, 21, 27, 28, 30). El Bilali (31) argues that the pandemic is significantly affecting developing countries and vulnerable groups. Morocco, a middle-income developing country, is the second most affected country by COVID-19 in Africa, after South Africa, and one of the most affected in the Near East and North Africa (NENA) region. As of February 21, 2021, Morocco recorded 480,948 confirmed cases of COVID-19 and 8,548 deaths (32). The first five confirmed cases of COVID-19 were reported in Morocco on March 2, 2020.
Since then, the number of new cases per day steadily increased to reach a peak of 6,195 on November 13, 2020. Daily new cases then slowly decreased, reaching 950 cases on December 29, 2020 (33). Morocco had initially successfully controlled the outbreak between March and May 2020 by taking several strong measures, ranging from mobility restrictions (within the kingdom as well as travel bans) to mandatory confinement. However, the government's priorities changed as the pandemic, and the related lockdown strategy, caused economic losses. As a consequence, Moroccan decision-makers eased precautionary measures on June 20, in favor of resuming economic activity and easing the financial constraints of individuals and businesses (34). Since then, the Moroccan government has tried to balance recovering the economy and saving lives by applying local measures instead of general ones, such as the lockdown applied in Casablanca during the first weeks of October 2020 (Table 1). These measures helped mitigate the public health threat, but many economic sectors, including the "informal sector," were severely affected (34, 41). In this context, Haddad et al. (42) predict a decrease in Morocco's GDP and state that "the main losses are concentrated in the regions that most contribute to the country's GDP, which coincide with the most densely populated areas, and that the most affected sectors are labor and flow-intensive" (p. 10). Consequently, the government modified its priorities and eased the precautionary measures in June to relaunch the economy and reduce the financial pressure on citizens and enterprises (34). Also, Morocco created an emergency fund for the management of the COVID-19 pandemic (endowed initially with 10 billion MAD, about $1 billion) to upgrade health infrastructure and support the most affected economic sectors (43), including tourism, air transport, and some exporting sectors (e.g., the textile and automotive sectors) (41). The scholarly literature on the effects of the COVID-19 pandemic on food systems and consumption patterns has so far been mostly geographically biased; it focuses on Western and Southern Europe, North America, and China (30), while developing countries in general, and those of the NENA region in particular, such as Morocco, have been overlooked. The analysis of the scholarly literature shows that most of the papers dealing with the COVID-19 emergency in Morocco focus on the dynamics of the spread of the virus as well as its health impacts (44-58). Other articles analyze the pandemic's socio-economic impacts in the kingdom (45, 59). Further papers address some specific impacts of the pandemic, such as on education (60, 61), transport (62), quality of life and wellbeing (63-65), and environmental pollution (66-68). Ouhsine et al. (69) analyze the impacts of the pandemic on solid waste generation in Moroccan households. However, their study was carried out only in two small municipalities in central Morocco (viz. Khenifra and Tighassaline) and does not specifically address food waste (it refers to a generic "organic fraction" without further distinction). Therefore, there is no comprehensive, nationwide analysis of food consumption behavior during the COVID-19 pandemic in Morocco. To fill this knowledge gap, the present article analyses the effects of the COVID-19 pandemic on food acquisition, access, and consumption in Morocco.
In particular, the paper sheds light on how the COVID-19 emergency, and the consequent confinement measures and lockdown, affected food-related behavior in Moroccan households.

DATA COLLECTION AND METHODS

The study is based on an online survey conducted in Morocco using a standardized questionnaire. It was conducted in Arabic and French using Survey Monkey from September 15 to November 5, 2020. The survey link was circulated by e-mail and social media (e.g., Facebook, LinkedIn, Instagram). The snowball-sampling approach was used, and participants were requested to distribute the online questionnaire. The study targets the general adult population (age > 18 years) in Morocco. Participation in the survey was entirely voluntary, and there was no monetary incentive to do so. We used a non-probability sampling method. The study was performed in compliance with the principles set out in the Helsinki Declaration, and all procedures concerning research participants were authorized by the Western Michigan University Human Subjects Institutional Review Board (HSIRB). At the beginning of the survey, all participants were informed about the research's objective and context and gave their digital informed consent regarding privacy and information management policies. The questionnaire consisted of 24 questions of different types (multiple-choice and single-option), divided into three sections: [1] 9 questions on the socio-demographics of the respondents (e.g., education level, gender, income); [2] 13 questions on food purchase and consumption behavior (e.g., food shopping behavior, food-related activities, food waste); and [3] 2 questions on positive and negative emotions during the pandemic. A pre-test was performed with 21 participants to assure data quality, and the feedback was used to adjust the survey before its administration. A total of 340 valid answers were collected. For multiple-choice socio-demographic questions, response options depended on the question's nature. For example, for question 5, "How would you describe your household income compared to other households in Morocco?", response options were: much lower than most other households / slightly lower than most other households / about the same as most other households / slightly higher than other households / much higher than other households. For some multiple-choice questions, a Likert scale was used and response options were: never = 0; first time = 1; much less = 2; slightly less = 3; about the same = 4; moderately more = 5; much more = 6. For some other multiple-choice questions, response options were a 5-point Likert scale: 1 (not at all), 2, 3, 4, 5 (very much). The questionnaire was carefully constructed to reduce the threat of common method variance and mitigate the respondents' chance of misunderstanding the items. Also, a range of preventative measures was applied. The survey findings were downloaded for analysis into SPSS (Statistical Package for the Social Sciences) version 25.0. Descriptive statistics (means, standard deviations, percentages, frequencies) were calculated. The analysis of multiple responses was performed to draw the percentages of responses and cases as well as the trends. Since the variables were nominal and ordinal, non-parametric tests were used. Furthermore, the Spearman correlation coefficient was calculated to evaluate the association between the respondents' variables and socio-demographic characteristics. Results were considered significant for p < 0.05.
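As a minimal illustration of the analysis pipeline described above (the authors used SPSS, not Python), the Likert coding and a Spearman correlation test could be reproduced as follows; the column names, the 0-6 scale mapping applied to a behaviour item, and the example responses are hypothetical.

```python
# Illustrative sketch (not the authors' code): Likert coding, descriptive statistics,
# and a Spearman correlation between age group and a food-related behaviour item.
import pandas as pd
from scipy.stats import spearmanr

likert_map = {"never": 0, "first time": 1, "much less": 2, "slightly less": 3,
              "about the same": 4, "moderately more": 5, "much more": 6}

# Hypothetical responses: ordinal age group and one shopping-behaviour item.
df = pd.DataFrame({
    "age_group": [1, 2, 2, 3, 4, 3, 1, 4, 2, 3],          # e.g. 1 = 18-24 ... 4 = 65+
    "shopping_trips": ["much less", "slightly less", "about the same",
                       "much less", "much less", "slightly less",
                       "about the same", "much less", "moderately more",
                       "slightly less"],
})
df["shopping_trips_score"] = df["shopping_trips"].map(likert_map)

# Descriptive statistics (means, SD) and response percentages for the item.
print(df["shopping_trips_score"].describe())
print(df["shopping_trips"].value_counts(normalize=True) * 100)

# Non-parametric association between age and behaviour (significant if p < 0.05).
rho, p = spearmanr(df["age_group"], df["shopping_trips_score"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```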
RESULTS AND DISCUSSION

Socio-Demographic Characteristics of the Study Participants

Table 2 describes the socio-demographic profiles of the survey participants. The findings indicate that 54.41% of the respondents are men, 41.76% are married with children, and 39.41% earn the same revenue as most other households in Morocco. In terms of occupation status, 63.82% of the interviewees were working (full-time or part-time), and 20.29% were students. In addition, most of the respondents were middle-aged (46.18% of them were 25-45 years old). The sample was well-educated, with 76.18% of respondents holding a master's degree or a Ph.D. Furthermore, 13.24% of the cohort lost their employment or had their salary reduced because of COVID-19 (Table 2).

Impact of COVID-19 on Food-Related Behaviors

As observed in several countries worldwide (72, 73), COVID-19 has transformed most participants' food shopping and procurement behavior in Morocco (Table 3). Firstly, 27.06% of the respondents indicated that they increased their purchase of local food products. Indeed, the consumption of local Moroccan food items has risen owing to food safety issues. As a consequence of the COVID-19 pandemic, concern over the transmission of the virus exists, and customers increasingly want to know where the food they purchase comes from. Consumers' irrational assumptions that foreign food items might pose a safety danger led to a preference for local food products. Also, 20.59% of the respondents stated that they ordered more groceries online. Meanwhile, 17.35% of respondents indicated that they ordered more food online from a full-service or fast-food restaurant or through a delivery application (Table 3). Moreover, since shopping in a supermarket carries a perceived risk, participants' buying patterns have changed. On the one hand, more respondents purchase their groceries online to escape crowded shops, thereby accelerating food retailers' digitization (74). Responding to this growing demand, some Moroccan hypermarkets increased their delivery capacity and launched their e-commerce platforms for the first time. Marjane, the Moroccan hypermarket, launched a delivery app and partnered with the Spanish distribution platform Glovo (75). Also, in April 2020, Carrefour launched, in partnership with Jumia Food, a free home delivery service in major Moroccan cities (76). However, these channels did not grow as much as may have been the case in other countries. Several significant hurdles limit the development of online shopping in Morocco, such as the lack of online payment systems and low Internet penetration (77). Secondly, 52.65% of the participants said they had stocked up on food since COVID-19 became severe. Indeed, just before the confinement in March 2020, a rush toward supermarkets was observed in Morocco, and demand for flour and grains jumped. Moroccans were panicking over the Coronavirus and stocking up in droves, leading to a surge in food prices. Despite assurances from the government and stores that the food supply system could satisfy the extraordinary hoarding caused by the epidemic, pasta, wheat, and salt shelves were depleted (78). Thirdly, 54.71% of the participants specified that they go shopping less than usual, and 35.29% indicated that they buy more than usual on each trip to the grocery store (Table 4). Indeed, since shopping in person in a supermarket carries a perceived risk and induces fears of being close to others, participants' buying patterns have changed.
Despite the many protective measures and regulations applied by supermarkets (e.g., the installation of protective barriers, frequent cleaning, provision of masks, gloves, and disinfectants), for many consumers, shopping at a grocery store poses an evident danger (79, 80). As observed in several countries, most study participants reduced the number of shopping trips and were shopping less than usual, buying more on each trip to diminish store visits and limit their perceived risk of exposure to COVID-19 (Table 4). The Spearman correlation test results revealed that age significantly affected some behaviors and habits (Table 5). For example, age had a very significant effect (p < 0.05) on the number of shopping trips. Aware that the risk of severe illness with COVID-19 increases with age, older participants go shopping less than usual, buying more on each trip to limit their perceived risk of exposure to COVID-19. Also, older participants, concerned for their families and the long-term outlook, stocked more food than younger ones (Table 5). Panic buying and stockpiling were shaped by several factors, including socio-demographic factors (e.g., culture, income, gender, and personality). Household preferences and attitudes and product categories may also be differentially affected over time (81). Overall, individuals in different age groups have responded differently to the health crisis (82). Also, the results highlight some changes in food-related activities. Indeed, 38.53% of participants reported eating more with family members, 63.53% were cooking and making food even more often, and 59.12% spent a lot of time cooking. Furthermore, 27.36% made fewer easy meals (e.g., instant foods, frozen foods). Additionally, with the closure of the HORECA channel (hotels, restaurants, and catering), consumers have moved from out-of-home to home-based eating, with more cooking and baking at home. Trying to recreate the restaurant experience, many consumers rediscovered home cooking. Indeed, it is much easier to find the time for these activities and to try new recipes during confinement (83-85). Moreover, with restaurants, coffee shops, and cultural institutions closed, entertainment options became restricted, and eating with family and cooking became new entertainment activities.

CONCLUSION

The health and economic crisis triggered by the COVID-19 pandemic has had disorderly societal, economic, and psychological effects on food behaviors and consumption patterns, contributing to an impending global food emergency. However, impacts differ from country to country, and national data are essential for research and comprehension. In this context, this study examines the effects of the COVID-19 pandemic on food behavior in Morocco based on a cross-sectional survey involving 340 participants. The survey results show that the COVID-19 pandemic, and the preventive actions taken by the Moroccan government, affected food-related behaviors and habits. Undoubtedly, there have been apparent modifications in the way participants shop and interact with food. To our knowledge, this is the first study about the perceptions of the impacts of COVID-19 on food behaviors in Morocco.
Given that the COVID-19 pandemic is new and it is uncertain how long it will last, data and knowledge are needed to assess its effect on food consumption patterns. In addition, since there is no widespread literature on contemporary pandemics apart from SARS, studying COVID-19 will guide comprehension, and even prediction, in shock and crisis research (18). This and other future research will serve as a foundation for organizational and government readiness for future shocks and pandemic outbreaks. The sample bias is the main limitation of this study. The survey participants were chosen at random and recruited voluntarily. As a self-administered questionnaire, it was performed by volunteers who were not compensated. Therefore, only persons driven by an interest in the topic participated in the survey (cf. self-selection of the sample). Consequently, our sample does not reflect the general population in Morocco. For example, highly educated individuals were overrepresented in our sample. In general, low-educated individuals are frequently underrepresented in surveys (86). Furthermore, online surveys tend to exclude those who are web-illiterate as well as the elderly. More specifically, in the NENA region, poor households and informal workers are the least likely to be heard through online surveys. Inadequate infrastructure, low computer literacy, and lack of money to purchase a device or an internet subscription can limit their access to the Internet, resulting in less participation in online surveys (87). The limitations mentioned above are frequent in Computer Assisted Web Interviewing (CAWI), which is now commonly applied in surveys (88-90). This bias limits the ability to generalize the survey findings to the whole Moroccan population. However, because of the COVID-19 epidemic, online surveys can collect data from a distance, which is a distinct benefit when social distancing is necessary and face-to-face research is problematic. So far, the scholarly literature on the impacts of the COVID-19 pandemic on food systems and consumer behavior has been geographically biased, focusing on Western and Southern Europe, North America, and China (30). In contrast, developing countries in general, and those of the NENA region in particular, such as Morocco, have been overlooked. To the best of our knowledge, this is the first study in Morocco on consumers' perceptions of the effects of COVID-19 on food behaviors. Given that the COVID-19 pandemic is new and its duration is unknown, data and knowledge are required to assess its impact on food consumption patterns. The findings of the paper confirm that the final results of COVID-19 will most likely differ from country to country, depending not only on the epidemiological situation but also, among other factors, on the baseline situation and shock resilience (91). As highlighted by El Bilali (31), "The pandemic immediate impacts vary from a country to another depending, inter alia, on the epidemiological situation, lockdown and confinement measures, pre-COVID socio-economic development level" (p. 59). This and other future research will serve as a foundation for organizational and government readiness for future shocks and pandemic occurrences. The study's findings are also helpful for developing evidence-based policy in Morocco and the NENA region as a whole during the post-pandemic recovery period. Finally, many researchers have questioned whether these changes in consumers' behaviors and diets are permanent or transient.
However, since the COVID-19 infection is new and still unfolding, and the channels of influence are multiple and interconnected globally, the precise future consequences for food habits are unknown. Further, the pandemic is far from over, and some countries still face significant epidemics, but even those that currently control the virus fear upcoming waves, especially with the spread of more contagious variants (32). The possibility of new infections and waves could result in new lockdowns or the continuation of the current tight measures over the coming months, contributing to more disruption of economic activity and food-related activities.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Western Michigan University Human Subjects Institutional Review Board (HSIRB). The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

TB and HE: conceptualization, writing-original draft preparation, and project administration. TB, HE, and CB: methodology and formal analysis. CB: software, validation, and data curation. AA and SA: investigation. TB, HE, CB, AA, and SA: writing-review and editing. TB: funding acquisition. All authors have read and agreed to the published version of the manuscript.

FUNDING

The publication of this article was funded by the Qatar National Library.
4,663.4
2021-09-27T00:00:00.000
[ "Environmental Science", "Sociology", "Economics" ]
Hairpin Windings for Electric Vehicle Motors: Modeling and Investigation of AC Loss-Mitigating Approaches

The hairpin winding configuration has been attracting attention as a solution to increase the power density of electric vehicle motors by enhancing the slot-filling factor. However, this winding configuration brings high AC losses during high-speed operation, and new approaches are required to tackle this challenge. This paper considers reducing AC losses by proposing two main methods: correct transposition of conductors in parallel paths, and enhancing the number of conductor layers in a slot. First, the proper connection of conductors in parallel paths is considered, and the essential rules for this purpose are described. Next, the paper uses a numerical approach to deal with the effect of incorrect conductor transposition in winding paths on generating additional AC losses due to circulating currents. Finally, the impact of the number of conductor layers on the mitigation of AC losses is also discussed in detail. According to the results, by increasing the number of layers, the ohmic losses in the layer near the slot opening dramatically decrease. For instance, the ohmic losses in the layer near the slot opening of the eight-layer setup were 82% less than in the two-layer layout.

Introduction

Due to the destructive effects of combustion engines on global warming and the environment, there is an increased demand for sustainable and green transportation [1,2]. Therefore, due to intensive research and development in transportation electrification for all applications, hybrid and pure electric vehicles (EVs) have been proposed and developed [3,4]. The demanded metrics mainly concentrate on maximizing power and torque density and minimizing the weight of electric motors [5,6].

The requirement for maximizing electric motors' power and torque density has led to a revolution in winding technology [7]. The key enabling point is to find a way to increase the slot-filling factor. In earlier investigations, research studies mainly concentrated on increasing the traditional round-wound wire filling factor by proposing an orthocyclic, layered winding methodology and the pressing tooth-wound technique [7]. However, none of the aforementioned methods achieved the desired result; the orthocyclic and layered winding methodology requires specific and expensive types of equipment and machinery, and the pressing tooth-wound technique can only be implemented for electric motors with concentrated windings [7].

The practical option for maximizing electric motors' power and torque density for traction applications and overcoming the drawbacks mentioned earlier is replacing the traditional round-wound wires with rectangular hairpins [1,8]. This winding configuration has various advantages over traditional round-wound winding, for instance, less complexity, a higher possibility of mass production, maximized current density, shorter end windings, and lower DC copper losses [1,9]. More recently, hairpin winding technology has gained attention as the standard solution for high-reliability motor drives for hybrid electric vehicles [9-11]. As illustrated in Figure 1, this winding technology has been applied to hybrid electric vehicles such as the Toyota Prius and Chevrolet Volt/Bolt [8,12].
However, this solution leads to some drawbacks and challenges, for instance, low motor-design flexibility arising from a fabrication perspective [1,8,9,11,13]. Furthermore, the traction motor for electric vehicle applications has a maximum speed of above 12,000 rpm and, depending on the number of poles, its fundamental frequency reaches more than 1 kHz. Due to the skin and proximity effects, the current density distribution inside the conductors is non-uniform. Therefore, these types of winding generate high AC losses [1,10,14,15]. Improvements to the hairpin design should be aimed at mitigating AC losses to enhance the reliability and the working speed limits of electric motors for transportation electrification purposes. The main objective of this paper is to investigate ways to mitigate AC losses. For this purpose, two different methods are investigated: the first relies on the correct transposition of rectangular conductors in parallel paths to reduce recirculating currents, and the second is based on increasing the number of conductor layers in slots to reduce the cross-section of each conductor, leading to a reduction in the impact of skin and proximity effects.

Basic Design Rules of Hairpin Winding and Case Study

The working voltage of traction motors has increased, and in some cases, it reaches about 800 V.
Furthermore, these motors are implemented for high-current and high-speed applications. Hence, a winding design with parallel paths is essential. In this situation, the conductors' correct transposition into parallel paths is essential for eliminating potential circulating currents between parallel branches that generate additional AC losses. This section explains the fundamental design principle for hairpin windings and the proper transposition and connection of series conductors in the wire path. In addition, the effect of incorrect transposition of conductors in the wire path on additional AC losses is also considered.

Hairpin Winding Design Principle

Layout plots are an effective way to explain and provide an intuitive view of hairpin connections. For example, Figure 2a,b illustrate two conceivable ways of designing a three-phase, six-layer hairpin winding layout for a 36-slot, four-pole stator. The number of stator slots per phase (q_m) and the number of slots per pole per phase (q) are calculated as [16]:

q_m = N_s / m, (1)

q = N_s / (2pm), (2)

where N_s is the number of stator slots, m is the number of phases, and p is the number of pole pairs. According to Equations (1) and (2), the number of stator slots per phase is q_m = 12, and the number of slots per pole per phase is q = 3.
Figure 2a shows the connection of the winding layers (n_w) with one parallel path per phase (N_aa = 1), and the number of series turns per phase (N_st) is calculated as in [16] (see the illustrative sketch after this subsection). In this specific design, the number of series turns per phase is N_st = 36. The winding connections start from layer a of slot 1 (S1-a), and during the first revolution (in each revolution, the number of turns is 2p), they continue to cover conductor layers a and b with full pitch. After completing the first revolution, S28-b is connected to S2-a using a jumper short pitch to change the phasor (slot). After finalizing the second revolution, another jumper changes the phasor and connects S29-b to S3-a. After completing the third revolution, another short-pitch jumper must change the layer level from a and b to c and d and connect S30-b to S1-c. This process continues for three successive revolutions to change the phasor, and after completing three multiple revolutions, the layer levels are changed; finally, the winding is finished by going back to S30-a.

Hairpin windings count as a lap-winding topology and follow these winding format connection rules [8]. According to the above hairpin layout, the principal rules for the number of stator slots and hairpin winding are as follows [8,17]:

• The number of slots per pole pair must be greater than one to provide the possibility of generating electro-motive force (EMF);
• The number of slots per phase per parallel path must be an integer;
• The number of pole pairs per parallel path is equal to 2k, where k is an integer;
• The number of slots per phase divided by the greatest common divisor of the number of slots and the number of pole pairs (GCD(slots, pole pairs)) must be an integer;
• The number of conductors in the slot must be even.

If the number of parallel paths per phase is greater than one (N_aa > 1), the following two rules, titled transposition rules, are essential in a hairpin winding configuration:

• Layer arrangement rule: in the presence of parallel paths, the wires belonging to one path should be placed in all layers of the slot to provide the same inductances for all parallel paths;
• Slot-per-pole arrangement rule: to make sure that all parallel paths generate the same EMF, the wires belonging to one path should be distributed over all slots per pole per phase.
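As a minimal sketch of the quantities discussed in this subsection, the snippet below evaluates q_m and q from Equations (1) and (2) and a series-turns expression chosen so that it reproduces the quoted values (N_st = 36 for one parallel path, 18 for two); since the paper's Equation (3) is not reproduced above, that expression is an assumption, and the rule checks cover only some of the listed conditions.

```python
# Minimal sketch (not the paper's code) of the hairpin winding quantities above.
# The series-turns formula is an assumed form consistent with the stated values.
from math import gcd

Ns, m, p, n_w = 36, 3, 2, 6          # slots, phases, pole pairs, conductor layers

q_m = Ns / m                          # stator slots per phase, Eq. (1)
q = Ns / (2 * p * m)                  # slots per pole per phase, Eq. (2)

def series_turns(N_aa):
    """Series turns per phase for N_aa parallel paths (assumed expression)."""
    return q_m * n_w / (2 * N_aa)

def check_basic_rules(N_aa):
    """A few of the listed feasibility rules, expressed as boolean checks."""
    checks = [
        Ns / (2 * p) > 1,                          # slots per pole pair > 1
        (q_m / N_aa).is_integer(),                 # slots per phase per path is integer
        (Ns / m / gcd(Ns, p)).is_integer(),        # slots/phase divisible by GCD(slots, pole pairs)
        n_w % 2 == 0,                              # even number of conductors per slot
    ]
    return all(checks)

for N_aa in (1, 2):
    print(f"N_aa = {N_aa}: q_m = {q_m:.0f}, q = {q:.0f}, "
          f"N_st = {series_turns(N_aa):.0f}, rules OK = {check_basic_rules(N_aa)}")
```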
Figure 2b illustrates the winding transposition for two parallel paths (N_aa = 2): the paths are represented by black and red arrows. In this design, the number of series turns per phase is N_st = 18. Accordingly, one path starts from S01-a, and the other starts from S01-f. Correctly applying the transposition rules, namely the layer arrangement and slots-per-pole-per-phase rules, is essential in this condition. To increase the resolution, conductors included in path one are displayed in orange, and those belonging to path two are displayed in blue. According to Figure 2b, each path's wires are located in all slot layers, providing similar inductance, and all slots per pole of that phase contribute to balancing the EMF. The concept of winding transposition is critical in windings with parallel paths. Therefore, a 33 kW synchronous reluctance motor (SynRM) with a 36-slot, four-pole, six-layer layout, 400 V, and a base speed of 4500 rpm was selected as a case study to check and validate the correct transposition (analytical winding configuration) of the 36-slot layout with two parallel paths presented in Figure 2b using Motor-CAD. When using the SynRM rotor, the investigation procedure is more straightforward, as the impact of PM eddy current losses does not interfere when considering AC losses to check the performance of the hairpin winding.

For this purpose, the SynRM for EV applications, with the parameters presented in Table 1, is defined in Motor-CAD. Furthermore, Motor-CAD provides an option titled "custom" in the winding pattern table. By selecting this option, the designer can manually implement a hairpin configuration and define the conductors' transposition. Hence, as seen in Figure 3a, the winding transposition is defined as a table on the winding pattern page. In addition, the slot coil arrangement of phase one and the linear hairpin winding patterns are shown in Figure 3b,c, respectively. Finally, Figure 3d shows the phasor diagram of the three phases of the SynRM. The three phases are displaced from each other by 120 degrees, and it can be concluded that the proposed winding pattern is correct.
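A conductor-to-path assignment of the kind entered in the Motor-CAD winding table can also be checked programmatically. The sketch below is not Motor-CAD's data format; it uses a hypothetical (slot, layer) to path mapping purely to illustrate how the two transposition rules can be verified.

```python
# Illustrative sketch: checking the two transposition rules on a toy assignment
# of (slot, layer) positions to parallel paths. The assignment is hypothetical.
from collections import defaultdict

layers = ["a", "b", "c", "d", "e", "f"]

# Hypothetical one-phase assignment with two parallel paths (1 and 2).
assignment = {}
for i, slot in enumerate([1, 2, 3, 10, 11, 12]):        # example slots of this phase
    for j, layer in enumerate(layers):
        assignment[(slot, layer)] = 1 if (i + j) % 2 == 0 else 2

def check_layer_rule(assignment):
    """Each path must occupy every layer level, so path inductances stay equal."""
    layers_per_path = defaultdict(set)
    for (slot, layer), path in assignment.items():
        layers_per_path[path].add(layer)
    return all(occupied == set(layers) for occupied in layers_per_path.values())

def check_slot_per_pole_rule(assignment):
    """Each path must be spread over all slots per pole per phase, so EMFs stay equal."""
    all_slots = {slot for (slot, _layer) in assignment}
    slots_per_path = defaultdict(set)
    for (slot, _layer), path in assignment.items():
        slots_per_path[path].add(slot)
    return all(occupied == all_slots for occupied in slots_per_path.values())

print("layer arrangement rule satisfied:", check_layer_rule(assignment))
print("slot-per-pole rule satisfied:", check_slot_per_pole_rule(assignment))
```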
Additional AC Losses

Failure to follow the transposition rules leads to significant circulating currents, thus increasing AC losses. A quarter of the SynRM is modeled to reduce the computational burden, using the finite element analysis (FEA) approach to investigate the effect of the incorrect transposition of rectangular conductors on additional AC losses. For this purpose, the 2D FE commercial software package Magnet from Mentor Graphics is used. Figure 4a shows the geometry of a quarter of the SynRM with the names and positions of phases and conductors. Figure 4b,c illustrate the proposed correct-transposition and incorrect-transposition equivalent circuits, respectively. These circuit models include each conductor's details, which permits access to each conductor to find its ohmic losses, including the summation of AC and DC copper losses. In the incorrect-transposition circuit, the connection of conductors A3 (Slot A1, Conductor A3) and A4 (Slot A1, Conductor A4) is changed. Both models were solved using the time-harmonic approach over the frequency range from 1 Hz to 1 kHz. Figure 5a,b show the FEA results of correct and incorrect winding transposition, respectively. The comparison of the FEA models, especially for the A3 and A4 conductors, shows that the current density distribution in A3 and A4 is significantly changed. Moreover, Figure 5c shows the variation of ohmic losses versus frequency for A3 and A4, for the correct and incorrect models.

As seen in Figure 5c, the AC losses in A3 and A4 change dramatically. For instance, in the incorrect-transposition model, the ohmic loss of conductor A3 at a frequency of 1 kHz rises to about 88% more than its value in the correct configuration. In addition, the ohmic loss behavior of both conductors is entirely changed in the incorrect-transposition model. For example, as seen in Figure 5c, for the incorrect case, the ohmic losses of both conductors rise rapidly, while for the correct model, the rate of increase of the ohmic losses declines after a particular frequency.
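The loss penalty of unequal parallel paths can also be seen in a much simpler lumped-parameter picture than the FEA model used here: if incorrect transposition leaves the two branches with different leakage inductances, the phase current divides unevenly and the total I²R loss rises for the same phase current. The EMF imbalance that additionally drives a circulating current is not modeled in this sketch, and all numerical values below are arbitrary illustrative numbers.

```python
# Minimal lumped-circuit illustration (not the paper's FEA model) of why unequal
# branch impedances caused by incorrect transposition raise ohmic losses.
import numpy as np

f = 1000.0                      # electrical frequency, Hz
w = 2 * np.pi * f
I_phase = 100.0                 # total phase current (A, rms)

R = 10e-3                       # branch resistance, ohm (same for both branches)
L1, L2 = 50e-6, 50e-6           # correct transposition: equal branch inductances
L1_bad, L2_bad = 40e-6, 60e-6   # incorrect transposition: unbalanced inductances

def branch_losses(La, Lb):
    Za, Zb = R + 1j * w * La, R + 1j * w * Lb
    # Current divider between the two parallel branches.
    Ia = I_phase * Zb / (Za + Zb)
    Ib = I_phase * Za / (Za + Zb)
    return R * abs(Ia) ** 2 + R * abs(Ib) ** 2

P_ok, P_bad = branch_losses(L1, L2), branch_losses(L1_bad, L2_bad)
print(f"balanced paths:   {P_ok:.1f} W")
print(f"unbalanced paths: {P_bad:.1f} W  (+{100 * (P_bad / P_ok - 1):.1f} %)")
```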
Number of Conductor Layers

The hairpin winding configuration has low motor-design flexibility and suffers from high AC losses. The high AC losses limit the motor's operating speed (frequency). Therefore, new winding techniques are required to mitigate the AC losses of hairpins and enhance the motor's operating speed (frequency). An effective way to mitigate AC losses is to increase the number of conductor layers in a slot. Thus, the conductor height and cross-section are reduced, and the skin and proximity effects are weakened.

In conventional machine design, the resistive losses of m-phase machines are computed as in [18]. The AC resistance of one phase (R_AC) is expressed as [19]:

R_AC = K_AC · R_DC, (5)

where K_AC is a dimensionless coefficient, R_DC is the winding DC resistance, and η is the characteristic height of the conductor, given by (6) in the case of a rectangular conductor [19,20]:

η = (h_c / δ) · sqrt(ε_l), (6)

where ε_l is the layer filling factor for a rectangular conductor, h_c is the conductor height, and δ is the skin depth.

According to (5), the challenging factor in the AC winding resistance calculation is the dimensionless coefficient (K_AC). Therefore, the layer conductor model is implemented to determine the dimensionless coefficient (K_AC) precisely [19,20], where l is the number of the layer, N is the number of conductors, and C_I(η) and C_II(η) are functions of η defined in [19,20]. From these, the AC loss factors for the various layers and for an entire slot are obtained, as in (10) and (11) of [19]. According to (6), (10), and (11), the AC losses are a function of the conductor height and the number of conductor layers.

A numerical approach is used to further study the effect of the number of layers on AC copper losses in the hairpin winding configuration. Since AC losses chiefly pertain to leakage flux in the slot, a single slot is modeled in the software to reduce the computational time. The time-harmonic solution is used with an electrical circuit fed by sinusoidal current sources (in this manner, a nominal peak current is imposed on the hairpin windings).

Figure 6 shows four different layer layouts in a single slot; two-, four-, six-, and eight-layer layouts are modeled and analyzed to further investigate the impact of the number of conductor layers on mitigating AC losses. To make these cases comparable regarding the placement in the slot, the conductor width and the slot filling factor are kept the same in all setups; therefore, the only variable parameter is the conductor height. Moreover, the distance of the last conductor from the slot opening is the same in all configurations. Therefore, the maximum current value for the six-layer conductor structure is 96.2 A, and the current values of the other setups are set in accordance with this value. The conductor dimensions and current value of each configuration are presented in Table 2. After the model construction, each model was run over a frequency range from 1 Hz to 1 kHz, in steps of 100 Hz, at a temperature of 120 °C.
The current density distribution for two setups, the four-layer and eight-layer layouts, is considered in the first step. Figure 7a,b show the current density development along the slot height (y-axis) for the four-layer and eight-layer layouts, respectively. Moreover, Figure 7c shows the numerical results in terms of the current density distribution inside a single slot with the eight-layer and four-layer layouts. Accordingly, due to the proximity effect, the current density distribution is uneven in each layer and is concentrated on one side of the conductors. In addition, the uneven current density distribution is significant in the layer near the slot opening due to the high leakage-flux density.

However, due to lower skin and proximity effects, the current density in the slot with an eight-layer layout is distributed more uniformly than in the slot with a four-layer layout, leading to lower AC losses in the eight-layer layout. Therefore, increasing the number of layers in hairpin windings leads to a more even distribution of current density in the conductors inside the slots and lower AC losses.

As previously mentioned, Magnet provides each conductor's ohmic loss value, which is the summation of the AC and DC copper losses. In Table 3, the influence of the number of layers on ohmic losses is evaluated for the different layers over a frequency range from 1 Hz to 1 kHz. Moreover, the AC loss factor (K_AC) is estimated as [7]:

K_AC = P_AC / P_DC, (12)

where P_AC and P_DC represent the ohmic loss and the DC copper loss, respectively.
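Equation (12) amounts to normalizing each simulated ohmic loss by its low-frequency (effectively DC) value, as described in the next paragraph. A minimal post-processing sketch could therefore look like the following; the loss values are made-up placeholders rather than the paper's simulation results.

```python
# Minimal post-processing sketch for Eq. (12): K_AC(f) = P_AC(f) / P_DC, where the
# 1 Hz solution is taken as the DC reference. Loss values are hypothetical.
freqs_hz = [1, 100, 300, 500, 800, 1000]
p_ohmic_w = [10.0, 10.4, 12.1, 14.9, 19.8, 23.5]   # hypothetical per-slot ohmic losses

p_dc = p_ohmic_w[0]                                 # 1 Hz ~ uniform current density
for f, p in zip(freqs_hz, p_ohmic_w):
    print(f"{f:5d} Hz: K_AC = {p / p_dc:.2f}")
```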
For calculating the AC loss factor (K_AC), the setups are first simulated at a frequency of 1 Hz, which gives a uniform current distribution in the conductors and represents the DC losses, and then the value of the AC loss factor for each operating frequency is evaluated using (12). Table 4 shows the AC loss factor for the various layer layouts over a frequency range from 1 Hz to 1 kHz. For ease of comparing the numerical results of the setups, the data for each configuration are plotted against frequency. This plot is expected to help show the effect of the number of layers on AC losses. Figure 8a shows the variation of ohmic losses with frequency for the four layouts. Accordingly, increasing the number of conductor layers leads to lower AC losses, such that the eight-layer setup has the lowest ohmic losses. However, the setup with two conductor layers does not follow the same behavior at frequencies above 300 Hz. Moreover, Figure 8b shows the ohmic, AC, and DC losses of all setups at a frequency of 1 kHz, clearly showing that the two-layer layout does not follow the same behavior. In addition, Figure 8c shows the ohmic losses in the layer near the slot opening at 1 kHz. The ohmic losses in the conductor near the slot opening for the eight-layer setup are significantly lower than in the other setups. This value is about 82% lower than for the two-layer setup and shows that the AC losses are reduced by increasing the number of layers and reducing the conductor height.
Bianchi and Berardi in [10,15] mentioned the same condition for the two-layer layout; however, they did not investigate the cause of this behavior. In the following, the impact of proximity losses in the different conductor layers of all the setups is investigated to clarify the reason for the two-layer setup behavior. Figure 9a,b show the current density distribution in the Z-direction and the flux-density distribution at 50 Hz and 800 Hz, respectively. Due to the proximity effect, the highest current density generated by the magnetic flux distribution is concentrated at the bottom of the conductors. This condition becomes severe in the conductor layer near the slot opening due to the significant magnetic flux concentration. As a result, for frequencies greater than 300 Hz (according to Figure 9b), the total active area of the two-layer setup is smaller than the active area of the other configurations. For instance, in the four-layer configuration, the unused area is restricted to the first layer at the top of the slot (approximately one-quarter of the total cross-section area), while the active area of the two-layer winding covers almost one-third of the total copper conductor area.

Conclusions

This paper considered hairpin winding as a solution for currently in-demand metrics, such as maximizing power and torque density and minimizing the weight of electric motors. This winding topology offers a high filling factor to maximize electric motors' power and torque density for traction applications; however, it suffers from high AC losses. Therefore, this paper's main focus was to investigate approaches to mitigate these losses. For this purpose, two methods were considered: the correct transposition of conductors in the wire path to protect the circuit from circulating currents, and increasing the number of conductor layers in a slot.

For the proper transposition of the conductors in parallel paths, the primary design principle and the rules for proper wire connections in the parallel paths were investigated using an analytical approach and then validated using Motor-CAD. In addition, examples of correct and incorrect transposition of conductors in parallel paths, with their effect on AC losses, were considered by employing numerical methods. According to the primary observation, the ohmic losses increased by about 88% in the numerical model with the wrong transposition.
The impact of the number of conductor layers on AC loss reduction was investigated in the next part. For this purpose, various setups with different numbers of conductor layers were modeled using a numerical approach. According to the numerical results, the number of conductor layers has a significant impact on the AC losses, in that setups with a higher number of layers exhibit lower AC losses; for example, the ohmic loss in the conductor near the slot opening was about 82% lower for the eight-layer setup than for the two-layer setup.

Figure 1. Replacing the conventional round winding with hairpin windings in various electric motors.

Figure 3. Modeling the hairpin winding configuration with six-layer conductors in Motor-CAD: (a) the table of transposition of windings, (b) coil arrangement of phase one, (c) linear view of the coil arrangement of phase one, and (d) phasor diagram of the windings.

Figure 4. (a) Geometry model of a quarter of the SynRM in Magnet software, (b) correct transposition circuit, and (c) incorrect transposition of conductors.

Figure 5. (a) FEA results of correct transposition of windings at 1 kHz, (b) FEA results of incorrect transposition of windings at 1 kHz, and (c) ohmic losses versus frequency in the A3 and A4 conductors with correct and incorrect transposition of windings.

Figure 6. Various setups for considering the effect of the number of conductors on AC losses: (a) two-layer, (b) four-layer, (c) six-layer, and (d) eight-layer.

Figure 7. Comparison of numerical results of the current density distribution contour plot at 1 kHz: (a) four-layer layout, (b) eight-layer layout, and (c) current density distribution diagram of both layouts.

Figure 8. (a) Numerical results of ohmic losses for various conductor layers, (b) DC, AC, and total ohmic losses of the layouts at 1 kHz, and (c) ohmic loss for the layer near the slot opening at 1 kHz for the various setups.

Figure 9. Current density distribution in the Z direction and the flux density distribution for the two-layer, four-layer, and eight-layer conductors at (a) 50 Hz and (b) 800 Hz.
Parameter (Symbol): Quantity
Pole pitch (τ): 106.2 mm
Machine stack length (L): 156.1 mm
Air gap thickness (g): 0.4 mm
Rotor outer diameter (Dro): 135.2 mm
Slot width (bs1): 6 mm
Slot active height (hs): 21.7 mm

Table 2. Dimensions and current values of the various configurations.

Table 3. Ohmic losses of the various setups.

Table 4. AC loss factor of the various layer layouts.
6,994.4
2022-11-04T00:00:00.000
[ "Physics" ]
PUBLIC FAMILY SPENDING, LABOUR PRODUCTIVITY, INCOME INEQUALITY AND POVERTY GAP IN THE GROUP OF SEVEN COUNTRIES: EMPIRICAL EVIDENCE FROM PANEL DATA Purpose. Comparable data on the distribution of family income provide a reference point for determining the economic performance of any country and an opportunity to assess the effects of income inequality and of poverty drivers that are either country- or region-specific. This study analysed the effectiveness of composite indices of public spending on family benefits, labour productivity, macroeconomic performance indicators and moderating factors in reducing income inequality and the poverty gap in the Group of Seven (G7) countries from 1980 to 2019. Methodology. The study employed a fixed effects least squares regression model in a panel environment within the framework of empirical econometric methodologies. The composite indices comprised public spending on family benefits in cash and in kind, unemployment allowance payments, tax on personal income, labour productivity, the harmonised unemployment rate, the consumer price index, the real GDP growth rate, GDP per capita and per hour worked, the fertility rate and trade. After graphical analysis of the data, the order of integration was determined via unit root tests. A Hausman test was carried out to choose between fixed and random effects models. Subsequently, the parameters of the models were estimated and evaluated for significance at the 0.05 critical level. INTRODUCTION Considerable reduction of income inequality and poverty, through people-centred fiscal policy thrusts and increased productivity and national output, has been one of the main objectives of the governments of most countries all over the world. Recent data show the global extreme poverty rate to have been 10.7 percent in 2012, that 12.4 percent of the world's population lived in extreme poverty in 2013, and that the number of people living below the international poverty line of $1.90 daily income decreased by 114 million (Perreira, Lalner and Sanchez-paramo, 2017). World Vision (2018) reports that about 25 percent of the world's population has moved out of extreme poverty since 1990, and that less than 10 percent now lives in extreme poverty, surviving on $1.90 or less a day. Historically, official poverty rates have differed across the Group of Seven (G7) countries over time. Official poverty rates in the United States were 14.8, 12.3 and 11.8 percent in 2014, 2017 and 2018, respectively (Semega et al., 2019). In Britain, the rates were 22.0 percent and 13.9 percent in 1990 and 2017, respectively. At the end of the 19th century, more than 25.0 percent of people in Britain lived at subsistence level, or even below it (The British Academy, 2018). The poverty rate in Canada was 12.4 percent in 2008, fluctuating by about 1 percentage point around 12.0 percent until 2015. In 2017, the poverty rate in the country was 8.7 percent (Statistics Canada, 2020). In France, 2 million people lived in extreme poverty, and recent data indicate that 14 percent of the population (8.8 million people) live in poverty (Komyati, 2019). Germany experienced a rising poverty rate, from 14.0 percent in 2006 to 14.5 percent in 2010, 15.1 percent in 2011 and 15.2 percent in 2012 (Kreft, 2014; CIA World Factbook, 2019). An accurate number or percentage of the population living in poverty in Japan is difficult to obtain because the country has no official poverty line. However, a regular employment status survey in 2006 showed that 8.2 percent of regular Japanese workers lived in poverty.
The poverty rates were estimated at 16.1 percent in 2013, with 15.7 percent of the population living in poverty (United Nations, 2017), and 15.7 percent in 2017 (Statistica, 2017). The percentage of the Italian poor population increased from 7.9 percent in 2016 to 8.4 percent in 2017, with poverty rates of 14.0 percent in 2016 and 27.7 percent in 2017 (Statistica, 2017; Instituto Nazionale di Statistica, 2018; Lu, 2018; Maio, 2018). Like poverty rates, the poverty threshold differs across the G7 countries. Poverty thresholds in the countries were: $61,372 in the United States in 2017; 60 percent of the median United Kingdom household income, or £25,000, in Britain (The British Academy, 2018); in Canada, 13.0 percent, determined as household after-tax income below 50 percent of the median after-tax income (Statistics Canada, 2019); 60 percent of median revenue in France (Komyati, 2019); and 2,099 euros in Germany, where the trend poverty line is anchored on net income (Kreft, 2014; CIA World Factbook, 2019). Though Japan has no official poverty line, household mean net-adjusted disposable income of US$23,458, which exceeds the OECD countries' average of US$22,378, serves as the proxy (Lu, 2018); but at a poverty threshold of 1,676.54 euros (Instituto Nazionale di Statistica, 2018; Maio, 2018), Italy is below the OECD countries' average. Comparable data on the distribution of household disposable income provide a reference point for determining the relative position of any country on the global economic development map, as a basis for assessing the effects of income inequality and of factors that are either country- or region-specific. Governments could learn from the success of palliative measures implemented in other countries to reduce income disparities and poverty. Arguably, achieving comparability in this context may be constrained by differences in national practices, especially in terms of concepts of inequality measures, such as the Gini coefficient, and statistical sources. For instance, Heshmati (2004) used the World Income Inequality Database (WIID), also known as the United Nations University database (UNU-WIDER), to provide evidence suggesting that inequality in disposable income is declining over time. But the significant heterogeneity at regional and development levels over time casts serious doubt on Heshmati's evidence. For instance, estimates by the International Labour Organisation (2016) show that more than 300 million people in developed countries lived in poverty in 2012. Moreover, widening inequality has accompanied rising incomes around the world, just as the poverty level is on the increase in the developed countries (United Nations, 2016). Therefore, poverty is also the experience of the developed countries. Though global data suggest that income inequality across households has risen in many countries, some estimates show that it has narrowed across the world as a whole because the incomes of developing and developed regions have been converging (United Nations, 2016). This shows that, despite the growth in income, widening inequality still persists. For instance, although China has remarkably reduced the incidence of poverty in a short period of time, income inequality remains a serious challenge that requires greater effort over longer time horizons (Liu, 2017). Fiscal policies that engender equity in education reduce income inequality by reducing the earnings disparity among the population (OECD, 2012).
Sources of income inequality and poverty, such as low labour productivity, high fertility rates and proportional income tax, may exacerbate the poverty gap within and across regional groupings, especially in the event of a negative-externality economic shock. For example, Philpott (2013) notes that the productivity gap between the United Kingdom and the other G7 countries has widened to its largest in 20 years, with the tendency to increase in the years ahead. Recent data (see the Appendix) show that only the United States ranked among the five most productive countries in the world in 2015 (Johnson, 2017). Hitherto, substantial studies have concentrated on productivity in general, and labour productivity in particular; only a few analysed income inequality and poverty in relation to either economic growth (Charles, 1982; Blank and Blinder, 1986; Blank and Card, 1993; Khan et al., 2014; Liu, 2017) or labour productivity (Chinbui et al., 1993; Cimoli et al., 2017) in the context of regional groupings of either developing or developed countries. Therefore, this study examines the effectiveness of family-centred public spending and some other macroeconomic indices in reducing income inequality and the poverty gap in the Group of Seven (G7) countries, with reference to the forty-year period 1980-2019. The empirical statistical results provide the basis for logical conclusions and appropriate policy implications. The remaining sections of this paper are: literature review, methodology employed for the analysis, presentation of results and discussion of findings, and conclusion and policy implications for the G7 countries. 2.1. Conceptual and Theoretical Issues In the literature, the different indices used to measure income inequality among individuals or households include: (1) The Gini coefficient index, which shows the extent to which the income distribution among households or individuals in an economy deviates from a perfectly equal distribution. It compares the cumulative proportions of the population against the proportions of income they receive. Its value ranges from 0 (perfect equality) to 1 (perfect inequality); the more the coefficient tends to 1, the greater the inequality, and vice versa. (2) The S80/S20 index, which is the ratio of the average income of the twenty percent richest to that of the twenty percent poorest people in the population of a given country. (3) The P90/P10 index, determined as the ratio of the upper bound value of the ninth decile (the 10 percent of people with the highest income) to that of the first decile (the 10 percent of people with the lowest income). (4) The P90/P50 index, which shows the ratio of the upper bound value of the ninth decile to the median income, i.e., the income of the fifty percent of the population at the middle income level. (5) The P50/P10 index, which indicates the upper bound value of the fifth decile (the median income) relative to the upper bound value of the first decile. (6) The Palma ratio, which shows the share of all income received by the ten percent of people with the highest disposable income relative to the share of all income received by the forty percent of people with the lowest disposable income in the population of a given country. The productivity index is expressed as the ratio of a country's real gross domestic product (RGDP) to the average number of hours (full- and part-time) all employed people work annually (Johnson, 2017). The poverty gap measures the ratio or proportion by which the mean income of the poor falls below the poverty line.
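As a concrete illustration of the measures just listed, the sketch below (not the authors' code) computes the Gini coefficient, a percentile ratio, the Palma ratio, and the poverty gap as defined above from a vector of disposable incomes; the log-normal sample and the 60%-of-median poverty line are illustrative assumptions.

```python
import numpy as np

def gini(incomes):
    """Gini coefficient: 0 = perfect equality, 1 = perfect inequality."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    # Equivalent to the Lorenz-curve area definition for a discrete sample.
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def percentile_ratio(incomes, hi, lo):
    """Ratios such as P90/P10, P90/P50 and P50/P10."""
    return np.percentile(incomes, hi) / np.percentile(incomes, lo)

def palma(incomes):
    """Income share of the top 10% relative to that of the bottom 40%."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    return x[int(0.9 * n):].sum() / x[:int(0.4 * n)].sum()

def poverty_gap(incomes, z):
    """Proportion by which the mean income of the poor falls below line z."""
    x = np.asarray(incomes, dtype=float)
    poor = x[x < z]
    return 0.0 if poor.size == 0 else (z - poor.mean()) / z

rng = np.random.default_rng(0)
y = rng.lognormal(mean=10, sigma=0.6, size=10_000)   # synthetic incomes
z = 0.6 * np.median(y)                               # 60%-of-median poverty line
print(f"Gini       : {gini(y):.3f}")
print(f"P90/P10    : {percentile_ratio(y, 90, 10):.2f}")
print(f"Palma      : {palma(y):.2f}")
print(f"Poverty gap: {poverty_gap(y, z):.3f}")
```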
The poverty gap provides an indication of the poverty level in a country, thereby helping to put the country's poverty rate in its proper context. As an indicator of the poverty level, it is measured for the total population, as well as for people within the age range of 18 to 65 years and for people over 65 years of age. Discussions on the ethical side of the concept have considered questions as to whether equality is desirable and fair, and what its appropriate level is (Sen, 1992). The modern approach to inequality and poverty measurement has definitional elements in the context of income, based on ethical concepts or other bases for the consideration of distributional comparisons (Deininger, 1996). The concepts are anchored on a set of assumptions that validate any specific ranking principle. In practice, income may be considered as wealth or expenditure. Substantial modern literature explains that income plays the role of a personal index of utility, usually articulated as nominal income normalised by an index of needs (Cowell, 2002). Stiglitz, Sen and Fitoussi (2009) consider income that is adjusted for publicly provided in-kind transfers to be the most comprehensive concept of household disposable income. This implies that the income of an individual is assumed to fall within some range that gives an exact economic definition of income. Under perfect competition, the wage distribution among workers is deemed to reflect marginal revenue products, which vary according to the workers' abilities. But the tenets of perfect competition are not consistent with inequality in disposable income (Liu, 2002). Naturally, therefore, this aspect of the research is not directly suggested by traditional economic theory. The income distribution vector contains the income of a given individual family member and determines the welfare of the family in terms of the amount of income available to it. Therefore, the welfare of the family is contextualised and suitably classified as belonging to either the poor or the rich class. The amount of money allocated to each class differs; so does the amount that may be invested in social resources (Marx, 1849) or allocated to finance public benefits, and family or household income inequality persists. This negates the realism that macroeconomic policies that deviate significantly from poor-family-palliative spending usually have far-reaching adverse effects on poor families' disposable income (OECD, 2012). In addition, a wide range of other factors determine family disposable income, some of which are articulated in the perceived linkage channels shown in Figure 1. Empirical Studies The available literature suggests a paucity of studies related to this aspect of research work in recent times. Based on regional panel data, Blank and Card (1993) investigated the connectivity among poverty, income distribution and growth in nine regions of the United States during the 1957-1991 period. The study found heterogeneous effects of poverty on income inequality and growth. The study showed that income inequality and poverty are closely related to conditions in the labour market. The failure of poverty rates to respond to robust GDP growth during the 1980s was due to the combination of slow productivity growth and widening wage inequality.
Though the study ignored the determinants of family disposable income, its findings are consistent with some earlier studies (Charles, 1977; Charles, 1982; Blank and Blinder, 1986; Slottje, 1989; Ruggles, 1990; Jargowski and Bane, 1991; Levy and Murnane, 1992) and contemporary studies (Blank, 1993; Chinbui et al., 1993; Card and Riddell, 1993). In the context of wage inequality, Liu (2002) investigated the net effects of relative deprivation and efficiency wages on labour productivity in Taiwan and South Korea. Based on Taiwanese data from 1979 to 1996 and Korean data from 1993 to 1996, the results indicated that relative deprivation has a highly negative effect on industrial productivity, while the effect of efficiency wages is not statistically significant. These results underscore the importance of relative deprivation and support the view that manufacturing firms must be concerned with the social legitimacy of their wage distribution if sustained high labour productivity is to be engendered. The literature provides some empirical evidence suggesting multiple linkage paths between labour productivity increases and poverty reduction. The linkages include price level instability, unemployment, barriers to technology adoption, initial asset endowments and constraints on market access, all of which inhibit the ability of the poorest to participate in the gains from labour productivity growth (Schneider and Gugerty, 2011). With annual panel data for 32 Sub-Saharan African (SSA) countries, Dhrifi (2014) estimated a simultaneous equation model that captured the interrelationship between agricultural labour productivity, technological innovation and poverty. The results showed a significant contribution of agricultural productivity to output growth and poverty reduction in the countries. Technological innovation had direct and indirect significant impacts on poverty. Khan et al. (2014) found that rural development and national income per capita have a negative association with poverty and income inequality, but a positive association with agricultural labour output growth in Pakistan. Also, FDI has a positive impact on income inequality and poverty. However, external debt is positively related to poverty and income inequality in rural Pakistan. Worthy of note is that health expenditures have positive relationships with poverty and inequality, an indication that the country's health reforms are intrinsically anti-poor. Cimoli et al. (2017) examined productivity in the contexts of social expenditure and income distribution in Latin America. The study showed that, though social expenditure and direct redistribution are crucial for improving income distribution, sustainable equality requires structural change. The authors provided evidence that both institutions and the production structure in Latin America failed to foster equality and thus engendered extremely high levels of inequality during the 1990-2010 period. Based on data for Korea in the post-World War II period, accessed from the WIDER inequality database, Heshmati (2004) investigated the linkage between inequality and some macroeconomic variables: growth, openness, wages, liberalisation and income redistribution. The results suggested declining income inequality over time, both in levels and in development.
Cervantes-Godoy and Dewbre (2010) reported that, while economic growth has a considerable poverty-reduction effect, the sector mix of growth matters substantially, with growth in agricultural incomes being especially important for poverty reduction in OECD countries. A survey by Ramos and Mann (2017) on a fiscal approach for inclusive growth as a strategy to reduce inequalities found that the G7 countries have been facing a lingering period of low growth and persistently lower incomes among the poorest. The evidence suggests that inequalities widened over the last two decades amid stagnating productivity growth. The survey emphasised the potential of fiscal policy to fundamentally shape the nexus between productivity and inclusiveness so as to engender the reduction of income inequality and poverty in the OECD and other countries. Therefore, it recommended, among other things, that the G7 governments revisit the tax and benefits system to provide enhanced incentives for labour market participation, encourage the creation of quality jobs in the formal economy, and provide incentives for skills development and lifelong learning; and that the countries strengthen their social protection systems, particularly in the areas of public spending policies directed at poor-family benefits. It is obvious that, except for Ramos and Mann (2017), the previous studies left out some relevant key variables in the linkages between poor-family-oriented fiscal policy thrusts and labour productivity, on the one hand, and income inequality and the poverty gap, on the other hand. For instance, the studies ignored the role of tax on personal income in shaping family adjusted disposable income. The studies also neglected the relevance of family fertility rates and other macroeconomic considerations, such as the real GDP growth rate and trade-driven external economic shocks. Therefore, the stimulus and innovative point of departure of this current research is the inclusion of these relevant key variables omitted by previous studies. This justifies the relevance of the study within the contexts of public family spending, labour productivity, income inequality and the poverty gap in the Group of Seven (G7) countries. Design, Data and Source We employed a panel EGLS regression model to analyse data for the Group of Seven (G7) countries, namely the United States of America, Britain (the United Kingdom), Canada, France, Germany, Japan and Italy. The data sets are proxy variables for poor-family-oriented fiscal policy thrusts and for the incidence of income inequality and the poverty gap in the G7 countries. For the policy variables, we considered public spending on family benefits in cash and in kind, unemployment allowance payments and tax on personal income. GDP per hour worked and labour productivity are the productivity variables, while the harmonised unemployment rate, consumer price index, real GDP growth rate, GDP per capita and trade are the relevant macroeconomic variables; the fertility rate and trade moderate the influence of the variables on income inequality and the poverty gap. The data span 40 years (1980-2019). Time series values of the data set were extracted from the OECD (2019) family database and the World Bank's (2019) World Development Indicators data bank. These sources have proven authoritative and reliable over the years. The variables, descriptions and sources are summarised in Table 1.
Specification of Models for Analysis We specified and estimated two models as the basis for determining the relative effectiveness of poor-family-focused public spending in reducing the intensity of income inequality and the poverty gap in the G7 countries. In the models, income inequality and the poverty gap are the respective dependent variables, while composite indices of poor-family-oriented fiscal policy transfers, labour productivity, macroeconomic performance indicators and a composite index of the moderator variables are the independent variables. Though most of the series are ratios or percentages, we transformed all of them into logarithmic form to bring all data to the same baseline and thus eliminate idiosyncratically induced outliers in the models (Wooldridge, 2006). This neutralises country-specific influence across the countries. We recognise that several factors shape the disposable income of the family or household. Therefore, we specified the aggregated analytic models as follows:

lniieqty_it = λ0 + λ1 lnΣfofpt_it + λ2 lnΣlpvty_it + λ3 lnΣmepi_it + λ4 lnΣmf_it + μ_it, (1)

lnpg_it = θ0 + θ1 lnΣfofpt_it + θ2 lnΣlpvty_it + θ3 lnΣmepi_it + θ4 lnΣmf_it + μ_it, (2)

where lniieqty and lnpg are indices of income inequality and the poverty gap, respectively. Σfofpt is the composite index of family-oriented fiscal policy thrusts, consisting of three fiscal policy indicators: public spending on family benefits in cash and in kind as well as unemployment allowance payments, the sum of which we classified as public spending on family transfers (PSFTs), plus tax on personal income (TOPI). Σlpvty is a composite index of labour productivity, whose components are GDP per hour worked (GDPPHW) and the labour productivity index (LP). Also, Σmepi is a composite index of macroeconomic performance, which incorporates the harmonised unemployment rate (HUR), the consumer price index (CPI) or inflation, real GDP growth (RGDPgr) and GDP per capita (GDPPC). Similarly, Σmf is a composite index of intra-country moderating factors, with the fertility rate (FTR) and trade (TRADE) as components. FTR moderates the demographic influence on family disposable income, while TRADE moderates the influence of the external sector, or exchange globalisation, on family disposable income. μ is the error term, assumed to satisfy white noise conditions. Disaggregating equations (1) and (2), we obtain the following:

lniieqty_it = λ0 + λ1 lnPSFTs_it + λ2 lnTOPI_it + λ3 lnHUR_it + λ4 lnCPI_it + λ5 lnRGDPgr_it + λ6 lnGDPPC_it + λ7 lnLP_it + λ8 lnGDPPHW_it + λ9 lnFTR_it + λ10 lnTRADE_it + μ_it, (3)

lnpg_it = θ0 + θ1 lnPSFTs_it + θ2 lnTOPI_it + θ3 lnHUR_it + θ4 lnCPI_it + θ5 lnRGDPgr_it + θ6 lnGDPPC_it + θ7 lnLP_it + θ8 lnGDPPHW_it + θ9 lnFTR_it + θ10 lnTRADE_it + μ_it, (4)

where λ0 and θ0 are the intercepts of the models, and λj (j = 1, 2, 3, ..., 10) and θk (k = 1, 2, 3, ..., 10) are the respective coefficients of the models to be estimated. The coefficient of each of the independent variables depicts the effect of the associated independent variable on the dependent variable. μ_it is the white noise error term. PSFTs, TOPI, HUR, CPI, RGDPgr, GDPPC, LP, GDPPHW, FTR and TRADE are as earlier defined. The data set consists of time series observations on the variables. Therefore, the stationarity properties of the series are ascertained so as to ensure stability and time invariance in the estimated relationships. The justification is that a non-stationary time series tends to yield spurious and inconsistent regression estimates, which are inappropriate for generalising to times other than the present (Engle and Granger, 1987). The data set is characterised by a small number of cross-section units (seven countries) and a relatively long period (1980-2019). We conducted a Hausman test to determine the appropriateness of either the fixed or the random effects model. Based on the result, we employed a period fixed effects model to estimate the parameters of the model via panel least squares.
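A minimal sketch of how the disaggregated model in equation (3) can be estimated with period fixed effects is given below. It is not the authors' E-Views procedure: it assumes a long-format pandas DataFrame `df` with one row per (country, year), and the column names are illustrative placeholders.

```python
import numpy as np
import statsmodels.formula.api as smf

# Regressor names here mirror the variable labels in equations (3)-(4);
# they are assumed column names, not the authors' dataset fields.
REGRESSORS = ["PSFTs", "TOPI", "HUR", "CPI", "RGDPgr",
              "GDPPC", "LP", "GDPPHW", "FTR", "TRADE"]

def fit_period_fixed_effects(df, dep="IIEQTY"):
    data = df.copy()
    # Log-transform to bring all series to the same baseline, as in the text.
    # Note: growth rates can be non-positive in practice, in which case a
    # level shift would be needed before taking logs.
    for col in [dep] + REGRESSORS:
        data["ln" + col] = np.log(data[col])
    rhs = " + ".join("ln" + c for c in REGRESSORS)
    # Period fixed effects via year dummies, mirroring the period FE model.
    formula = f"ln{dep} ~ {rhs} + C(year)"
    return smf.ols(formula, data=data).fit()

# res = fit_period_fixed_effects(df)          # Model 1: income inequality
# res_pg = fit_period_fixed_effects(df, "PG") # Model 2: poverty gap
# print(res.summary())
```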
This method is considered more suitable than the Generalised Method of Moments (GMM) estimator, which is suited to dynamic panel data models with fixed effects, a large number of cross-sections and short time series (Holtz-Eakin, Newey and Rosen, 1988; Arellano and Bover, 1995; Stock, 2007; William, 2008). We control for time-heterogeneous outlier effects across the countries by incorporating a period dummy in the estimation process, thereby controlling for country-specifics. This mitigates any unobserved problems of endogeneity among the dependent and independent variables, as well as time outlier effects across the countries. The a priori expectations are that the coefficients of lnTOPI, lnFTR, lnHUR and lnCPI would have a positive sign, indicating a positive response of income inequality (lnIIEQTY) and the poverty gap (PG) to a 1 percent change in the variables. On the other hand, the coefficients of lnPSFTs, lnRGDPgr, lnGDPPC, lnLPRODVTY, lnGDPPHW and lnTRADE would have a negative sign, showing that the response of income inequality (lnIIEQTY) and the poverty gap (PG) to a 1 percent change in the independent variables would be negative. We evaluated the statistical significance of the responses at the conventional 5% critical level. The expectations are summarised in Table 2. Graphical Analysis of the Data Series Graphical analyses of income inequality, the poverty gap, labour productivity, public spending on families in cash, and unemployment allowance payments during the 1980-2019 period are presented in Figures 2 to 7, respectively. It is obvious from Figure 2 that the G7 countries experienced a narrowing of income inequality during the period, except the United States, whose inequality widened between 1982 and 1984 and peaked at 3 percent. It is evident from Figure 4 that government cash spending on families in the G7 countries followed relatively fluctuating trends during the 1980-2019 period, and that over the period the US and Britain exhibited greater fluctuations (between 1.2 and 2.5 percent of GDP) than the other G7 countries. The graph of tax on personal income shows low fluctuations in the G7 countries during the 40-year period. It is evident from the line graphs that the tax-government ratios differed across the countries during the period. For instance, from 1981 to 2009, Canada had the greatest tax-government ratio, while the ratio was lowest in France and Japan throughout the period. From 1981 to 1997, the ratio was lower in France than in Japan, but this reversed between the two countries from 1998 to 2019. Time Series Properties of the Data Sets The stationarity (unit root) test results for the panel time series data set are presented in Table 3. Source: Authors' computations (2020), using E-Views version 10. Notes: LLC assumes a common unit root process. ** Significant at the 0.05 level. Trade moderates the influence of trade globalisation. The results in Table 3 show that the panel data series of the variables are integrated of order zero, I(0). Therefore, the panel EGLS estimation technique is appropriate for obtaining numerical values of the parameters of the models. 4.3. Hausman Test Result The result of the Hausman test is presented in Table 4. The result shows that, with 11 degrees of freedom, the Chi-square statistic has a p-value of 0.0002, which is less than the conventional 0.05 level. Therefore, the fixed effects panel model is appropriate.
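The Hausman statistic underlying the result in Table 4 can be computed from the coefficient vectors and covariance matrices of the two competing estimators. The sketch below (not the authors' E-Views procedure) shows the generic entity-effects form of the test using the linearmodels package; it assumes a MultiIndexed (country, year) `y` and `X` of the logged variables, and the paper's version applies the analogous comparison with period effects.

```python
import numpy as np
from scipy import stats
from linearmodels.panel import PanelOLS, RandomEffects

def hausman(y, X):
    """H = d' [V_FE - V_RE]^{-1} d over the common slope coefficients."""
    fe = PanelOLS(y, X, entity_effects=True).fit()
    re = RandomEffects(y, X).fit()
    common = [c for c in fe.params.index if c in re.params.index]
    d = (fe.params[common] - re.params[common]).values
    V = (fe.cov.loc[common, common] - re.cov.loc[common, common]).values
    H = float(d @ np.linalg.inv(V) @ d)
    dof = len(common)
    return H, dof, stats.chi2.sf(H, dof)

# H, dof, p = hausman(y, X)
# A p-value below 0.05, as reported (0.0002 at 11 degrees of freedom),
# rejects the random effects specification in favour of fixed effects.
```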
4.4. Results of the Fixed Effects Panel Least Squares Regression Estimates of the intercepts and coefficients, as well as the relevant evaluation statistics with p-values, for the panel EGLS regression models specified in equations (1) and (2) are presented in Table 5. Note: Significance is considered at the 95% confidence interval, or the p-value < 0.05 level. Source: Authors' computations (2020). Estimates of the Model 1 coefficients provide statistical evidence of varying responses of income inequality to the dynamics of the independent variables. Some of the coefficients have the signs expected a priori, while others are to the contrary. As the signs of the coefficients indicate, the percentage response of income inequality (IIEQTY) to a percentage change in tax on personal income (TOPI), fertility rate (FTR), consumer price index (CPI), productivity (PRODVTY), gross domestic product per hour worked (GDPPHW) and trade (TRADE) is consistent with the expectations, while the response to public spending on family transfers (PSFTs), the harmonised unemployment rate (HUR) and the real GDP growth rate (RGDPgr) is contrary to the expectations. The magnitudes of the coefficients, together with the p-values of the coefficient t-statistics, provide statistical evidence that some of the responses are negligible while others are not. The positive response of income inequality (IIEQTY) to public spending on family transfers (PSFTs) is negligible. For a 1 percent increase in the composite index of public spending on family transfers (PSFTs), tax on personal income (TOPI), fertility rate (FTR) and real gross domestic product growth (RGDPgr), income inequality (IIEQTY) responds with increases of 0.8, 4.5, 15.3 and 2.4 percent, respectively, with respective t-statistic p-values of 0.8305, 0.3610, 0.0622 and 0.1323, which show that the responses are negligible, or not statistically significant. But the response of income inequality (IIEQTY) to a 1 percent increase in the consumer price index (CPI) is a 2.6 percent increase, which is statistically significant (p-value = 0.0116). The response of IIEQTY to a 1 percent increase in the harmonised unemployment rate (HUR) is a 2.2 percent decrease, which is statistically negligible (p-value = 0.3920). Similarly, IIEQTY responds to a 1 percent increase in labour productivity (LPRODVTY) and gross domestic product per hour worked (GDPPHW) with decreases of 6.2 percent and 8.4 percent, respectively. For a 1 percent increase in gross domestic product per capita (GDPPC) and trade (TRADE), there are decreases of 1.5 percent and 21.3 percent, respectively, in income inequality (IIEQTY), each of which is statistically significant (p-values = 0.0320 and 0.0004). The negative coefficient of the period dummy, with a t-statistic p-value of 0.0043, shows that the percentage decrease in income inequality (IIEQTY) significantly exceeds its percentage increase in the context of time shocks. The F-statistic (3.4573), with a p-value of 0.0000, is statistical evidence that the joint percentage decrease in income inequality is statistically significant relative to the dynamics of poor-family-focused fiscal policy, labour productivity, macroeconomic performance and the moderating factors. Therefore, the composite indices induced a significant decrease in income inequality in the G7 countries during the 1980-2019 period. The adjusted coefficient of multiple determination (adjusted R-squared = 0.3056) shows that the independent variables considered in the model explain about 31 percent of the total variations in income inequality. This suggests that the unexplained proportion is attributable to factors outside the model.
The Durbin-Watson statistic (DW = 2.3193) indicates that, within the context of Model 1, the residuals are free from the problem of serial correlation. The estimates of the Model 2 coefficients show that the poverty gap responds differently to changes in poor-family-centred fiscal policy thrusts and the other explanatory variables in the model. The coefficients of PSFTs, FTR, HUR, CPI, GDPPC and TRADE are appropriately signed, while the coefficients of the other variables in the model are not consistent with the a priori expectations. The numerical values of the coefficients, with the associated t-statistic p-values, provide the statistical basis showing that the response of the poverty gap (PG) to a 1 percent increase in public spending on family transfers (PSFTs), tax on personal income (TOPI), the harmonised unemployment rate (HUR) and gross domestic product per hour worked (GDPPHW) is statistically negligible (i.e., not statistically significant). The evidence is that, for a 1 percent increase in PSFTs and TOPI, the percentage decreases in PG are 13.36 (t-statistic p-value = 0.2591) and 12.32 (t-statistic p-value = 0.4258), respectively, while the percentage increases for HUR and GDPPHW are 6.78 (p-value = 0.3938) and 64.08 (p-value = 0.1103), respectively. On the other hand, a 1 percent increase in CPI, RGDPgr and LPRODVTY induces statistically significant percentage increases in PG of 7.61, 9.96 and 46.46, respectively, with t-statistic p-values of 0.0169, 0.0482 and 0.0058, respectively. The results also provide statistical evidence that a 1 percent increase in the moderating influence of TRADE induces a significant decrease (43.62 percent; t-statistic p-value = 0.0189) in the poverty gap, while a 1 percent increase in the moderating influence of FTR induces a significant increase in PG (36.51 percent; t-statistic p-value = 0.0000). The implication is that, in the G7 countries, the moderating influences of the fertility rate and trade possibly transmit through the dynamics of poor-family-friendly fiscal policy thrusts, labour productivity and some macroeconomic variables. The negative coefficient of the period dummy (0.0684 in absolute value), with a t-statistic p-value of 0.0043, provides evidence of a significant difference between the percentage decrease and increase in the poverty gap (PG) resulting from period-induced shocks. The F-statistic value of 2.8388, with a p-value of 0.0000, shows that the joint percentage decrease in the poverty gap (PG) during the 40-year period is statistically significant relative to the mechanisms of poor-family-centred fiscal policy, labour productivity, macroeconomic performance and the moderating factors. This means that the explanatory variables significantly reduce the poverty gap in the G7 countries. This supports the finding by Philpott (2013), the view expressed by the United Nations (2016) and the estimates by the International Labour Organisation (ILO, 2016). The value of the adjusted coefficient of determination (adjusted R-squared = 0.2479) shows that the poor-focused public spending mechanisms, labour productivity metrics and macroeconomic factors account for about 25 percent of the total variations in the poverty gap. Therefore, the unexplained proportion of the total variations may be attributable to factors outside the model, such as the consumption patterns and other lifestyles of the poor. The Durbin-Watson statistic (DW = 1.9172) shows that Model 2 is free from the problem of serial correlation.
By comparison, the coefficients of PSFTs and TOPI, which are negative in Model 2 but positive in Model 1, show that poor-family-oriented public spending on family transfers and tax on personal income effectively reduce the poverty gap index but increase the income inequality index in the countries. SUMMARY, CONCLUSION AND POLICY IMPLICATIONS FOR THE G7 COUNTRIES This paper employed empirical econometric methodologies to examine the effectiveness of public spending on family transfers, labour productivity, some key macroeconomic performance indices and two moderating variables in reducing income inequality and the poverty gap in the Group of Seven (G7) countries. Graphical presentations and fixed effects panel least squares (PLS) estimation techniques were used for the analysis. Estimates of the model coefficients, with the relevant statistics, provided the basis for the evaluation of the effectiveness of the independent variables in bridging the income and poverty gaps in the countries. The results show that the percentage changes in the income inequality and poverty gap indices differ for the same percentage change in public spending on family transfers, labour productivity and the macroeconomic performance indices. Some variable-specific percentage changes induced statistically significant percentage changes in income inequality and the poverty gap, while others did not. The aggregate percentage change in the explanatory variables induced a significant change in income inequality and the poverty gap in the countries. The results also showed that the explanatory powers of the models are moderately low and varied in explaining the total variations in the incidences of income inequality and the poverty gap in the G7 countries during the period. The paper concludes that, individually, increases in public spending on family transfers reduce the poverty gap but increase income inequality in the G7 countries. Similarly, tax on personal income reduces the poverty gap but increases the incidence of income inequality. Labour productivity reduces the incidence of income inequality but increases the poverty gap in the countries. Changes in the consumer price index (inflation) and the real gross domestic product growth rate move in the same direction as the incidences of income inequality and the poverty gap. But changes in gross domestic product per capita move in the opposite direction to income inequality and the poverty gap. The poverty gap and gross domestic product per hour worked change in the same direction, while income inequality changes in the opposite direction. The findings, which are subsumed in the conclusion, present certain policy implications for the G7 countries. First, poor-family-centred public spending mechanisms should be sustained with a view to continually narrowing the poverty gap in the countries. By implication, therefore, increases in public spending on family benefits in kind and in cash, as well as unemployment allowance payments, should be built into poor-family-focused benefit expenditures. To mitigate the increases in income inequality induced by poor-family-oriented public spending and tax on personal income, buffers should be built into the tax structure to alleviate the inequality-escalating effect. Further, a progressive tax regime should be implemented, and a substantial proportion of the resultant tax revenue should be channelled towards increasing public spending that benefits poor families. Alternatively, labour-productivity-enhancing investment, such as investment in functional education, training and research, should be considered.
This would empower the poor and increase their income-earning capacity and thus improve their financial status. It would also be reflected in increased real gross domestic product, both per capita and per hour worked, and would ultimately reduce the poverty gap in the countries. Hence, incorporating these measures into the fiscal and other macroeconomic policy frameworks has inherent potential for broader effectiveness in reducing the incidences of income inequality and the poverty gap in the G7 countries. The data analysed in this paper are for the G7 countries. Therefore, the paper suggests that further studies should consider similar and/or related analyses for a larger sample of the OECD countries and for other geographical regions in the developed and developing countries, as well as the emerging market economies. Authors' Contributions Each of the co-authors participated actively in sourcing and extracting the data sets used for the analysis in this paper. Each author also participated equally in the review of literature done in section two. The third and fifth co-authors wrote the introduction (section one) and the conceptual issues (in section two). All authors made equal contributions to the theoretical considerations (in section two) of the paper. The authors articulated the methodology in section three and implemented the econometric methodological analysis, the discussion of the results, and the conclusion and policy implications in sections four and five. Statement of Public Interest Many studies have provided empirical evidence of the income inequality- and poverty-reducing effects of fiscal, economic and social policy thrusts in countries across the world, particularly in the developed nations. This is confirmed for the G7 countries analysed in this paper. Therefore, the authors are of the view that appropriately redesigning and fine-tuning the mechanisms of public spending on households, and engendering commensurate labour productivity, are matters of urgency within and among the countries for enhanced reductions in the widening incidences of income inequality and the poverty gap.
8,297.8
2020-11-24T00:00:00.000
[ "Economics" ]
ANALYSIS OF MULTI-PIN MODULAR DAUGHTERBOARD-TO-BACKPLANE CONNECTORS AT HIGH BIT RATE SIGNALS A theoretical model for the electrical characterization of multi-pin modular daughterboard-to-backplane connectors at high bit rate signals is developed. The fundamental field equations are transformed into a linear system of equations for the currents and voltages at the edges of the pins of the connector. I. INTRODUCTION The growing tendency towards applications involving the propagation of high-bit-rate signals has turned the characteristics of short line elements, such as connectors, into an important factor in the design of high-speed logic devices. In fact, three important constraints on the use of connectors in high-speed signal applications must be taken into account. The first constraint refers to the reflections due to impedance mismatch. These reflections can cause problems in digital circuitry when the delays due to the interconnection distance are significant compared to signal switching times. The connector's impedance is determined by its geometry and configuration (distribution of grounded pins). The impedance mismatch is important not only because it causes undesirable reflections; it can also affect the characteristics of the transmitted waveform. The degree of this influence depends on the signal's rise and fall times and the discontinuity level. The second constraint is crosstalk between the signal contacts. In open systems such as multi-pin modular daughterboard-to-backplane connectors, where the pins are not shielded from one another, crosstalk is a strong interference source. The absence of electromagnetic isolation between the pins results in capacitive and inductive coupling between them. Two types of crosstalk can be defined, far-end and near-end, depending on the direction in which they propagate. Near-end crosstalk propagates in the opposite direction with respect to the transmitted signal, while far-end crosstalk propagates in the same direction as the transmitted signal. In most connectors, near-end crosstalk is stronger than far-end crosstalk. The factors that mainly determine the crosstalk level are the signal's switching times, as well as the connector's geometry (existence of ground planes) and its configuration (grounding percentage). Finally, the third constraint is the self and mutual inductances of the pins used for signal transmission (hot pins) and of the grounded pins, as they affect the "ground inductance." The geometry of the connector is the most important factor influencing the equivalent inductance of a ground connection. In this paper, the behavior of multi-pin modular daughterboard-to-backplane connectors at high-bit-rate signals is analyzed by means of a theoretical model of the connector's assembly developed using electromagnetic theory principles. Efficient formulae for the calculation of the self and mutual inductances and capacitances between the pins of the connector are presented, and the influence of the existence or absence of ground planes, the connector's configuration, and the signal's rise time on the crosstalk level is evaluated. Furthermore, a comparison of three types of existing connectors, Standard EURO (no ground plane), Teradyne HD+1 (one ground plane) and Teradyne HD+2 (two ground planes), is made for bit rates in the range of 100 Mbits/sec up to 500 Mbits/sec. The transverse electromagnetic mode assumption is used throughout the following analysis, as the outside of the connector is considered to be metallised.
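The first constraint discussed above can be quantified with the standard voltage reflection coefficient at an impedance discontinuity. The connector impedances in the sketch below are hypothetical illustrative values; only the 75 Ω system impedance is taken from the numerical results of this paper.

```python
# Voltage reflection coefficient at the interface between a line of
# impedance z_line and a connector of characteristic impedance z_connector.
def reflection_coefficient(z_connector, z_line):
    return (z_connector - z_line) / (z_connector + z_line)

z_line = 75.0   # source/load impedance used in the numerical results, ohms
for z_conn in (50.0, 65.0, 90.0):   # hypothetical connector impedances
    g = reflection_coefficient(z_conn, z_line)
    print(f"Zc = {z_conn:5.1f} ohm -> gamma = {g:+.3f} "
          f"({abs(g) * 100:.1f}% of the incident voltage reflected)")
```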
II. ANALYTICAL MODEL

A generic multi-pin connector model consisting of N pins is presented in Fig. 1. The two well-known equations derived from coupled transmission line theory [2,3],

V_m − V′_m = R_c I_m + jω Σ_{n=1}^{N} L_mn I_n, m = 1, 2, ..., N, (1)

I_m − I′_m = jω Σ_{n=1}^{N} C_mn V_n, m = 1, 2, ..., N, (2)

constitute the basis of our theoretical approach, where ω is the RF frequency, V_m the input voltage of the m-th pin, V′_m the output voltage of the m-th pin, I_m the input current of the m-th pin, I′_m the output current of the m-th pin, L_mn the mutual inductance between the m-th and the n-th pins, C_mn the mutual capacitance between the m-th and the n-th pins, and R_c the contact resistance between the two extremes of each pin. Notice that when m = n, L_mn and C_mn represent the self inductance and capacitance of the m-th pin, respectively. The analytical expressions for the L_mn and C_mn parameters are derived by employing well-known electrostatic techniques [3,4] presented in Appendix A. Applying the initial conditions for each pin (see Fig. 1), the following equations are obtained:

V_m = E_m − I_m R_m, m = 1, 2, ..., N, (3)

V′_m = I′_m Z_Lm, (4)

where E_m represents the source voltage of the m-th pin, R_m the source internal resistance of the m-th pin, and Z_Lm the load resistance of the m-th pin. Evidently, the linear system of eqs (1), (2), (3) and (4) completely describes the electrical performance of the multi-pin connector in the frequency domain. Furthermore, by calculating V_m and V′_m we can derive all the information we need about the near-end and far-end crosstalk, respectively. The time domain performance can be described by the system of eqs (1)-(2) if the jω operator is replaced by ∂/∂t.

III. NUMERICAL RESULTS

The time domain performance of three types of existing connectors, Standard EURO (no ground plane), Teradyne HD+1 (one ground plane) and Teradyne HD+2 (two ground planes), is presented. The influence of the existence or absence of ground planes, mainly with respect to the crosstalk level, is evaluated. Furthermore, the impact of the connector's configuration (percentage of grounded pins) and the signal's switching times on these types of connectors is analyzed. The following standard grounding configurations are examined. A trapezoidal pulse of width Tw, rise and fall time Tr = Tw/3, and amplitude 1 V is considered as the source voltage. The source internal resistance and load resistance are taken to be 75 Ω. The time domain performance is derived by applying the inverse Fourier transform to the system of eqs (1)-(4). In Figs. 2 and 3, typical voltage pulses in hot pins (output voltage) and passive pins (near-end crosstalk) are presented, respectively. The effects of the impedance mismatch are apparent in these typical pulses. The degradation of the pulse due to the reflections varies strongly with the type and configuration of the connector and the signal's rise time. In Fig. 4, the variation of the mean value of near-end and far-end crosstalk for one active pin with respect to the grounding configuration is presented for the above-mentioned types of connectors. The distribution of the crosstalk level among the pins depends mainly on the distance between passive and hot pins. Finally, in Fig. 5, the influence of the signal's rise time on the crosstalk level (near-end and far-end) is presented for the HD+2 connector with the 100% configuration.
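The frequency-domain system (1)-(4) can be assembled and solved directly once the L and C matrices are known. The sketch below follows that structure for a small example; the inductance and capacitance matrices are illustrative placeholders, not the Appendix A formulae, and the contact resistance value is assumed.

```python
import numpy as np

def solve_connector(omega, L, C, Rc, Rs, ZL, E):
    """Solve eqs (1)-(4) for the stacked unknown vector x = [V, V', I, I']."""
    N = L.shape[0]
    I_N, Z = np.eye(N), np.zeros((N, N))
    jw = 1j * omega
    A = np.block([
        [I_N, -I_N, -(Rc * I_N + jw * L), Z],   # (1) series voltage drop
        [-jw * C, Z, I_N, -I_N],                # (2) shunt displacement current
        [I_N, Z, np.diag(Rs), Z],               # (3) V = E - I*Rs
        [Z, I_N, Z, -np.diag(ZL)],              # (4) V' = I'*ZL
    ])
    b = np.concatenate([np.zeros(2 * N), E, np.zeros(N)])
    x = np.linalg.solve(A, b)
    return x[:N], x[N:2 * N]   # near-end (V) and far-end (V') voltages

N = 3
# Placeholder Maxwell-style matrices: positive self terms, coupling off-diagonals.
L = 1e-9 * (5 * np.eye(N) + 1 * (np.ones((N, N)) - np.eye(N)))    # henries
C = 1e-12 * (2 * np.eye(N) - 0.4 * (np.ones((N, N)) - np.eye(N))) # farads
E = np.array([1.0, 0.0, 0.0])            # only pin 1 is driven ("hot")
V, Vp = solve_connector(2 * np.pi * 500e6, L, C, 0.01,
                        np.full(N, 75.0), np.full(N, 75.0), E)
print("near-end voltage magnitudes:", np.round(np.abs(V), 4))
print("far-end voltage magnitudes :", np.round(np.abs(Vp), 4))
```

Sweeping omega over the spectrum of the trapezoidal source pulse and applying the inverse Fourier transform, as described above, then yields the time-domain waveforms of Figs. 2 and 3.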
The theoretical results were verified against experimental data obtained with a test setup. The test setup consists of a daughterboard and a backpanel connected to each other through the connector under test. The theoretical results were found to be quite accurate compared to the experimental data. In Fig. 6, a comparison of the theoretical and the experimental mean values of near-end and far-end crosstalk with respect to the grounding configuration is presented for the EURO connector.

IV. CONCLUSIONS

A generic analytical model for the electrical characterization of multi-pin modular daughterboard-to-backplane connectors at high bit-rate signals is presented. The crosstalk between the signal pins emerges as the major constraint for signal transmission at bit rates in the range of 100 Mbits/sec up to 500 Mbits/sec. The existence of at least one ground plane, as well as a 50% grounding configuration, may be required in order to reduce the crosstalk level to acceptable limits for signal transmission at bit rates in the range of 100 Mbits/sec up to 500 Mbits/sec.

FIGURE 1. Generic multi-pin connector model.
FIGURE 2. Typical output voltage pulse of a hot (active) pin.
FIGURE 3. Typical voltage pulse of a passive pin (near-end crosstalk).
FIGURE 5. Variation of the mean value of near-end and far-end crosstalk for one active pin for the Teradyne HD+2 connector (100% configuration) with the signal's rise time.
FIGURE 6. Comparison of the theoretical and the experimental mean value of near-end and far-end crosstalk for one active pin for the EURO connector with respect to the grounding configuration.
TABLE. Variation of the mean value of near-end and far-end crosstalk for one active pin for the Standard EURO, Teradyne HD+1 and Teradyne HD+2 connectors with respect to the grounding configuration.
1,884.4
1992-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Calculation of Thermodynamic Quantities of 1D Ising Model with Mixed Spin-(s, (2t − 1)/2) by Means of Transfer Matrix: In this paper, we consider the one-dimensional Ising model (shortly, 1D-MSIM) having mixed spin-(s, (2t − 1)/2) with nearest-neighbor interactions and an external magnetic field. We establish the partition function of the model using the transfer matrix. We compute certain thermodynamic quantities for the 1D-MSIM. We find some precise formulas to determine the model's free energy, entropy, magnetization, and susceptibility. By examining the iterative equations associated with the model, we use the cavity approach to investigate the phase transition problem. We numerically determine the model's periodicity. Introduction Lenz introduced the one-dimensional (1D) classical spin-1/2 Ising model [1]. Compared to the spin-1/2 model, the spin-1 Ising model, also known as the Blume-Capel model, is more appropriate; hence, it was employed in the investigation of phase transitions in three-state systems [2,3]. Then, the spin-3/2 Ising model was used to extend the spin-1 Ising model's conclusions [4,5]. The magnetic characteristics of mixed-spin systems have recently attracted both theoretical and experimental attention [6-8]. These systems are especially well suited for studying the magnetic properties of a particular class of magnetic materials, in addition to some technical applications [9]. Therefore, mixed-spin Ising models have been extensively studied in the literature lately [10]. In Ref. [11], we classified the disordered phases corresponding to the Ising model with spin-1 and spin-1/2 on the semi-infinite Cayley tree (CT). Then, on the second-order CT, we examined the phase transition problem of the Ising model with spin-2 and spin-1/2. Furthermore, we discussed the chaotic behavior of the corresponding dynamical system [12]. The main methods for examining the properties of Ising models with mixed spin on the square lattice [10], the Bethe lattice [13], and the CT [11,12] are Monte Carlo simulations [10], the cavity method [14-16], the Kolmogorov consistency method [11,12], the iteration method, and the transfer matrix method. In [17], the magnetothermal parameters that characterize magnetocaloric effect (MCE) behaviors, such as the entropy, entropy change, and adiabatic cooling rate, are precisely calculated using the transfer matrix approach. In the literature, there are numerous methods for determining the free energy of a given model [13,18]. The free energy of lattice models on the Bethe lattice is calculated while boundary conditions are taken into consideration [13,18,19]. In-depth research has been performed on the thermodynamic properties of the 1D Ising model with single spin-s using the transfer matrix [20-22]. The author is not aware of any research on the thermodynamic properties of 1D Ising models with mixed spin. Due to its widespread applications, the transfer matrix approach is one of the most commonly used methods in statistical mechanics. In [21], the transfer matrix approach is used to conduct an analytical study of the 1D Ising model with spin-s. To grasp physical issues as well as statistical mechanics, it is essential to compute thermodynamic quantities like the free energy, entropy, magnetization, and magnetic susceptibility [22].
In this paper, to determine the thermodynamic quantities related to the 1D-MSIM with mixed spin-(s, (2t − 1)/2), we consider the transfer matrix approach. We enlarge a specific bond and establish the transfer matrix that corresponds to the bond from site j to site j + 1. By considering the trace of several transfer matrices multiplied together, the partition function is reconstructed. Inspired by the results given for the 1D Ising model with single spin-s [21], the partition function and the free energy of the 1D-MSIM with mixed spin-(s, (2t − 1)/2), having nearest-neighbor interactions and an external field, are calculated. To calculate the model's free energy, entropy, magnetization, and susceptibility, some exact formulas are established via the corresponding transfer matrix. By applying the cavity approach to the corresponding iterative equations of the model, the phase transition issue is studied. The periodicity of the model is also estimated numerically. Preliminaries Here, we review definitions and key findings used in the construction of the partition function for the 1D-MSIM with mixed spin-(s, (2t − 1)/2) on the one-dimensional lattice Z. We denote the set of integers by Z and the set of strictly positive integers by N+. Two vertices x and y, x, y ∈ N+, are called nearest neighbors (NN) if there exists an edge ℓ ∈ L connecting them, which is denoted by ℓ = ⟨x, y⟩. For x, y ∈ N+, the distance d(x, y) is the length (the number of edges) of the shortest path connecting x with y. For any x ∈ N+, the set of direct successors of the vertex x is defined by S(x) = {y ∈ N+ : d(x, y) = 1}. Denote the set of even natural numbers by E = {2n : n ∈ N} and the set of odd natural numbers by O = {2n − 1 : n ∈ N+}. We consider the mixed spin-state spaces denoted by Φ = {±s : s ∈ Z+} ∪ {0} and Ψ = {±(2t − 1)/2 : t ∈ Z+}. For A ⊂ N+, a spin configuration ξ_A on A is defined as a function taking each site of A into the corresponding spin space. Let Ω_E = Φ^E and Ω_O = Ψ^O be the spaces of infinite configurations, and let Ω_{+,N} = Φ^{E∩N_{2N+1}} and Ω_{−,N} = Ψ^{O∩N_{2N+1}} be the spaces of finite configurations. Ξ = Ω_E × Ω_O represents the configuration space. An element of Ω_E is denoted by σ(x), for x ∈ E. Similarly, an element of Ω_O is denoted by s(x), for x ∈ O. In this paper, when mixed spins are placed on the lattice N+, the spins in Φ and Ψ are placed on the odd-numbered vertices and the even-numbered vertices, respectively (see Figure 1). Construction of the Partition Function Associated with the Model Let us construct the partition function corresponding to the 1D-MSIM with mixed spin-(s, (2t − 1)/2) on the lattice N+. Let ξ ∈ Ξ, so that ξ(x) takes the value σ(x) or s(x) according to the sublattice to which x belongs, where σ(x) ∈ Φ and s(x) ∈ Ψ. Fix a finite volume Λ ⊂ N+. Let E_Λ = {ℓ = ⟨x, y⟩ : d(x, y) = 1, x, y ∈ N+} be the set of nearest-neighbor edges with at least one endpoint in Λ ⊂ N+. We examine in detail the 1D-MSIM with mixed spin-(s, (2t − 1)/2) built on the lattice N+, having the Hamiltonian

H(ξ) = −J Σ_{⟨x,y⟩ ∈ E_Λ} ξ(x)ξ(y) − Σ_{x ∈ Λ} h_{ξ(x)} ξ(x), (1)

where the energy of each link tying up nearby sites is represented by the first sum, and the energy of each site is represented by the second sum. The partial partition functions associated with the Hamiltonian (1) can be found as follows:

Z^{(2N+1)}(β, h_{ξ(x)}) = Σ_{ξ ∈ Ξ_{2N+1}} e^{−βH(ξ)}, (2)

where ξ ∈ σ × s = Ξ, β = 1/(k_B T), and J is the coupling constant. Given the finite set N_{2N+1}, by considering the boundary spins, that is, the spins on the (2N + 1)-th level, we can construct the summation in Equation (2). Consequently, we state that the density free energy function is

F(β, h) = −lim_{N→∞} (1/(β(2N + 1))) ln Z^{(2N+1)}(β, h_{ξ(x)}), (3)

where Z^{(2N+1)}(β, h_{ξ(x)}) is the total partition function.
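To make definitions (1)-(3) concrete, the sketch below (not the paper's code) enumerates all configurations of a short open chain for the simplest case, mixed spin-(1, 1/2), with a uniform field h; the parameter values are illustrative.

```python
import itertools
import numpy as np

# Brute-force evaluation of (1)-(3) on a finite chain: sigma in
# Phi = {-1, 0, 1} on odd sites, s in Psi = {-1/2, 1/2} on even sites.
PHI = (-1.0, 0.0, 1.0)
PSI = (-0.5, 0.5)

def hamiltonian(config, J, h):
    """H = -J * sum of nearest-neighbour products - h * sum of spins."""
    bond = sum(a * b for a, b in zip(config, config[1:]))
    return -J * bond - h * sum(config)

def partition_function(n_sites, beta, J, h):
    # Alternate the two spin spaces along the chain, starting with Phi.
    spaces = [PHI if k % 2 == 0 else PSI for k in range(n_sites)]
    return sum(np.exp(-beta * hamiltonian(cfg, J, h))
               for cfg in itertools.product(*spaces))

beta, J, h, n = 1.0, 0.2, 0.0, 9          # n = 2N + 1 sites with N = 4
Z = partition_function(n, beta, J, h)
F = -np.log(Z) / (beta * n)               # finite-size density free energy
print(f"Z = {Z:.4f},  free energy per site = {F:.4f}")
```

Exhaustive enumeration grows as 3^(N+1) 2^N and is only feasible for very short chains, which is precisely why the transfer matrix construction of the next section is needed.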
As a result of the derivatives of the free energy function with respect to certain parameters, many authors have studied other thermodynamic properties [21,22]. If the largest eigenvalue of the transfer matrix corresponding to the given model is λ_max, the thermodynamic quantities of this system, namely the bulk free energy, the entropy, and the internal energy, are obtained directly from λ_max.

Construction of the Partition Function for the Model via Transfer Matrices The Kramers-Wannier transfer technique may be used to construct the partition function. This method requires building a transfer matrix and determining its eigenvalues.

The Partition Function and the Boltzmann Weight It is well known that the total partition function Z^{(2N+1)}(β, h_{ξ(x)}) = Tr(e^{−βH_{2N+1}}) depends on the particular realization {J_i, h_i} (see [21,22] for details). Also, it is well known that the total partition function is equal to the sum of the Boltzmann weights over all possible states. Here, we consider blocks or configurations of length (2N + 1), under zero boundary conditions, that is, the spins designed for m > 2N + 1 are regarded as ξ(m) = 0. If we rewrite the Hamiltonian (1) for all configurations on the configuration space Ξ_{2N+1}, we obtain the corresponding Boltzmann weight. From (1), for i = 1, 2, …, N, we denote the energy of the bond between sequential vertices 2i − 1 and 2i by Equation (8); similarly, for i = 1, 2, …, N, we denote the energy of the bond between sequential vertices 2i and 2i + 1 by Equation (9). Note that, as can be seen in Equations (8) and (9), for i > 1, while the oscillating magnetic fields h_{σ_{2i−1}} and h_{s_{2i}} fluctuate depending on where the vertex is located, the coupling constant J, which determines the energy between two successive vertices, remains constant. It is well known that the partition function is equal to the sum of the Boltzmann weight e^{−βH(η)} over all possible states on the lattice N [21,22]. Therefore, for the configuration η ∈ Ξ_{2N+1}, we can write the Boltzmann weight e^{−βH(η)} in factorized form, and the canonical partition function of this system can be written accordingly. The next step is to decompose the bonds using the bond representation and factor the Boltzmann weights into pairwise factors.

The Transfer Matrices Here, we construct the transfer matrices corresponding to the 1D-MSIM with the (s, (2t − 1)/2) mixed spin on the lattice N_+. We obtain two different transfer matrices. First, let us place the spins of the set Φ on the odd-numbered vertices of the lattice N_+, and the spins in the set Ψ on the even-numbered vertices. We then define the entries of the transfer matrix P, and similarly the entries of the transfer matrix Q. From (10)-(13), we obtain the product of the two matrices. One can easily see that the matrix P has Card(Φ) × Card(Ψ) dimensions while the matrix Q has Card(Ψ) × Card(Φ) dimensions. Assume that PQ = M. The total partition function can then be written as the trace of the product of 2N + 1 random transfer matrices.
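The explicit formulas announced at the start of this section (bulk free energy, entropy, internal energy) did not survive extraction. The block below restates the standard transfer-matrix relations through λ_max, in the form used for single-spin chains in [21,22]; the per-site normalization and the k_B = 1 convention are assumptions, and the paper's own numbered equations may differ by constant factors.

```latex
% Standard transfer-matrix relations, written with k_B = 1 and a per-site
% normalization (assumed); everything follows from the largest eigenvalue.
\begin{align*}
  F(\beta,h) &= -\frac{1}{\beta}\,\ln\lambda_{\max},\\[2pt]
  S(\beta,h) &= -\frac{\partial F}{\partial T}
              = \ln\lambda_{\max} - \beta\,\frac{\partial\ln\lambda_{\max}}{\partial\beta},\\[2pt]
  U(\beta,h) &= F + TS = -\frac{\partial\ln\lambda_{\max}}{\partial\beta},\\[2pt]
  m(\beta,h) &= -\frac{\partial F}{\partial h}
              = \frac{1}{\beta}\,\frac{\partial\ln\lambda_{\max}}{\partial h},\qquad
  \chi(\beta,h) = \frac{\partial m}{\partial h}.
\end{align*}
```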
Investigation of Thermodynamic Quantities in the Translation-Invariant Case Assume that h = h_j = h_i for all j ∈ {−1/2, 1/2} and i ∈ {−1, 0, 1}. This case is called the translation-invariant property. Thus, the transfer matrices P and Q given in (15) and (16) take the forms (17) and (18).

Case I When we multiply the transfer matrices P and Q given in Equations (17) and (18), we obtain the square matrix M given in (19). After some tedious and complicated algebraic manipulations, the eigenvalues of the matrix M are obtained as λ_1 = 0, λ_2 = [A(β, h, J) − √(A(β, h, J)² − 4B(β, h, J))]/2, and λ_3 = [A(β, h, J) + √(A(β, h, J)² − 4B(β, h, J))]/2. For the sake of simplicity, substitute the variables θ = e^{βJ/2} and v = e^{βh/2}. By computing the trace of the matrix product M, we can determine the partition function in terms of the eigenvalues λ_i of the transfer matrix M for i = 1, 2, 3. From (11), we obtain the corresponding expression, where λ_{t,j} denote the eigenvalues of the transfer matrix M. As is well known, the model's critical behavior manifests itself in the thermodynamic limit as N → ∞; therefore, the largest eigenvalue λ_max = λ_3 of the transfer matrix M given in (19) stands as the only indicator of the bulk free energy.

Behavior of the Thermodynamic Quantities of 1D-MSIM with Mixed Spin-(s, (2t − 1)/2) in the Absence of a Magnetic Field For h = 0, the transfer matrix M given in (19) takes a simplified form, and its set of eigenvalues follows; it is clear that λ_3 = λ_max. From the formula in (23), we obtain the free energy (25). Figure 2 shows the graph of the free energy F(β, 0) given in (25) as a function of β for J = 0.2. From Figure 2, it can be seen that the free energy approaches zero for T → 0⁺. In Figure 2, the oscillating magnetic fields h_{s_{2i}} and h_{σ_{2i+1}} are assumed to be zero. If h_{s_{2i}} and h_{σ_{2i+1}} are taken to be nonzero, then, by using the parametric representation, the free energy function can be plotted in the form h → (β(h), F(h)). Similarly, from (6), we obtain the internal energy. The changes in entropy of the 1D-MSIM with mixed spin-(1, 1/2) are given in Figure 3. Results show that the internal energy converges to zero at higher temperature values, while becoming constant as T → 0. When the temperature increases, the entropy increases until it reaches ln(√6), and it goes to zero at low temperature values.

Case II Note that, while successively placing spins on the vertices of a one-dimensional lattice, if the elements of Ψ are placed on the first vertex of the lattice N, then a transfer matrix M̃ having 2 × 2 dimensions is obtained, and the nonzero eigenvalues of this transfer matrix are the same as those of the previous matrix M. We obtain the set of eigenvalues of the matrix M̃ given in (29) as in (30). It should be noted here that the eigenvalues of the matrices M and M̃ are the same, except for 0. Therefore, it is obtained that λ_3 = λ̃_2 (see Equations (20) and (30)). We can choose any spin on the starting vertex, while mixed spins are placed on consecutive vertices of the lattice. From (20) and (30), it is clear that λ_max = λ̃_2.

Magnetization and Magnetic Susceptibility In this subsection, we construct the magnetization and magnetic susceptibility by means of the eigenvalue λ̃_2 in Formula (30).
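To make the h = 0 discussion concrete, the short sketch below evaluates the free energy, entropy, and internal energy per site numerically from λ_max for the mixed spin-(1, 1/2) chain, using the same assumed Boltzmann weights as the earlier sketch (so absolute values may differ from the paper's Eqs. (25) and (26) by normalization), and checks the high-temperature entropy limit ln(√6) mentioned above.

```python
import numpy as np

def lam_max(beta, J, h=0.0):
    # Largest eigenvalue of M = P @ Q for the mixed spin-(1, 1/2) chain,
    # using the assumed Boltzmann weights of the earlier sketch.
    Phi, Psi = np.array([-1.0, 0.0, 1.0]), np.array([-0.5, 0.5])
    P = np.array([[np.exp(beta * (J * a * b + h * (a + b) / 2)) for b in Psi] for a in Phi])
    Q = np.array([[np.exp(beta * (J * b * a + h * (b + a) / 2)) for a in Phi] for b in Psi])
    return max(np.linalg.eigvals(P @ Q).real)

def thermodynamics(T, J, dT=1e-4):
    # Per-site free energy divides by 2 because one period of the chain
    # contains two sites (one spin-1 site and one spin-1/2 site).
    f = lambda t: -t * np.log(lam_max(1.0 / t, J)) / 2.0
    F = f(T)
    S = -(f(T + dT) - f(T - dT)) / (2 * dT)   # S = -dF/dT (central difference)
    U = F + T * S
    return F, S, U

J = 0.2
for T in [0.05, 0.5, 5.0, 50.0]:
    F, S, U = thermodynamics(T, J)
    print(f"T={T:6.2f}  F={F:8.4f}  S={S:7.4f}  U={U:8.4f}")

print("high-T entropy limit ln(sqrt(6)) =", 0.5 * np.log(6.0))
```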
Let us consider the reduced nearest-neighbor spin-spin coupling interaction K = βJ/2 = ln(θ) and the reduced magnetic field H = βh_{ξ(x)}/2 = ln(v); we then write the magnetization as in (31) and the susceptibility as in (32). We do not give the exact expressions of the formulas for the magnetization and susceptibility here, because the operations are excessively long and complex. We numerically examine the behavior of these two quantities as functions of h and T. The Mathematica software (Version 8.0, Wolfram Research, Inc., Champaign, IL, USA) [23] has been used to perform the calculations and to plot the figures. A three-dimensional plot of m(T, H) is given in Figure 4. Taking into account the eigenvalue λ̃_2 in Formula (30), for J = −2, one can easily see that, as the temperature T decreases, the smoothness of the function decreases and it exhibits behavior similar to a step function. On the other hand, for J = 2, the surface of the function becomes smooth. In contrast to the single-spin Ising chain's typical single step [21], in Figure 4 (left), the magnetization graph shows three distinct steps at low temperatures. This phenomenon may be explained by spins adopting distinct states within the even and odd sublattices, or by all spins assuming the same state. A three-dimensional plot of χ(T, H) is given in Figure 5.

Nonexistence of a Phase Transition in the Absence of the External Magnetic Field In this section, we study the 1D-MSIM with the mixed spin-(1, 1/2) employing the exact recursion relations (ERRs). Introducing the variables x and y as ratios of the partial partition functions, from Equations (43) and (44) we obtain Equations (37) and (38), where θ = e^{βJ/2} as before. If the values of x and y in Equations (37) and (38) are substituted into Equation (36), one obtains the rational recursive equation f : R_+ → R_+ given in (39). From (39), we obtain a second-order equation (40). From Equation (40), it is clear that the function f given in (39) has only one positive fixed point, so there is no phase transition for the given model. The graphs of the function f for antiferromagnetic (θ = 0.421) and ferromagnetic (θ = 3.421) values are plotted in Figure 6. The diagrams show that the function in both cases has just two fixed points. Additionally, there is only one positive fixed point. As a result, there is just one Gibbs measure and no phase transition in the model. As is well known, the classical single-spin 1D Ising model's phase transition issue has attracted the interest of statistical mechanics researchers for over a century, and it has been established that there is no phase transition [1]. In the present paper, we demonstrated that the mixed-spin 1D Ising model likewise has no phase transition.

Chaoticity of the Model A dynamical system's chaotic behavior is determined by how sensitively it depends on the initial conditions [24,25]. It has long been a challenge to see whether a model's phase transition and the chaotic behavior of the corresponding dynamical system are related [26]. In this subsection, we investigate the chaoticity of the three-dimensional rational dynamical system (3D-RDS) given in (36)-(38). With the help of the Lyapunov exponent, we numerically study the model's chaotic behavior. From the definition of the Lyapunov exponent, we obtain the expression in (41), which we evaluate numerically. Figure 7 shows the Lyapunov exponent of the dynamical system corresponding to the 1D-MSIM with mixed spin-(1, 1/2). It is seen that the Lyapunov exponent is always negative in the ferromagnetic region. Therefore, the orbit of the rational function f(w) given in (39) is periodic. The behavior of the Lyapunov exponent λ(θ) in the antiferromagnetic region can also be examined.
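Since the explicit rational map f of Eq. (39) is not reproduced above, the sketch below uses a placeholder Möbius-type map with a single positive fixed point to illustrate how the fixed-point and Lyapunov-exponent computations described in this section can be carried out numerically; the functional form `f_map`, its θ-dependence, and all parameter values are illustrative assumptions, not the paper's Eq. (39).

```python
import numpy as np

# Placeholder rational map standing in for the paper's Eq. (39); the true f
# depends on theta in a specific way that is not reproduced above, so this
# functional form is an assumption used only to illustrate the method.
def f_map(w, theta):
    return (theta * w + 1.0) / (w + theta)

def fixed_point(theta, w0=1.0, n_iter=2000):
    # Iterate to the (unique, attracting) positive fixed point.
    w = w0
    for _ in range(n_iter):
        w = f_map(w, theta)
    return w

def lyapunov(theta, w0=1.0, n_transient=500, n_iter=20000, eps=1e-8):
    # lambda(theta) = lim (1/n) * sum_k ln |f'(w_k)|, with f' taken numerically.
    w = w0
    for _ in range(n_transient):
        w = f_map(w, theta)
    total = 0.0
    for _ in range(n_iter):
        deriv = (f_map(w + eps, theta) - f_map(w - eps, theta)) / (2 * eps)
        total += np.log(abs(deriv) + 1e-300)
        w = f_map(w, theta)
    return total / n_iter

for theta in (0.421, 3.421):   # antiferromagnetic- and ferromagnetic-like values
    print(theta, fixed_point(theta), lyapunov(theta))
```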
The Average Magnetization for the Mixed Spin-(1, 1/2) Ising Model In this section, by using the exact recursion relations (ERRs) (see [9]), we obtain the partial partition functions of the 1D-MSIM with mixed spin-(s, (2t − 1)/2). Contrary to the previous sections, here we place the spins of Ψ on the first vertex of the lattice and the spins of Φ on the second vertex of the lattice, while placing spins at the vertices of the lattice Z_+, so for n = 0, 1, 2, …, we obtain s_{2n−1} ∈ Ψ and σ_{2n} ∈ Φ. With the help of the cavity method (see [14-16] for details), we obtain the partial partition functions.

The Average Magnetization In this subsection, assuming there is an external magnetic field, we obtain the magnetization equations for the spins s and (2t − 1)/2, respectively. In future work, by considering the approach given by Albayrak [9], the isothermal entropy change of the average magnetization for the 1D-MSIM will be analyzed, along with other issues in statistical mechanics. The findings obtained in our present article exhibit different behaviors from the results given in previous studies [21,22]. Frankly, we cannot comment on the physical meaning of such behavior. These topics are covered in introductory courses in statistical mechanics at the undergraduate level. Therefore, we believe that the results of our present study will be of interest to a wide readership of statistical mechanics.

Funding: This research received no external funding.

Figure 2. The graph of the free energy F(β, 0) given in (25) as a function of β for J = 0.2. In the absence of an external magnetic field, from (5), the entropy of the model is obtained as in (26).
Figure 3. (Left) The graph of the entropy S(β, 0) given in (26) for J = 0.2 as a function of the temperature T in the absence of a magnetic field. (Right) The graph of the entropy S(β, 0) given in (26) for J = 0.2 as a function of β in the absence of a magnetic field.
Figure 4. (Left) The graph of the magnetization m(T, H) given in (31) for J = −2 as a function of h and T. (Right) The graph of the magnetization m(T, H) given in (31) for J = 2 as a function of h and T.
Figure 5 (Left) and (Right) show the behavior of the susceptibility function given in Equation (32). For J = −2, three stacks resembling a boot are observed, and a single stack appears for J = 2 in the chosen region T ∈ [0.001, 7], h ∈ [−6, 6]. As seen in Figure 5 (Left), while three different susceptibility peaks appear for J = −2 at low temperatures, the susceptibility peaks disappear as the temperature increases. For J = 2, only a single susceptibility peak is observed at low temperatures (see Figure 5 (Right)).
Figure 5. (Left) The graph of the susceptibility χ(T, H) given in (32) for J = −2 as a function of h and T. (Right) The graph of the susceptibility χ(T, H) given in (32) for J = 2 as a function of h and T.
Figure 6. (Left) The graph of the function f given in (39) for θ = 0.421. (Right) The graph of the function f given in (39) for θ = 3.421.
4,821.6
2023-09-14T00:00:00.000
[ "Physics" ]
Information Swimmer: A Novel Mechanism of Self-propulsion We study an information-based mechanism of self-propulsion in a noisy environment. An information swimmer maintains directional motion by periodically measuring its velocity and accordingly adjusting its friction coefficient. Assuming that the measurement and adjustment are reversible and hence cause no energy dissipation, an information swimmer may move without external energy input. There is, however, no violation of the second law of thermodynamics, because the information entropy stored in the memory of the swimmer increases monotonically. By optimizing its control parameters, the swimmer can achieve a steady velocity that is comparable to the root-mean-square velocity of an analogous Brownian particle. We also define a swimming efficiency in terms of the information entropy production rate, and find that in equilibrium media with white noises, information swimmers are generally less efficient than Brownian particles driven by constant forces. For colored noises with long correlation times, the frequency of measurement can be greatly reduced without affecting the efficiency of information swimmers. While self-propelling of bacteria is typically achieved via actuation of cellular appendages such as flagella, synthetic self-propellers often move via surface effects [13,15] or phoretic effects [3], i.e., interaction with gradients of physical quantities. Another interesting self-propelling mechanism is the Brownian motor [11,12], which relies on a delicate interplay between noise and a periodic potential. Here we explore a novel mechanism of self-propulsion that uses information instead of energy. We imagine a swimmer that periodically measures its velocity relative to its environment, and adjusts its friction coefficient accordingly. As a consequence, it is able to maintain a steady motion along the chosen direction, with an average velocity comparable with the root-mean-square velocity of Brownian motion. We shall call such a system an information swimmer, in echo of the information engine, which uses information to extract energy from a single heat bath. Perpetual motion with no energy dissipation may widely be perceived as violating the second law of thermodynamics. The essence of the second law is, however, not about energy, but about entropy. In the presence of information-acquiring devices, entropy increase is not necessarily accompanied by energy dissipation. The relation between entropy and information is an intellectually profound question with a long and interesting history [16]. Through the works of Maxwell [17], Szilard [18], Landauer [19], Penrose [20], and Bennett [21,22], and many more recent studies [16,23,24], it has become clear that in the presence of an information-acquiring agent, the total entropy can be written as S_tot = S(Sys|Info) + H(Info), where H(Info) is the information entropy, and S(Sys|Info) is the thermodynamic entropy conditioned on the information acquired. Thermodynamic entropy and information entropy can be transformed into each other, much like energy and mass. The total entropy is, however, dictated by the second law to be non-decreasing. There has recently been a large body of research on the design and application of "information heat engines", which use information to extract mechanical work, or to push particles to higher free-energy states [25-33]. The information swimmer is an information engine which serves the distinct purpose of maintaining directional transport in a noisy environment.
Of special interests is a design called "information ratchet" [34,35], where one measures the position of an object, and adjust the confining potential accordingly, which leads to directional transport with no apparent energy dissipation. Unlike information ratchet, the mechanism we study in this work does not require a periodic confinement potential. Information swimmers may be realized using colloidal particle, polymeric materials, or biological molecules. An information swimmer consists of at least three components: a sensor (measuring velocity), a memory (storing information), and a switch (controlling friction coefficient). Accordingly, the working cycle of the swimmer consists of three basis operations: measurement, information storage, and tuning of friction coefficient. While in reality these operations are always dissipational, there is no lower bound of dissipation imposed by any fundamental law of physics. The observation that measurement can in principle be made reversible and hence causes no energy dissipation was first made by Bennett [21,22,36], and played an essential role in the proper resolution of paradox raised by Maxwell's demon. For a review, see reference [16]. Likewise, the operation of information storage (which can be understood as a special form of computation) can also be made reversible, and hence causes no energy dissipation. (Also pointed out by Bennett is that information erasure is always irreversible and hence cause energy dissipation. We shall further assume that the memory space is sufficiently large so that there is no need of information erasure.) The friction coefficient may be tuned by changing the swimmer's volume, shape, or surface structure, which can be realized using structure phase transition of polymeric materials. A fancier way is to deform the particle using microscopic molecular motors [37][38][39][40][41][42][43] or nanorobots [44][45][46]. Again there is no lower bound of energy cost/dissipation in these processes, and hence we will assume them to be reversible. Assuming that all these operations are reversible, an information swimmer can maintain directional motion without energy dissipation. This however does not mean violation of the second law, since the information entropy stored in the swimmer's memory does increase steadily. Swimming on information may have advantages over existing mechanisms of transport in microscopic noisy environment. It does not need externally imposed potential, which is required by information ratchets and Brownian motors, or proximity to interface, which is required by phoretic swimmers. It may have higher biocompatibility since it causes much less (in principle zero) energy dissipation. It is interesting to note that life uses information for control of transport, long before human understand information. For example, in chemotaxis [47], bacteria tune their motion using information (together with energy) on gradient of external chemical stimulus (either attractant or repellent). In swarming, birds and insects adjust their fly according to their distances to neighbors [48,49]. In marine navigation, sailors control the directions of rudders and sails [50] to make boat turn (tacking) and move (zigzagging) along arbitrary direction relative to the wind. Sailing is in fact an almost ideal realization of self-propulsion using information only, because the energy cost of turning sails and rudders is negligible comparing with that needed to drive a boat. 
Further studies may reveal many other information-feedback mechanisms for motion in biological and technological fields.

Model and Simulation Methods To reduce the complexity of details, we consider the one-dimensional case. The physics of higher dimensional cases is essentially the same. The swimmer has a baseline friction coefficient γ. After every time interval τ_m, the swimmer measures its velocity and compares it with a threshold velocity v_0. The friction coefficient is then set to γ if the measured velocity exceeds v_0 and to α²γ otherwise. The results of measurement are recorded in its internal memory space. The dynamics of the swimmer can be modeled using piecewise linear Langevin dynamics. Assuming that the noise acting on the swimmer is Gaussian and white, during the time interval nτ_m < t < (n + 1)τ_m, the velocity of the swimmer obeys a Langevin equation of the form m dv/dt = −γ_n v + √(2γ_n T) ζ(t), with γ_n the friction coefficient selected at the n-th measurement, where ζ(t) is the normalized Gaussian white noise with statistical properties ⟨ζ(t)⟩ = 0 and ⟨ζ(t)ζ(t′)⟩ = δ(t − t′). The coefficients of the noise terms in Eq. (2) are chosen such that the Einstein relation, i.e., the second Fluctuation-Dissipation relation, is satisfied separately for v(t) < v_0 and v(t) > v_0. This relation is a reflection of the equilibrium nature of the ambient fluid, and remains valid independent of the swimmer velocity [59]. If we set α = 1, Eq. (2) describes a normal Brownian particle, whose velocity distribution converges to a Maxwell distribution with average kinetic energy T/2, as required by equilibrium statistical mechanics. Further defining two control parameters ṽ_0 ≡ v_0/v_T and τ̃_m ≡ τ_m/τ, Eqs. (2) and (3) become dimensionless. We use a numerical scheme to discretize the equation of motion. At long times, the swimmer reaches a steady moving state. By a simple dimensional argument, we expect that the average velocity scales with the thermal velocity v_T = √(T/m), if we choose τ̃_m smaller than unity and α² larger than unity. For a swimmer with a micron radius moving in a fluid with viscosity comparable to water, we estimate that the time scale τ ∼ 8 µs and the steady velocity v ∼ 100 µm/s.

Velocity distribution We consider the case α² = 10, which means that the friction coefficient becomes ten times larger if the velocity is unfavorable. First we set the threshold velocity ṽ_0 = 0, and plot the velocity distributions for two different periods of measurement, τ̃_m = 0.01 and 1. As shown in Fig. 1(a), for τ̃_m = 0.01 (one measurement every τ/100 seconds), the velocity distribution has an abrupt change of slope near v = 0, and a high peak to the right. The probability density is severely suppressed for v < 0. The overall shape is drastically different from the equilibrium Gaussian distribution. The average velocity of the swimmer is approximately 0.7 v_T, as one can see from Fig. 1(b). For τ̃_m = 1 (one measurement every τ seconds), the velocity distribution has a much more regular shape, even though the difference with the equilibrium distribution is clearly noticeable. The average velocity is approximately 0.25 v_T, as one can see from Fig. 1(b). Next, we fix τ̃_m = 0.01 and vary the threshold velocity. As one can see in Fig. 1(a), there is always an abrupt change of slope in the vicinity of ṽ_0. For ṽ_0 = 1, the velocity distribution p(ṽ) exhibits a two-peak structure, with a low and wide peak to the left of ṽ_0 and a high and narrow peak to the right, and the average velocity of the swimmer is approximately 0.8 v_T. These results unequivocally demonstrate the feasibility of information swimming as a viable mechanism of self-propulsion.
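A minimal simulation sketch of this feedback rule is given below, assuming an Euler-Maruyama discretization and reduced units m = γ = T = 1 (so τ = m/γ = 1 and v_T = 1); the function name `simulate` and all numerical parameters are illustrative choices rather than the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(alpha2=10.0, v0=0.0, tau_m=0.01, dt=1e-3, n_steps=2_000_000):
    """Piecewise-linear Langevin dynamics with measurement feedback.

    Reduced units: m = gamma = T = 1, so tau = m/gamma = 1 and v_T = 1.
    Every tau_m the velocity is compared with v0; the friction is set to
    gamma (favourable) or alpha2*gamma (unfavourable), with the noise
    amplitude adjusted so the Einstein relation holds on each branch.
    """
    v, g = 0.0, 1.0                      # current velocity and friction
    steps_per_meas = max(1, int(round(tau_m / dt)))
    vs = np.empty(n_steps)
    for i in range(n_steps):
        if i % steps_per_meas == 0:      # measurement and feedback
            g = 1.0 if v > v0 else alpha2
        # Euler-Maruyama step of  dv = -g*v*dt + sqrt(2*g*T)*dW   (m = T = 1)
        v += -g * v * dt + np.sqrt(2.0 * g * dt) * rng.standard_normal()
        vs[i] = v
    return vs

vs = simulate()
# The text reports <v> of roughly 0.7 v_T for alpha^2 = 10, v0 = 0, tau_m = 0.01 tau.
print("mean velocity <v>/v_T =", vs.mean())
```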
In Fig. 1(b), we show how the average velocity ⟨ṽ⟩ varies as a function of the threshold velocity ṽ_0 for τ̃_m = 1, 0.1, 0.01, and α² = 10, 2, respectively. In all cases we see that ⟨ṽ⟩ vanishes as ṽ_0 → ±∞. This is of course totally expected, since in these limits the measurement almost always returns the same result, and the friction coefficient remains invariant. The peaks of the curves in Fig. 1(b) correspond to the maximal average velocity achieved by tuning ṽ_0. The location of the peak moves to the right as α increases, or as τ̃_m decreases. However, the optimal threshold velocity is never far away from zero. The height of the peak increases as α increases or τ̃_m decreases. Note that the maximal velocity is generally less than, but of the same order as, the thermal velocity.

Entropic Efficiency of Swimming Similar to Maxwell's demon, an information swimmer records its measurement results, and as a consequence, the information entropy of its memory increases steadily during the motion. Hence, even though the entropy of the ambient fluid remains constant, the total entropy increases, in accordance with the second law. Because information can be stored and transferred at arbitrarily low temperature, an increase of information entropy does not need to be accompanied by energy dissipation [19]. We can quantify the rate of information entropy increase in the swimmer's memory. Let s_n be the result of the measurement at time nτ_m, which takes 0 if v > v_0 or 1 if v < v_0. The sequence s_1, s_2, …, s_n, … forms a discrete Markov chain. The results of consecutive measurements are, however, generally correlated, and hence the sequence can be compressed before storage. According to information theory [51,52], the minimal information bit needed to store each measurement result (averaged over a long sequence of measurements) is the entropy rate of the Markov chain, I = −Σ_{i,j} π_i P_{ij} log₂ P_{ij}, where P_{ij} is the transition matrix of the chain and π its stationary distribution. The entropy production rate is then Σ = I/τ_m, where τ_m is the period of measurement. Consider now a force-driven Brownian particle, whose dynamics is Eq. (2) with α² = 1, augmented by an external force F: m dv/dt = F − γv + √(2γT) ζ(t). The work done by the external force is constantly dissipated into the ambient fluid in the form of heat. The average velocity is ⟨v⟩ = F/γ, and the entropy production rate is Σ = F⟨v⟩/T = ⟨v⟩²γ/T. The same quantity can be computed for the information swimmer. The ratio between Eqs. (8) and (7) defines η_IS, which characterizes the entropic efficiency of an information swimmer relative to a force-driven Brownian particle with the same mass and baseline friction coefficient, at the same temperature. We compute this ratio for different values of the control parameters. As shown in SI Sec. II, the optimal choice of threshold velocity ṽ_0 is always very close to zero. Thus we fix v_0 = 0 to simplify the computation. In Fig. 1(c), we plot η_IS as a function of τ̃_m for α² = 2, 5, 10, 100, respectively. It is seen there that the optimal τ_m is always a fraction of τ, and it decreases monotonically as α increases. The maximal efficiency monotonically increases with α, and remains substantially lower than unity.

Information swimming on colored noise There are many systems where fluctuations exhibit long time-correlations. For example, the time-correlations of velocity in fluids are characterized by long tails that decay algebraically [53]. Active fluids [54,55] and turbulent fluids [56] exhibit long-range correlations both in time and in space. These correlations can be used to reduce the frequency of measurements for information swimmers.
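The sketch below estimates these quantities from a simulated measurement record, assuming a first-order Markov model for the binary sequence; the orientation of the efficiency ratio (which rate goes into the numerator of the paper's Eq. (9)) is not reproduced above, so the ratio computed at the end is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_rate_nats(s):
    """Entropy rate (nats/symbol) of a binary sequence modelled as a
    first-order Markov chain estimated from the data; divide by ln 2 for bits."""
    s = np.asarray(s, dtype=int)
    counts = np.zeros((2, 2))
    for a, b in zip(s[:-1], s[1:]):
        counts[a, b] += 1
    P = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    pi = counts.sum(axis=1) / counts.sum()      # empirical state occupation
    logP = np.where(P > 0, np.log(P), 0.0)
    return float(-(pi[:, None] * P * logP).sum())

# Re-simulate a short trajectory (same reduced units and feedback rule as above).
alpha2, v0, tau_m, dt, n_steps = 10.0, 0.0, 0.01, 1e-3, 1_000_000
spm = int(round(tau_m / dt))                    # simulation steps per measurement
v, g = 0.0, 1.0
vs = np.empty(n_steps)
for i in range(n_steps):
    if i % spm == 0:
        g = 1.0 if v > v0 else alpha2
    v += -g * v * dt + np.sqrt(2.0 * g * dt) * rng.standard_normal()
    vs[i] = v

record = (vs[::spm] < v0).astype(int)           # one binary symbol per measurement
I_rate = entropy_rate_nats(record)              # information gained per measurement
sigma_info = I_rate / tau_m                     # information entropy production rate
sigma_force = vs.mean() ** 2                    # <v>^2 * gamma / T with gamma = T = 1
eta_IS = sigma_force / sigma_info               # assumed orientation of the ratio
print(I_rate, sigma_info, eta_IS)
```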
Here we study the information swimmer using colored noises that are in thermal equilibrium. As illustrated in Fig. 2(a), we consider a system consisting of a box (with center-of-mass coordinate x_1 and negligible mass) and a particle (with coordinate x_2 and mass m) that are connected by a spring with constant k. The particle is confined inside the box and therefore does not couple to noise or friction directly. The dynamics of the two-body system is described by a set of coupled linear Langevin equations (10), where ζ(t) is a Gaussian white noise satisfying Eq. (3). Integrating out x_1, we find that the dynamics of x_2 satisfies the generalized Langevin equation (11) with an effective Ornstein-Uhlenbeck noise, where τ_c = γ/k is the noise correlation time, which can be tuned continuously by tuning the spring constant k. In the limit k → ∞, τ_c → 0, and Eq. (11) reduces to the white-noise model. Details of the derivation are given in SI Sec. III. Equation (11c) is the second Fluctuation-Dissipation Theorem, which relates the variance of the colored noise to the friction kernel [57]. It is a consequence of the time-reversal symmetry of the original model, Eq. (10). Another possible realization of the dynamics (10) is illustrated in Fig. 2(b), where two particles connected by a spring are moving near the interface of two fluids. The first particle moves in a fluid with high viscosity in the over-damped regime so that its mass can be ignored, and the second particle moves in a fluid with low viscosity so that both friction and noise can be ignored. We can now introduce a measure-feedback mechanism into the model Eqs. (10), so that it becomes an information swimmer. The system measures the velocity v_2 every τ_m seconds, and tunes the friction coefficient to γ if v_2 > v_0 and to α²γ if v_2 < v_0. The coupled Langevin equations (10) are simulated using the same method as above. The dimensionless variables are defined the same as in Eq. (4). In Fig. 3, we show respectively the average velocity ⟨ṽ⟩ and the entropic efficiency η_IS as functions of the period of measurement τ̃_m for various values of k̃. It is seen that as long as k̃ is finite, both quantities exhibit oscillation as a function of τ̃_m. These oscillations can be used to design information swimmers that optimize velocity or efficiency. Furthermore, it appears that both quantities converge to zero as τ̃_m → 0, indicating that the measurement-feedback mechanism becomes ineffective as the frequency of measurements becomes high, whose reason we do not yet understand. Finally, as the correlation time of the noise becomes longer and longer (with decreasing k̃), the peaks of the curves move systematically towards the right on the τ̃_m axis. We note that while the maximal velocity decreases steadily as τ_c increases, the change in the maximal efficiency is very insignificant. The general conclusion is therefore that we can use the correlation of noises to reduce the frequency of measurement without changing the swimming efficiency. This may be very useful for the design of information swimmers in non-equilibrium environments, such as turbulent fluids [56] or active fluids [54,55]. X.X. acknowledges support from NSFC via grant #11674217, as well as additional support from a Shanghai Talent Program. This research is also supported by Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01).
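A compact sketch of this two-body realization with feedback follows; the Euler discretization, the parameter values, and the helper name `simulate_colored` are assumptions for illustration, while the feedback rule (friction γ if v_2 > v_0, α²γ otherwise, applied to the box) follows the description above.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_colored(k=1.0, alpha2=10.0, v0=0.0, tau_m=0.5,
                     dt=1e-3, n_steps=2_000_000):
    """Two-body realization of an information swimmer driven by colored noise.

    Box (coordinate x1, massless, friction g, white noise) and particle
    (coordinate x2, mass 1, no direct noise/friction) coupled by a spring k:
        g * dx1/dt = -k*(x1 - x2) + sqrt(2*g*T) * zeta(t)
        m * dv2/dt = -k*(x2 - x1)
    Reduced units m = T = 1; g is the friction switched by the feedback.
    """
    x1, x2, v2 = 0.0, 0.0, 0.0
    g = 1.0
    steps_per_meas = max(1, int(round(tau_m / dt)))
    vsum = 0.0
    for i in range(n_steps):
        if i % steps_per_meas == 0:
            g = 1.0 if v2 > v0 else alpha2     # measurement and feedback on v2
        noise = np.sqrt(2.0 * g * dt) * rng.standard_normal()
        x1 += (-k * (x1 - x2) * dt + noise) / g   # overdamped box
        v2 += -k * (x2 - x1) * dt                 # inertial, noise-free particle
        x2 += v2 * dt
        vsum += v2
    return vsum / n_steps

print("average particle velocity:", simulate_colored())
```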
3,927
2020-08-13T00:00:00.000
[ "Physics" ]
The mitochondrial associated endoplasmic reticulum membranes: A platform for the pathogenesis of inflammation‐mediated metabolic diseases Abstract Mitochondria‐associated endoplasmic reticulum membranes (MAM) are specialized subcellular compartments that are shaped by endoplasmic reticulum (ER) subdomains placed side by side to the outer membrane of mitochondria (OMM) being connected by tethering proteins in mammalian cells. Studies showed that MAM has multiple physiological functions. These include regulation of lipid synthesis and transport, Ca2+ transport and signaling, mitochondrial dynamics, apoptosis, autophagy, and formation and activation of an inflammasome. However, alterations of MAM integrity lead to deleterious effects due to an increased generation of mitochondrial reactive oxygen species (ROS) via increased Ca2+ transfer from the ER to mitochondria. This, in turn, causes mitochondrial damage and release of mitochondrial components into the cytosol as damage‐associated molecular patterns which rapidly activate MAM‐resident Nod‐like receptor protein‐3 (NLRP3) inflammasome components. This complex induces the release of pro‐inflammatory cytokines that initiate low‐grade chronic inflammation that subsequently causes the development of metabolic diseases. But, the mechanisms of how MAM is involved in the pathogenesis of these diseases are not exhaustively reviewed. Therefore, this review was aimed to highlight the contribution of MAM to a variety of cellular functions and consider its significance pertaining to the pathogenesis of inflammation‐mediated metabolic diseases. | INTRODUCTION Subcellular organelles have been viewed as separate entities with defined compositions and organizations that equip them with specialized functions. 1 However, studies proved that there are interorganellar membrane contact sites in organelles with close tethered proximity. 2,3 Among such sites, the interaction between the outer membrane of mitochondria (OMM) and that of endoplasmic reticulum (ER) was one of the best characterized. 3,4 Bernhard et al. came up with the first reported evidence for the existence of sites of physical interaction between ER and OMM from electron microscopic studies of rat liver cells in 1952. 5 A similar experimental approach by Bernhard and Rouiller in 1956 reassured this finding. 6 Another study of pseudobranch gland cells of Atlantic killifish in 1959 also reported the existence of this site. 7 However, the unique membrane that corresponded to this site was isolated as fraction X from a crude rat liver mitochondrial preparation in Vance laboratory in 1990 8 and later named as mitochondrialassociated endoplasmic reticulum membranes (MAM) in the paper published in 1994. 9 Structurally, MAM is composed of ER subdomains placed side by side with OMM but are biochemically distinct from either pure ER or mitochondria membrane. 10,11 Electron tomography images revealed that ER and mitochondria are linked by tethers formed from specific protein-protein interactions ( Figure 1). 10 Studies showed that MAM has multiple functions including regulation of lipid synthesis and transport, 8,9 cellular apoptosis, 11 initiation of autophagy, 12 Ca 2+ transport and signaling, 13 mitochondrial dynamics, 14 and insulin signaling. 15 Most importantly, it serves as a platform for inflammasome formation and activation which play a significant role in initiating inflammatory responses as Thoudam et al. 16 elegantly explained in their review published in 2018. 
| Structure of MAM When observed under a wide-field three-dimensional deconvolution microscope, approximately 5%-20% of the total surface of the mitochondrial network is estimated to be close to the ER membrane. 21 Moreover, electron micrograph images revealed that the overlapping apposition distances between the ER and OMM vary approximately between 10 and 25 nm. 10 This variation emanates from the fact that the OMM is attached differently to smooth ER and rough ER (Figure 1). The distance of the rough ER from the OMM is greater than that of the smooth ER. This is because ribosomes are attached to the rough ER and act as spacers, limiting the minimum distance between them to about 20 nm. 22 The distance can also be varied by the influence of intracellular Ca 2+ signaling, as studies of live cell imaging revealed. 23-25 ER-located Mfn-2 interacts in trans with mitochondrial mitofusins to form a tethering complex to bridge the ER and mitochondria and allow efficient Ca 2+ transfer between them. 27 Silencing of Mfn-2 in embryonic fibroblasts has been shown to increase the distance between them. Furthermore, the absence of Mfn-2 consistently causes a loosening of their connection. 18

FIGURE 1 Structure of mitochondrial-associated endoplasmic reticulum membranes. IMM, inner membrane of mitochondrion; MAM, mitochondria-associated endoplasmic reticulum membrane; OMM, outer membrane of mitochondria; RER, rough endoplasmic reticulum; SER, smooth endoplasmic reticulum. The picture is created at https://biorender.com/.

The VDAC1 of the OMM interacts with the ER-Ca 2+ release channel IP 3 R3 via the molecular chaperone Grp75 and forms the VDAC1-Grp75-IP 3 R3 complex, serving as a conduit of Ca 2+ transfer from the ER to mitochondria. It may not have a tethering role, but rather a contact-site spacing/filling function. Sigma1R (Sig-1R), another MAM-resident protein, stabilizes MAM by interacting with VDAC1 and IP 3 R3. 27,28 VAPB interacts with the OMM protein tyrosine phosphatase-interacting protein-51 (PTPIP51) and forms the VAPB-PTPIP51 tethering complex. 28,29 Overexpression of either protein increases ER-mitochondria tethering and Ca 2+ exchange between them, while their knockout decreases it. 28,30 Bap31 interacts with Fis1 and forms the Bap31-Fis1 MAM complex. 31-33 Simmen et al. demonstrated that another protein called phosphofurin acidic cluster sorting protein 2 (PACS-2) modulates the role of Bap31 in tethering the two organelles. Indeed, depletion of PACS-2 was reported to cause Bap31-dependent mitochondrial fragmentation and uncoupling from the ER, along with inhibition of Ca 2+ signal transmission. 34 Mammalian target of rapamycin complex 2 (mTORC2) also regulates the integrity of MAM by Akt-dependent phosphorylation of PACS-2. 35 | Functions of MAM The existence of contact sites between mitochondria and ER suggests that the structures that are localized to these two different organelles can come together and synergize to provide additional functions at these specialized domains called MAM. 18,27,36,37 2.3.1 | MAM and Ca 2+ signaling Ca 2+ is released from the ER and transferred to mitochondria using MAM as a conduit. 21 Moderate loading of mitochondria with Ca 2+ stimulates ATP production via Ca 2+ -dependent activation of key metabolic enzymes such as pyruvate dehydrogenase (PDH), isocitrate dehydrogenase, and α-ketoglutarate dehydrogenase.
27 However, prolonged overflow of Ca 2+ into mitochondria activates apoptosis whereas its reduction cellular causes energy crisis by decreasing oxidative phosphorylation. 22,27 The mechanism of Ca 2+ transfer from ER to mitochondria is mediated by four major proteins which include IP 3 R, VDAC1, Grp75, and mitochondrial Ca 2+ uniporter (MCU) reside in MAM, OMM, cytosol, and inner mitochondrial membrane (IMM), respectively. 38 The VDAC1 of the OMM interacts with IP 3 R via Grp75 (ref. 27 ) and increases the efficiency of mitochondrial Ca 2+ uptake. 39 Although OMM is permeable to Ca 2+ through VDAC1, the IMM is not. Thus, Ca 2+ needs to go through MCU, a low-affinity Ca 2+ channel that requires high Ca 2+ levels, to reach the mitochondrial matrix. 27,39,40 Sig-1R and glucose-regulated protein 78 (GRP78) are also involved in this process. Sig-1R physically associate with GRP78 at MAM where they regulate Ca 2+ flux via IP3R3, stabilizing it and prolonging Ca 2+ signaling from the ER to mitochondria. 23 Other proteins also take part in this process. For instance, Akt phosphorylates IP 3 R and suppresses IP 3 R-mediated Ca 2+ release, while tumor suppressors phosphatase and tensin homolog (PTEN) directly dephosphorylates IP 3 R and promyelocytic leukemia protein (PML) indirectly dephosphorylates IPR 3 via sequestration of protein phosphatase 2A (PP2A). 20 | MAM and lipid synthesis and transfer Phospholipid transport and synthesis is the first recognized function of MAM. 9 The ER is the main site of phospholipid biosynthesis and plays a significant role in intracellular vesicular trafficking. Because mitochondria are not connected to this trafficking, they require direct lipid transfer from the ER 18 or they might utilize MAM as a conduit. 24 On top of this, MAM are also enriched in major enzymes that are involved in the biosynthesis of the two most abundant phospholipids namely phosphatidylcholine and phosphatidylethanolamine. These enzymes include phosphatidylserine synthase-1 or -2 and phosphatidylethanolamine N-methyltransferase 2. 9,41,42 Studies reported that MAM is also the site of triacylglycerol synthesis and steroidogenesis. 43 Long-chain-fatty-acid-CoA ligase 4 that mediates the ligation of fatty acids to coenzyme A also enriched at MAM. 42 An enzyme catalyzing the formation of cholesterol esters and diacylglycerol, Acylcoenzyme A: cholesterol acyltransferase-1, is also found in MAM. 42,44 A MAM resident steroidogenic acute regulatory protein interacts with VDAC2, another MAM protein, and facilitates its translocation to the MAM before it is targeted to mitochondria for its role in steroidgenesis. 45 | MAM and insulin signaling MAM is also involved in the insulin signaling pathway. 10 However, several proteins involved in the insulin signaling pathway are enriched in MAM. For example, Akt which phosphorylates IP 3 R and reduces Ca 2+ release and prevents apoptosis, 10,35,46 mTORC2 which maintains MAM integrity, 35,47 PTEN which sensitizes cells to apoptosis by dephosphorylating IP 3 R and restoring Ca 2+ release 48 are localized in MAM. PML which modulates its sensitivity to apoptosis by sequestering PP2A and blocking Akt phosphorylation and Ca 2+ release by IP3R is also found in this site. 49 Likewise, mitochondrial Ca 2+ uptake was found crucial for effective insulin signaling in skeletal muscle cells 50 and cardiac myocytes. 
51 | MAM and mitochondrial dynamics Under normal conditions, mitochondrion changes morphology to create a fragmented or tubular network and to move along the cytoskeleton with coordinated mitochondrial fission and fusion processes. 15 Mitochondrial fusion assists cells to recover from stressful conditions whereas fission promotes mitophagy to remove mitochondria that are damaged or unable to regain their function to undergo apoptosis. 17 Mfn-1, Mfn-2, and optic atrophy 1 (Opa1) are the most studied of several proteins known to involve in mitochondrial fusion. Mfn-1 exclusively localizes to mitochondria, whereas Mfn-2 resides in MAM and mitochondria. While Mfn-1 and Mfn2 are responsible for the fusion of the OMM, Opa1 is responsible for the fusion of the IMM. 15 Likewise, the fission process also involves several proteins of which dynamin-related protein 1 (Drp1) is well studied and recruited from the cytosol to the OMM by various adaptor proteins including mitochondrial fission protein 1 (Fis-1) which are present on the OMM. DRP1 is translocated to the MAM site, where it can cleave mitochondria efficiently and target damaged mitochondria for mitophagy. 15,20 2.3.5 | MAM and autophagy "Autophagy is a mechanism for the degradation of cellular material either as a way to provide nutrients during times of starvation or as a quality control system that eliminates unneeded proteins and/or organelles during normal growth and development" 13 These wastes are isolated by double-membrane vesicles called autophagosomes which fuse with lysosomes to form autolysosomes and eventually degraded by lysosomal enzymes. 52 The formation and development of autophagosomes involve autophagy-related genes, which encode proteins that regulate autophagy as discussed in various articles. [53][54][55][56] Hamasaki et al. 54 reviewed that the origin and formation of autophagosomes remain obscure for scientists though independent studies have pointed to several different organelles as potential membrane sources. However, a recent study showed that autophagosomes form at MAM. 13 Gomez-Suaga et al. reported that VAPB-PTPIP51 tethers are also in regulating autophagy. However, overexpression of VAPB or PTPIP51 tightens ER-mitochondria contacts and impairs autophagosome formation. However, small interfering RNA (siRNA)-mediated loss of VAPB or PTPIP51 loosens contacts and stimulates autophagosome formation. 57 | MAM and cellular apoptosis The transfer of Ca 2+ from ER to mitochondria is accomplished by MAM and excessive mitochondrial Ca 2+ uptake can trigger Ca 2+ -mediated apoptosis. 58 Higher matrix Ca 2+ levels sensitize mitochondria to undergo mitochondrial outer membrane permeabilization (MOMP), a process preceding apoptosis. 59 Increased uptake of Ca 2+ by mitochondria may result in changes in the permeability of the IMM. This is caused by the prolonged opening of the mitochondrial permeability transition pore which induces mitochondrial swelling and OMM rupture. This is followed by the release of apoptosisinducing caspase-activating factors such as cytosolic cytochrome C. 58 The released cytochrome C amplifies caspase activation by binding to the IP 3 R and exacerbating its Ca 2+ leaking properties. 60 Bap31-Fis1 complex also play role in apoptosis by recruiting caspase-8 which enables the cleavage of Bap31 into its pro-death fragment, p20Bap31. This fragment favors the emptying of ER-Ca 2+ stores and induces cell death. 
33 Moreover, PTEN has been known to interact with IP3R/Akt complex and reduce their phosphorylation. This, in turn, results in increased Ca 2+ release and apoptosis. 48 2.3.7 | MAM and ER stress "The ER plays an indispensable role in protein folding. This role is facilitated by the presence of chaperone proteins capable of binding to newly synthesized, but as yet unfolded, proteins to facilitate optimal protein folding and prevent protein-protein aggregation under normal physiological conditions." 61 However, in pathological conditions, the accumulation of misfolded or unfolded proteins may occur and cause cellular dyshomeostasis. This triggers ER to elicit an adaptive or protective response called unfolded protein response (UPR) which restores cellular homeostasis. 60,61 If the homeostasis is not restored, the UPR switches to promote apoptosis. Nevertheless, in some pathophysiological situations, the homeostatic capacity of the ER and the UPR may not meet cellular demands and may even become a detrimental condition called ER stress in which structural uncoupling of ER from mitochondria may also induce it. [61][62][63] 2.3.8 | MAM in the formation and activation of the NLRP3 inflammasome Cells require the capacity to sense and respond to the danger presented by extrinsic threats. Pattern recognition receptors (PRRs) recognize conserved molecular patterns expressed by invading pathogens (pathogen-associated molecular patterns, PAMPs) or endogenous ligands derived from cellular damage resulting from infection or tissue injury (danger-associated molecular patterns, DAMPs). Activation of PRRs by PAMPs or DAMPs triggers downstream signaling cascades and causes the production of Type I interferon (interferon-α and interferon-β) and pro-inflammatory cytokines resulting in inflammation. DAMP-triggered inflammation was reported to play a crucial role in the pathogenesis of inflammation-mediated metabolic diseases. 61 One of the innate immunity sensors that mediate this inflammatory response are cytosolic multiprotein complexes termed inflammasomes 23 and the formation of this inflammasome involves MAM as a platform. 60,65 The most studied inflammasome was the NOD-like receptor family protein 3 (NLRP3) inflammasome. 65 In an inactive state, NLRP3 localizes to the ER membrane and cytosol. However, in its active state, both NLRP3 and its adaptor apoptosis-associated speck-like protein containing a CARD (ASC) relocate to the MAM fraction where they are strategically assembled and located to sense signals emanating from mitochondria like increased ROS and mitochondrial-derived DAMPs like mitochondrial DNA (mtDNA), ATP, cardiolipin, cytochrome C, and succinate. 19 NLRP3 oligomerizes and exposes its effector domain to interact with ASC. ASC in turn recruits pro-caspase-1 which is cleaved and becomes matured caspase-1. Finally, activated caspase-1 cleaves pro-interleukin-1β (pro-IL-1β) and pro-IL-18 to generate mature IL-1β and IL-18 (Figure 3). 20 Recently, Zang et al. reported that the NLRP3 inflammasome has been shown to be activated by a variety of distinct stimuli, including K+ efflux, mitochondrial dysfunction, lysosomal disruption, and trans-Golgi disassembly. However, the most widely accepted stimuli was K+ efflux-induced NLRP3 inflammasome activation. This mechanism has been thought to involve mitochondria. 
This is supported by the fact that PAMPs such as bacterial lipopolysaccharide (LPS) induced the expression of genes involved in mitochondrial biogenesis and mitophagy, resulting in an increase in mitochondrial mass and mitochondrial membrane potential. To back up their claim, the researchers silenced the mitochondrial transcription factor A (Tfam), and genetic ablation of Tfam abolished the NLRP3 inflammasome activation induced by K + efflux via release of mtDNA as deprivation of cellular mtDNA by ethidium bromide treatment could reverse inflammasome activation induced by K+ efflux. They also revealed that mtDNA release induced by K+ efflux in macrophages activates NLRP3 inflammasome. 66 It has also been shown that ER stress activates the NLRP3 inflammasome in both peripheral and central immune cells. ER stress-induced NLRP3 inflammasome activation occurs via a Ca 2+ -dependent and ROS-independent mechanism in monocytes, which is associated with upregulation of MAMs-resident chaperones, closer ER-mitochondrial contacts, mitochondrial depolarization, and impaired dynamics. MAM thus plays an important role in the innate immune cells' response to ER stress. 67 | Neurodegenerative diseases Neurodegenerative diseases, such as AD, PD, and ALS/ FTD, occur when nerve cells in the brain or peripheral nervous system lose function over time and ultimately die. 68 While they involve distinct protein pathologies, they share similar features that involve MAM disruption including mitochondrial damage, Ca 2+ homeostasis, lipid metabolism, axonal transport, UPR activation, autophagy, and inflammatory responses. 36 Inflammatory response proteins have been most commonly implicated in neurodegenerative diseases. For example, a continuous release of IL-1β negatively modulates the integrity of the brain-blood barrier, which results in the infiltration of immune cells into the central nervous system. 19 The same cytokine amplifies the generation of other pro-inflammatory factors by stimulating the activation of microglia and astrocytes. 69 Moreover, Fogal et al. 70 have demonstrated that overexpression of IL-1β mediates neuronal injury and cell death throughout glutamate excitotoxicity. Misfolded protein aggregates and excessive accumulation of metabolites are also critical determinants for the activation of ER-stress and NLRP3 inflammasome which in turn initiates neurodegeneration including AD and PD. 64 | Alzheimer's disease The pathogenesis of AD and the series of events underlying it are unknown. The most widely accepted hypothesis is called the amyloid cascade, based on the observation that the brain of AD patients contains high levels of extracellular plaques called β-amyloid (Aβ) composed of 40-42 amino acids and neurofibrillary tangles (NFTs) which are composed of hyperphosphorylated forms of the microtubule-associated protein tau in the cerebrum. Aβ is produced by cleavage of the amyloid precursor protein (APP) by presenilin (presenilin-1 and/or presenilin-2), both of which are active components of the γ-secretase complex. 69 Notably, dominant mutations both in the presenilins and in APP are currently the only known causes of the familial form of AD (FAD). 71,72 As summarized in references 17,73 these two isoenzymes of presenilins were found to be enriched in MAM fractions from neuronal and non-neuronal cells. Yu et al. 
74 briefly explained that significant mutations in APP or/ and PSEN1/2 might lead to the excessive generation of Aβ42 and the increased ratio of Aβ42/40 which result in AD in their recent review. AD that is linked with presenilin mutation is also characterized by increased levels of monocyte chemoattractant protein 1 (MCP-1), IL-6, and IL-8 while a Presenilin1 mutation in microglial cells amplified tumor necrosis factor α, IL-1α, IL-1β, and IL-6 gene expression. 72 It was also reported that APP and its catabolites are also found in MAM, where they interact with other MAM-resident proteins and modulate ER functions. 75 The relationship between MAM and NLRP3 inflammasome is already described in Section 2.3.8. Moreover, researchers reported the intimate relationship between amyloid-β and NLRP3 inflammasome as oligomerized Aβ originating from nontoxic Aβ monomers directly interacted with NLRP3, leading to the activation of the NLRP3 inflammasome. 76,77 Heneka et al. demonstrated that the deposition of Aβ drives cerebral neuro-inflammation by activating microglia. Indeed, Aβ activation of the NLRP3 inflammasome in microglia is fundamental for IL-1β maturation. The researchers explained their claim with a piece of evidence from NLRP3−/− or caspase-1−/− mice carrying mutations associated with familial AD. These mice, which were largely protected from loss of spatial memory, demonstrated reduced brain caspase-1 and IL-1β activation, enhanced Aβ clearance, and NLRP3 inflammasome deficiency skewed microglial cells to an M 2 phenotype and resulted in the decreased deposition of Aβ. 78 | Parkinson's disease PD is the most common movement disorder and the second most common neurodegenerative disease after AD. 23,36 It is characterized by an excessive death of dopaminergic neurons in the substantia nigra pars compacta together with intraneuronal inclusions termed "Lewy bodies" which are mainly formed from aggregates of a protein called α-synuclein. Most recently, it has been shown that α-synuclein localizes at the MAM. 36,75 The overexpression of both wild type and mutant α-synuclein isoforms disrupt the VAPB-PTPIP51 tethers, thus decreasing MAM formation. This causes decreases in Ca 2+ exchanges between the two organelles that, in turn, lowers mitochondrial ATP production. 28 Additionally, pathogenic mutations of α-synuclein causes downregulation of MAM functions while activating inflammasome. Indeed, α-synuclein aggregates were found to be sufficient to provoke IL-1β production by activating microglia and astrocytes. The fibrillary and monomeric forms of this protein showed differences in their capacity to induce inflammation. The monomeric form only induces the expression of pro-IL-1β whereas the fibrillary form can provoke caspase-1 activation and maturation of IL-1β and fully activates the inflammasome. 79,80 In fact, similar to AD, stimulating caspase-1 activation and the release of IL-1β is necessary to induce the production of ROS and activity of cathepsin-B. 81 Accordingly, through specific inhibition of cathepsin-B; it is possible to interfere with the inflammasome assembly though this finding was not validated. More notably, Yan et al. showed that dopamineproducing neurons and NLRP3 inflammasome are tightly interconnected and are able to regulate each other. They further showed that the neurotransmitter dopamine has the potential to inhibit NLRP3 inflammasome activation and subsequent IL-1β production. 
This inhibitory activity of dopamine occurs via dopamine D1 receptor signaling through an autophagy-dependent process. 82 3.1.3 | ALS with associated frontotemporal dementia ALS/FTD is a neurodegenerative disease caused by the loss of motor neurons, resulting in the gradual deterioration of muscles. The exact cause of ALS is still not clear. However, a mutation in Sig-1R has been discovered in a juvenile form of ALS. In the Sig-1R knockout mouse, ALS phenotypes such as muscle weakness and motor neuron loss were exhibited. 27 Another MAM protein, VAPB, is also mutated in familial ALS. A mutant VAPB increases its affinity for PTPIP51 and strengthens VAPB-PTPIP51 tethering, which alters Ca 2+ shuttling between ER and mitochondria, as elegantly summarized in a recently published review by Lee and Min. 83 Dominantly inherited forms of the disease can be caused by mutations in the Tar DNA-binding protein 43 (TDP-43) gene. Notably, TDP-43-induced alteration of MAM involves breaking of the VAPB-PTPIP51 tether, causing aberrant cellular Ca 2+ homeostasis and decreased rates of ATP production. 36 Expression, activation, and co-localization of the NLRP3 inflammasome were observed in the spinal cord of male SOD1(G93A) mice carrying a mutant human superoxide dismutase 1 (SOD1). 84 It was also demonstrated that both aggregated and soluble SOD1G93A activate the inflammasome in primary mouse microglia. 85 However, SOD1G93A was unable to induce IL-1β secretion from microglia in which NLRP3 was blocked or deleted, confirming NLRP3 as the key inflammasome complex mediating SOD1-induced microglial IL-1β secretion. 86 Microglial NLRP3 upregulation was also observed in the TDP-43 mutant mouse model. TDP-43 could also activate microglial inflammasomes in an NLRP3-dependent manner. Mechanistically, the generation of ROS and ATP was identified as a key event required for SOD1G93A-mediated NLRP3 activation. 84,87 | Diabetes mellitus Insulin resistance and pancreatic β-cell dysfunction in T2DM are widely associated with derangement of MAM compartments, as there is a strong linkage between MAM integrity and insulin action in hepatic cells. 88 It was also demonstrated in vitro and in vivo that defective MAM is closely associated with impaired hepatic insulin sensitivity, and restoration of MAM integrity by cyclophilin D overexpression improved insulin signaling in primary hepatocytes of diabetic mice. 89 Notably, an in vivo experimental study reported that in the skeletal muscle of obese and diabetic humans, the expression levels of the ER-mitochondria tethering protein Mfn-2 are reduced. Indeed, the livers of transgenic mice deleted for Mfn-2 possessed a low insulin response and a reduction in mitochondrial respiration, resulting in increased production of ROS, which causes subsequent accumulation of mutations in mtDNA. 86 An increase in the level of ROS was found to be a primary contributor to inflammation in T2DM. In fact, pro-inflammatory cytokines also exacerbate ER and oxidative stress events, leading to β-cell loss, recruitment of the NLRP3 inflammasome, and finally the pathogenesis of T2DM. Moreover, increased ROS also stimulates conformational changes in thioredoxin-interacting protein (TXNIP) and subsequent loss of the thioredoxin (TRX)-TXNIP complex; TXNIP then binds and activates NLRP3, resulting in the generation of IL-1β. 23 | Cardiovascular diseases 3.3.1 | Mitochondrial dynamics and cardiovascular disease MAM and mitochondrial dynamics are also recognized as key factors in the pathogenesis of CVD.
This was evidenced by a study that demonstrated that precise Ca 2+ transport from the ER to the mitochondria regulates the cardiac contraction cycle. 90 Moreover, mitochondrial Ca 2+ fluctuations and Ca 2+ oscillations triggered by the ER are present during cardiomyocyte beating. 91 Among the proteins involved in the maintenance of MAM, Mfn1/2 seems to be the most relevant one in the pathogenesis of CVD. It was confirmed that adult hearts deleted for both mitofusins showed compromised cardiac function, augmented left ventricular end-diastolic volume, and reduced fractional shortening. This is supported by the fact that transgenic Mfn-2−/− mice exhibited reduced contact length between these organelles, resulting in a reduction of ER-mitochondrial Ca 2+ transfer and increased production of ROS that activates the NLRP3 inflammasome. 85 Finally, it has been reported that specific proteins conserving the ER-mitochondria interface are involved in ischemia/reperfusion (I/R). For example, OPA1 deficiency was associated with increased sensitivity to I/R, whereas the inhibition of Fis1 and DRP1 function was reported to be cardioprotective. 92 3.3.2 | MAM and cardiovascular diseases Missiroli et al. briefly summarized in their review that excess ROS production and subsequent NLRP3 activation are frequently found in CVD. When cholesterol is deposited in excess in the arterial wall, it forms crystals that induce inflammatory injury. This can be supported by the finding that macrophages can internalize these crystals and promote NLRP3 inflammasome activation in a process involving leakage of cathepsin B and L into the cytoplasm. This, in turn, causes the excessive formation of mitochondrial ROS and a lowering of potassium concentrations. 19 The important role of inflammasomes was confirmed in atherosclerosis using ApoE−/− mice, in which deletion of the IL-1β gene reduced the size of atherosclerotic lesions by up to 30%. 93 Moreover, the deletion of the IL-18 receptor (IL-18R−/−) decreased the size of the lesions. 94 Despite this, NLRP3 may not be the only source of proinflammatory cytokines in atherosclerosis. Transgenic ApoE−/− mice crossed with mice deleted for different components of the NLRP3 inflammasome (Nlrp3−/−, ASC−/−, or caspase-1−/−) exhibited no differences in atherosclerotic lesions and plaques between the double knockout and control mice. 95 NLRP3 inflammasome recruitment and the appropriate MAM composition also have an important role during I/R. Notably, IL-1β and IL-18 are primary mediators of I/R-induced human myocardial injury, as inhibition of caspase-1 activity reduces the depression in contractile force after I/R. 96 Similarly, in ASC−/− mice the level of inflammatory cytokines was reduced, and this resulted in a significant reduction of injuries such as infarction, myocardial fibrosis, and dysfunction in myocardial I/R injury compared to wild-type controls. 97 Additionally, Shengnan and Ming-Hui demonstrated that FUNDC1 also participates in MAM formation in cardiomyocytes by binding to IP3R2. This is evidenced by the finding that FUNDC1 deletion causes an 80% reduction in ER-mitochondria contact sites, leading to decreased Ca 2+ transfer from the ER to mitochondria and elevated ROS generation, which induces chronic inflammation. 1 
3.4 | The role of MAM in the onset and progression of cancer 3.4.1 | Alteration of MAM composition in breast cancer In breast cancers, the expression of the stress-activated Sig1R was found to be higher in cancer cells with metastatic potential than in normal tissue. Under basal conditions, Sig1R binds the MAM chaperone GRP78; however, upon activation of IP 3 R3, Sig1R dissociates from GRP78 and binds the receptor, thereby stabilizing it at the MAM and enhancing IP 3 R3-mediated Ca 2+ fluxes to the mitochondria. However, during conditions of chronic ER stress involving prolonged ER Ca 2+ depletion, Sig1R translocates from MAM to the peripheral ER and attenuates cellular damage, thereby preventing cell death. Sig1R also regulates Ca 2+ homeostasis by forming a functional molecular platform with the calcium-activated K + channels, thus driving Ca 2+ influx and favoring the migration of cancer cells. This implies protumorigenic functions of this protein, as stated in a recent review by Morciano et al. 11 3.4.2 | Alteration of MAM in hepatocellular cancer Alteration of Mfn or OPA1 function leads to decreased mitochondrial fusion, shifting the balance of mitochondrial dynamics toward over-fragmentation. This phenomenon was observed in experimental settings aimed at investigating cancer biology. 57 For instance, a study demonstrated that MFN1 loss-of-function triggered the epithelial-to-mesenchymal transition of hepatocellular carcinoma, favoring its metastasis. 98 Another study demonstrated that knockdown of Mfn-1 and OPA1 inhibited mitochondrial fusion in experimental settings, leading to reduced cell growth and tumor formation. This implicates an antitumor effect of silencing OPA1 and Mfn-1, acting through the induction of proapoptotic mechanisms and the inhibition of oxidative metabolism and ATP production. 99 4 | CONCLUSION MAM, a tiny membrane contact site, serves a far more important physiological function than most people realize. Based on the physiological functions of the multiple MAM-resident proteins, there are still many unanswered questions about these contact sites. Apparently, Ca 2+ homeostasis, mitochondrial dynamics, inflammasome formation and activation, cellular autophagy, and apoptosis are all affected when this membrane contact site is disrupted. The cumulative effect of its disruption is strongly associated with inflammation-mediated metabolic diseases, and it has a dramatic impact on health. MAM, on the other hand, plays an important role in the innate immune cell response to ER stress and serves as a site of NLRP3 inflammasome activation under stress conditions, implying that MAM could serve as a novel potential therapeutic target for inflammation-related metabolic diseases. However, the nonspecific alteration of MAM makes it difficult to use as a target to treat some of these diseases. AUTHOR CONTRIBUTIONS Sisay Teka Degechisa wrote the manuscript draft. Yosef Tsegaye Dabi and Solomon Tebeje Gizaw contributed to the gathering of data, draft reviewing, and editing of the manuscript. All authors revised the manuscript and approved the final version of the manuscript before submission.
6,972
2022-06-06T00:00:00.000
[ "Medicine", "Biology", "Environmental Science" ]
Development of Miconazole-Loaded Microemulsions for Enhanced Topical Delivery and Non-Destructive Analysis by Near-Infrared Spectroscopy The antifungal drug miconazole nitrate has a low solubility in water, leading to reduced therapeutic efficacy. To address this limitation, miconazole-loaded microemulsions were developed and assessed for topical skin delivery, prepared through spontaneous emulsification with oleic acid and water. The surfactant phase included a mixture of polyoxyethylene sorbitan monooleate (PSM) and various cosurfactants (ethanol, 2-(2-ethoxyethoxy) ethanol, or 2-propanol). The optimal miconazole-loaded microemulsion containing PSM and ethanol at a ratio of 1:1 showed a mean cumulative drug permeation of 87.6 ± 5.8 μg/cm2 across pig skin. The formulation exhibited higher cumulative permeation, permeation flux, and drug deposition than conventional cream and significantly increased the in vitro inhibition of Candida albicans compared with cream (p < 0.05). Over the course of a 3-month study conducted at a temperature of 30 ± 2 °C, the microemulsion exhibited favorable physicochemical stability. This outcome signifies its potential suitability as a carrier for effectively administering miconazole through topical administration. Additionally, a non-destructive technique employing near-infrared spectroscopy coupled with a partial least-squares regression (PLSR) model was developed to quantitatively analyze microemulsions containing miconazole nitrate. This approach eliminates the need for sample preparation. The optimal PLSR model was derived by utilizing orthogonal signal correction pretreated data with one latent factor. This model exhibited a remarkable R2 value of 0.9919 and a root mean square error of calibration of 0.0488. Consequently, this methodology holds potential for effectively monitoring the quantity of miconazole nitrate in various formulations, including both conventional and innovative ones. Introduction The imidazole family member miconazole nitrate has broad-spectrum antifungal activity by damaging the fungal cell membranes, which leads to the lysis of their cell walls [1]. Due to its limited solubility (<1 µg/mL) and substantial hepatic alteration, it has little systemic effectiveness [2]. Miconazole nitrate has been shown to be a potent topical antifungal agent against dermatophytes and yeasts. Therefore, miconazole-based preparations are presently utilized for topical administration and are commercially accessible in a range of topical formulations, including creams, lotions, liquid sprays, and suppositories specifically designed for vaginal application [3]. According to a previous investigation [4], the effectiveness of miconazole in treating cutaneous diseases via topical applications was found to be limited by inadequate skin penetration. Consequently, there is a need for an innovative approach that can localize the drug to the targeted area to enhance the therapeutic efficacy. This approach should deviate from traditional dosage forms and employ a topical mode of delivery. To address these challenges, researchers have explored the use of nanotechnology-based nanocarriers, such as liposomes, ethosomes, and niosomes [5][6][7]. However, these vesicular carrier systems have not been able to achieve the desired outcome of completely eradicating infections from the viable epidermis due to issues of physical, chemical, and storage instability within the formulations. 
Therefore, a microemulsion system was selected for this study due to its ease of preparation, ability to incorporate hydrophobic drug, stability, and most importantly, its greater physical stability in plasma compared to other vesicular systems. A microemulsion refers to a stable and homogeneous system comprising water, oil, and surfactant. It exhibits isotropic properties and is typically transparent or translucent. This formulation commonly incorporates a cosurfactant and has gained attention as a promising drug delivery system. Its advantageous characteristics include enhancing the solubility and bioavailability of drugs that possess limited water solubility [8,9]. In previous studies, microemulsion has been employed as a carrier for the topical delivery of miconazole. Ofokansi et al. [10] compared the in vitro antifungal effectiveness of a miconazole nitrate-loaded microemulsion stabilized with poloxamer and a commercial topical miconazole (Fungusol ® ) solution against clinically isolated Candida albicans. The new formulation showed an increase in in vitro antifungal activity, indicating that poloxamerstabilized microemulsion may be a potential vehicle for improving topically administered miconazole nitrate. Later, Shahzadi et al. [11] evaluated the in vitro antifungal activity against C. albicans and drug release profiles of a miconazole nitrate-loaded microemulsion and a reference cream. The microemulsion preparation showed superior results in in vitro antifungal activity, drug release, and flux compared to the cream preparation. Sadique et al. [12] also compared the in vitro antifungal effectiveness against C. albicans of a miconazole nitrate-loaded microemulsion and 1 and 2% commercially available miconazole nitrate formulations (Miconit ® ). The final selected microemulsion formulation showed enhanced antifungal activity compared to the commercially available formulations, and it was concluded that microemulsion is a promising carrier system for topical miconazole nitrate delivery. The aim of this research was to create and evaluate a microemulsion formulation as a topical delivery system to enhance the delivery of miconazole nitrate. The microemulsions containing miconazole nitrate were prepared by employing the spontaneous emulsification method, with oleic acid and deionized (DI) water used as oil and aqueous phases, respectively. This study investigated the effects of the cosurfactant type and weight ratio of surfactant and cosurfactant mixtures (K m ) on the preparation and physicochemical characteristics of the obtained microemulsion formulations, which differed from the previous studies [10][11][12]. The surfactant phase was made up of a mixture of polyoxyethylene sorbitan monooleate (PSM) and various cosurfactants (ethanol, 2-(2-ethoxyethoxy) ethanol (DEGEE), or 2-propanol) at weight ratios of 1:1, 2:1, and 3:1. The formulations' physicochemical properties, such as physical appearance, conductivity, pH, viscosity and rheological behavior, and particle size, were examined. The selected microemulsion's influence on ex vivo skin permeation and in vitro antifungal activity against C. albicans was assessed and compared with a conventional cream. The formulation's physicochemical stability was examined by storing it at room temperature (30 ± 2 • C) for a period of 3 months. Furthermore, a non-destructive analysis utilizing near-infrared (NIR) spectroscopy was created for quantifying miconazole nitrate-loaded microemulsion formulations. 
In the pharmaceutical industry, NIR spectroscopy is a widely used technique because it provides valuable insights into the chemical composition and physical properties of samples [13]. NIR is widely used in conjunction with fiber optical probes to identify raw materials. Furthermore, several studies have suggested the use of NIR spectroscopy to analyze drug substances and excipients quantitatively within pharmaceutical formulations through chemometric multivariate calibration and partial least-squares regression (PLSR) [14,15]. Although the literature extensively covers quantitative NIR analyses of solid dosage forms, there has been relatively less focus on the analysis of liquid formulations [14]. In this study, six PLSR models for the quantitative determination of miconazole nitrate-loaded microemulsions were developed using benchtop NIR spectroscopy with reference values from high-performance liquid chromatography (HPLC) method. To our understanding, this represents the initial establishment of a benchtop NIR spectroscopic method for quantifying miconazole nitrate-loaded microemulsion. . During the course of the experiments, DI water was utilized. Additionally, other chemicals and reagents of analytical grade or higher were procured from regional suppliers and utilized without the need for additional purification. Selection of Microemulsion Components The miconazole nitrate solubility in different oils, such as oleic acid and caprylic/capric triglyceride; surfactants, including PSM and SM; and cosurfactants, i.e., ethanol, DEGEE, and 2-propanol, was assessed using the shake-flask method [16]. Separately, 2 g of oil, surfactant, or cosurfactant was placed into stoppered vials with a capacity of 5 mL, and an excess amount of miconazole nitrate was added to each vial. To achieve equilibrium, the sample mixtures underwent mechanical stirring in a shaking water bath set at a rate of 100 strokes per minute (spm) for a duration of 72 h at a temperature of 30 ± 2 • C. The equilibrated samples were centrifuged at a speed of 9503× g for a duration of 10 min. The resulting clear supernatants were carefully collected and diluted with suitable solvents. Once filtered through a polytetrafluoroethylene (PTFE) membrane filter with a diameter of 0.45 µm, the filtrates were subjected to analysis using HPLC. This solubility testing procedure was repeated three times. The oil and surfactant that exhibited the maximum solubility of miconazole nitrate were employed for further studies. Construction of Pseudo-Ternary Phase Diagram To construct pseudo-ternary phase diagrams for unloaded microemulsions, oleic acid and PSM were chosen as the oil and surfactant, respectively, based on solubility data. The surfactant phase consisted of PSM combined with different cosurfactants (ethanol, DEGEE, or 2-propanol) at weight ratios of 1:1, 2:1, and 3:1, while the aqueous phase comprised DI water. For each phase diagram, the surfactant mixture was put into the oil phase, resulting in weight ratios of 9:1, 8:2, 7:3, 6:4, 5:5, 4:6, 3:7, 2:8, and 1:9 for the surfactant mixture to oleic acid. A magnetic stirrer was used to thoroughly mix these mixtures until a homogenous dispersion was achieved. Following the addition of the aqueous phase, various concentrations ranging from 0 to 90% in 10% weight intervals were obtained. The systems were agitated for 5 min using a magnetic stirrer before being kept at 30 ± 2 • C for a duration of 24 h to establish equilibrium. 
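As a purely illustrative aid (not part of the published method), the short Python sketch below enumerates the nominal compositions implied by the mixing scheme just described: surfactant-mixture-to-oleic-acid weight ratios of 9:1 through 1:9, with water added from 0 to 90% w/w in 10% steps. The function name and output format are our own assumptions.

# Illustrative sketch (not from the study): enumerate the nominal compositions
# screened when building one pseudo-ternary phase diagram, following the text:
# Smix (surfactant mixture) to oleic acid ratios of 9:1 ... 1:9, then water
# added at 0-90% w/w in 10% steps. All values are % w/w of the final mixture.
def composition_grid():
    rows = []
    smix_oil_ratios = [(9, 1), (8, 2), (7, 3), (6, 4), (5, 5),
                       (4, 6), (3, 7), (2, 8), (1, 9)]
    for smix, oil in smix_oil_ratios:
        for water in range(0, 100, 10):      # 0, 10, ..., 90 % w/w water
            remainder = 100 - water          # share left for Smix + oil
            rows.append({
                "smix_pct": remainder * smix / (smix + oil),
                "oil_pct": remainder * oil / (smix + oil),
                "water_pct": float(water),
            })
    return rows

if __name__ == "__main__":
    grid = composition_grid()
    print(len(grid), "nominal compositions")  # 9 ratios x 10 water levels = 90
    print(grid[0], grid[-1])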
The resulting formulations exhibited transparent properties and were classified as microemulsions. Phase separation was evaluated by visual observation of turbidity. In order to determine the presence of microemulsions, gels, or two-phase regions, additional visual observations were performed. The type of microemulsion was then identified through a dilution test using either oleic acid or a brilliant blue aqueous solution. It was expected that the water-in-oil (w/o) microemulsions would exhibit miscibility with oleic acid, but immiscibility with the brilliant blue aqueous solution. Conversely, the oil-in-water (o/w) microemulsions were predicted to display opposite characteristics. The microemulsion area of the system was built on a triangle graph using SigmaPlot ® 10.0 software. The extent to which the microemulsion covered the overall percentage of the phase diagram was determined using a cut-and-weigh method [17]. However, no efforts were made to ascertain the regions associated with other structural arrangements. Preparation of Miconazole Nitrate-Loaded Microemulsions Regarding the phase diagram's microemulsion area, blank microemulsion formulations were selected and assessed for their ability to solubilize miconazole nitrate in order to evaluate the highest drug loading capacity. Two grams of blank microemulsions or water were placed in separate 5 mL capacity stoppered vials together with an excess amount of miconazole nitrate. Until equilibrium was reached, the sample mixtures were subjected to mechanical stirring in a shaking water bath running at a speed of 100 spm for 72 h at a temperature of 30 ± 2 • C. Following equilibration, the samples were centrifuged at 9503× g for 10 min, resulting in clear supernatants. These supernatants were diluted with proper solvents and then filtered through a PTFE membrane filter. The filtrates were subsequently analyzed using HPLC. The solubility tests were conducted three times. Following the determination of miconazole nitrate's solubility in blank microemulsions, the drug was accurately weighed and loaded into each unloaded microemulsion base using a magnetic stirrer until a homogenous mixture was achieved. Following that, the created miconazole nitrate-loaded microemulsions at a 1% w/w concentration were maintained at a temperature of 30 ± 2 • C for a period of 24 h to reach equilibrium before more testing. Subsequently, the microemulsions were subjected to characterization and compared with their corresponding blank formulations. The physical appearance of all samples was visually evaluated, taking into consideration factors such as clarity, color, and homogeneity. Samples were inspected with a cross-polarized light microscope (Eclipse E200 Microscope, Nikon Corporation Instruments Company, Tokyo, Japan) at a magnification of 10 × 100 to confirm the microemulsion's isotropic character. A drop of the sample was put between a cover slip and a glass slide and examined under cross-polarized light. These experiments were conducted at 30 ± 2 • C. Conductivity and pH Measurement To determine whether the samples were oil-continuous or aqueous-continuous microemulsions, the electrical conductivity of the samples was assessed using a conductivity tester (ECTestr TM 11, Eutech Instruments Pte. Ltd., Singapore), and the results were compared against the conductivity values of both oleic acid and water. The apparent pH values of the samples were determined using a pH meter (CyberScan pH110, Eutech Instruments Pte. Ltd.). 
At 30 ± 2 °C, each measurement was made in triplicate. Viscosity and Rheological Behavior Measurement Using a rotational rheometer equipped with a cylindrical stainless steel measurement system (φ 34 mm ISO 3219 Z34 DIN, HAAKE TM RotoVisco TM 1, Thermo Fisher Scientific GmbH, Dreieich, Germany), the viscosity and rheological behavior of the samples were assessed at various shear rates. The rheological data obtained from the viscometer were derived and calculated using the HAAKE RheoWin 4 software. To maintain a temperature of 30 ± 2 °C, a Thermo Scientific HAAKE EZ Cool 80 circulator was utilized. The sample volume for the tests was 40 mL. By building a rheogram plotting shear stress against shear rate, the rheological behavior of the systems was investigated. The experiments yielded the apparent viscosity data at a shear rate of 1000 s −1 and a controlled temperature of 30 ± 2 °C. The tests were carried out in triplicate. Particle Size Measurement The formulations were subjected to analysis using a Zetasizer Nano-ZS instrument (Malvern Instruments Ltd., Worcestershire, UK) to determine the average droplet size (z-ave) and polydispersity index (PDI). The software automatically calculated the measurement position inside the cuvette, and the measurements were taken at a detection angle of 173°. The Zetasizer software used the Stokes-Einstein relationship to determine the sample droplet size. The studies were done in triplicate at 30 ± 2 °C without dilution of the samples. Thermodynamic Stability of Miconazole Nitrate-Loaded Microemulsions The samples underwent a centrifugation test using a centrifuge (Hettich Universal 30F, Andreas Hettich GmbH & Co. KG, Osterode am Harz, Germany) at 21,382× g for 1 h at a temperature of 30 ± 2 °C. The degree of phase separation, visually evaluated, was used as an indication of the physical stability of the formulations following centrifugation. Only the formulations that demonstrated no phase separation were considered for the freeze-thaw stress test. The formulations underwent three complete cycles of freezing at −20 °C for 24 h followed by thawing at 30 ± 2 °C for 24 h each. The assessment of physical instability, based on the degree of phase separation, was conducted after the completion of the third cycle. Ex Vivo Skin Permeation Study of the Selected Miconazole Nitrate-Loaded Microemulsion Miconazole nitrate permeation from the chosen microemulsion and a conventional cream (1% w/w miconazole nitrate cream) was examined ex vivo through a newborn pig skin membrane. Thermo-regulated vertical Franz diffusion cells were employed, maintaining a temperature of 37 ± 1 °C to achieve a skin surface temperature of 32 ± 1 °C. Since miconazole nitrate has limited water solubility (<1 µg/mL), the use of an additive to enhance the drug release under sink conditions is necessary. In this study, a mixture of PBS (0.01 M, pH 7.4) and methanol at a ratio of 80:20 (%v/v) was used as the receptor medium in order to enhance the drug solubility and achieve the sink condition during the experiment. In addition, the medium was proven to maintain the integrity of the pig skin, and the use of hydroalcoholic solutions (e.g., methanol, ethanol) as the receptor medium has been accepted and applied in several other publications [18][19][20][21]. Ten milliliters of the medium were degassed before use. 
The skin membrane used in the study was taken from the abdominal area of newborn pigs sourced from a regional pig farm in Nakhon Pathom Province, Thailand, which operates under the supervision of the Department of Livestock Development. The newborn pigs weighed between 1.4 and 1.8 kg and died naturally shortly after birth. As this source of skin is categorized as slaughter waste, it is exempt from approval by the Institutional Animal Care and Use Committee, Faculty of Pharmacy, Mahidol University. In addition, the skins were collected from carcasses, whereas the ethics committee is concerned only with live animals, as indicated by the definition of "animals" in "The Animals for Scientific Purposes Act, B.E. 2558 (A.D. 2015), Thailand" [22]. Consent was verbally obtained from slaughterhouse owners to collect all of the samples from carcasses. The epidermal hair in the abdominal area was clipped with an electric hair clipper. The hair was carefully removed as close as possible to the skin without damaging it. The skin was then rinsed under running tap water in order to remove hair from the surface, and then excised with a scalpel and a number 24 surgical blade. The subcutaneous fat and underlying tissues were carefully removed from the dermal surface. Subsequently, the skin pieces were rinsed with PBS, blotted dry with paper towels, and carefully wrapped in aluminum foil. The prepared membranes were then stored at −20 °C until further use [23]. Before mounting the membrane on the receptor compartment, it was allowed to thaw and hydrate by soaking it in PBS overnight at room temperature (30 ± 2 °C). The membrane was arranged in a manner where the stratum corneum was oriented in an upward-facing position. For the experiments, a sample weighing 0.5 g was carefully applied onto the skin membrane, covering an effective area of 1.77 cm 2 . At specified time intervals (0.5, 1, 2, 4, 6, 12 and 24 h), a volume of 0.5 mL of receptor fluid was withdrawn, and an equal volume of fresh receptor fluid was immediately added. The collected samples underwent filtration using a PTFE membrane filter. Subsequently, the filtrates were subjected to HPLC analysis to determine the amount of permeated drug. The cumulative permeated amount (Q t ) of miconazole nitrate was determined using Equation (1): Q t = C t V r + Σ(i = 1 to t − 1) C i V s (1), where C t is the active compound concentration in the receptor fluid at each sampling time, C i is the active compound concentration of the i-th previously withdrawn sample, and V r and V s are the volumes of the receptor fluid and the sample, respectively. The permeation profiles were generated by plotting the cumulative quantity of miconazole nitrate that permeated through the skin membrane per unit area of the membrane over time. The permeation rates were then determined by interpolating the permeation profiles using linear regression. After the skin permeation study, the miconazole nitrate that had remained in the skin membrane was extracted using the previously established method [24]. The extraction procedure involved wiping the surface of each skin membrane with cotton soaked in a specific volume of PBS to remove any residual sample. Subsequently, the skin was cut into small pieces, homogenized in methanol, and filtered. The skin permeation and retention studies were done at least three times using skin samples from a minimum of three newborn pigs. 
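To illustrate the bookkeeping in Equation (1) and the flux estimate described above, the Python sketch below computes the cumulative permeated amount per unit area from sampled receptor concentrations and fits the flux by linear regression. The concentration values, volumes, area, and time points are placeholder numbers, not data from this study.

# Illustrative sketch of Equation (1): cumulative permeated amount corrected for
# receptor-fluid sampling, then flux from the slope of the later time points.
# All numbers below are placeholders, not measured data from the study.
import numpy as np

def cumulative_permeation(conc_ug_ml, v_receptor_ml, v_sample_ml, area_cm2):
    """Q_t/A = (C_t*V_r + sum over earlier samples of C_i*V_s) / A."""
    q_per_area = []
    for t, c_t in enumerate(conc_ug_ml):
        replaced = sum(conc_ug_ml[:t]) * v_sample_ml  # drug removed in earlier samples
        q_per_area.append((c_t * v_receptor_ml + replaced) / area_cm2)
    return np.array(q_per_area)

if __name__ == "__main__":
    times_h = np.array([0.5, 1, 2, 4, 6, 12, 24])
    conc = [0.2, 0.5, 1.1, 2.4, 3.8, 7.9, 15.0]       # µg/mL in receptor fluid
    q = cumulative_permeation(conc, v_receptor_ml=10.0, v_sample_ml=0.5, area_cm2=1.77)
    # Flux (µg/cm²/h) from the slope of the approximately linear later points
    slope, intercept = np.polyfit(times_h[2:], q[2:], 1)
    print("Q_t per area (µg/cm²):", np.round(q, 2))
    print(f"estimated flux ≈ {slope:.2f} µg/cm²/h")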
In Vitro Antifungal Activity of the Selected Miconazole Nitrate-Loaded Microemulsion Antifungal activities of the selected miconazole nitrate-loaded microemulsion and conventional cream (1% w/w miconazole nitrate cream) were evaluated against the standard strain C. albicans ATCC 10231 using the cup-plate method [25]. The Sabouraud dextrose agar (SDA) medium was prepared by weighing the required SDA amount and dissolving it in water, followed by sterilization in an autoclave at 121 °C for 15 min. Once the melted nutrient medium had cooled to 60 °C under sterile conditions, it was poured into Petri dishes. The dishes were then inoculated with the microorganism being tested. After the plates had solidified, a sterile borer was used to create a well. Each well received the tested samples, which were then left to stand for 1 h. After that, the plates were placed in an incubator (Binder, Inc., Bohemia, NY, USA) and kept at 37 ± 1 °C. Using a graduated scale, the zone of inhibition (ZOI, mm) was determined after 24 h. The results were reported as mean ± standard deviation (SD), with the test being run in triplicate. Stability Study of the Selected Miconazole Nitrate-Loaded Microemulsion After storing the selected miconazole nitrate-loaded microemulsion at room temperature (30 ± 2 °C) for a duration of 3 months, its stability was assessed. The microemulsion formulation was collected at predetermined intervals, including immediately following preparation and at 1, 2, and 3 months, to evaluate its physical appearance, including color, phase separation, and clarity. Utilizing HPLC, the chemical stability of miconazole nitrate in the formulation was evaluated. At 30 ± 2 °C, each measurement was carried out three times. An HPLC system equipped with an SPD-20A UV/VIS detector was used to accomplish the quantitative analysis of miconazole nitrate. As the stationary phase, a guard column-equipped Hypersil GOLD TM column (C18; 150 × 4.6 mm, particle size 5 µm; Thermo Fisher Scientific Inc., Carlsbad, CA, USA) was used. Elution was performed at room temperature (30 ± 2 °C) using a mobile phase composed of 0.5% w/v ammonium acetate and methanol (20:80, v/v) at a flow rate of 1 mL/min. UV detection was done at 263 nm with a sample injection volume of 20 µL. In accordance with the International Conference on Harmonization (ICH) guidelines: Text and methodology (Q2(R1)) [26], the HPLC method was validated to assess the analytical procedure for linearity, precision, accuracy, limit of detection (LOD), and limit of quantitation (LOQ). Utilizing three series of five different working standard solution concentrations, the linearity of the method was assessed over concentrations ranging from 1 to 20 µg/mL. The standard curve's slope, y-intercept, and regression coefficient (r) were determined. By examining three replicates of three distinct spiked standard concentrations (1.5, 7.5, and 15 µg/mL) and computing the percent recovery, accuracy was assessed. Both intra-day (repeatability) and inter-day precision were evaluated to determine precision. Triplicate measurements of three different concentrations (1.5, 7.5, and 15 µg/mL) taken within a day were used to assess the repeatability, whereas the same measurements taken on three different days were used to assess the inter-day precision. The relative standard deviation percentage (%RSD) was calculated. A signal-to-noise ratio of 3 was used to determine the LOD, while a ratio of 10 was used to calculate the LOQ. 
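The validation metrics named above (calibration linearity, %RSD, and signal-to-noise-based LOD/LOQ) follow standard formulas; the sketch below shows these calculations on placeholder numbers. The peak areas, baseline-noise estimate, and replicate values are illustrative assumptions, not the study's measurements.

# Illustrative sketch of the standard validation calculations named in the text:
# calibration-curve regression, %RSD, and S/N-based LOD/LOQ.
# All numeric inputs are placeholders, not the study's data.
import numpy as np

def percent_rsd(values):
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

if __name__ == "__main__":
    # Calibration: peak area vs. standard concentration (µg/mL)
    conc = np.array([1.0, 5.0, 10.0, 15.0, 20.0])
    area = np.array([1010.0, 5630.0, 11520.0, 17310.0, 23290.0])
    slope, intercept = np.polyfit(conc, area, 1)
    r = np.corrcoef(conc, area)[0, 1]
    print(f"slope={slope:.1f}, intercept={intercept:.1f}, r={r:.4f}")
    # Precision: %RSD of replicate determinations at one concentration
    print(f"%RSD = {percent_rsd([7.48, 7.52, 7.45]):.2f}")
    # LOD/LOQ from signal-to-noise, approximated as (3 or 10) * noise / slope
    noise = 65.0  # placeholder baseline noise (peak-area units)
    print(f"LOD ≈ {3 * noise / slope:.2f} µg/mL, LOQ ≈ {10 * noise / slope:.2f} µg/mL")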
Preparation of Calibration and Validation Samples Six concentrations of miconazole nitrate-loaded microemulsion (0.00, 0.50, 0.75, 1.00, 1.25, and 1.50% w/w) were prepared. Each microemulsion concentration was further divided into 10 sub-samples, resulting in a total of 60 sub-samples. All sub-samples were quantitatively determined for their actual concentration by using the HPLC method described in Section 2.10, and their NIR spectra were obtained as per the conditions described in Section 2.11.2. NIR Spectroscopic Measurement The NIR spectra of all sub-samples and 10 placebo samples were collected using the NIRFlex ® N-500 NIR spectroscope (BÜCHI Labortechnik AG, Flawil, Switzerland) in transflectance mode. The NIRFlex Solids measurement cell, an add-on for measurements of various sample types ranging from solids to liquids, was utilized for microemulsion samples. About 5 mL of microemulsion was placed in a Petri dish and covered with the reference plate, and the NIR spectrum was collected with a resolution of 4 nm between 4000-10,000 cm −1 (1000-2500 nm). Duplicate measurements were performed for each sub-sample, and the average NIR spectrum was chosen for further investigation. PLSR Modelling Forty-two of the 60 sub-samples were randomly selected and combined with the 10 placebo samples for use as the calibration set. The remaining 18 sub-samples were designated as the validation set. The raw NIR spectral data were analyzed using the Unscrambler ® program (Aspen Tech, Bedford, MA, USA) to construct the PLSR determination model. Various pretreatment methods, including first (1D) and second (2D) derivatives, standard normal variate (SNV), area normalization, and orthogonal signal correction (OSC), were applied to the raw NIR spectral data. Multiple PLSR models were constructed using raw spectral data and pretreated data, taking wavelength selection into account to obtain the optimum model. Model parameters, such as the R 2 model, root mean square error of calibration (RMSEC, Equation (2)), R 2 Pearson, root mean square error of prediction (RMSEP, Equation (3)), and bias (Equation (4)), were considered to select the most optimal PLSR model: RMSEC = √[Σ(ŷ i − y i ) 2 / n c ] (2), computed over the calibration set of n c samples; RMSEP = √[Σ(ŷ i − y i ) 2 / n] (3); bias = Σ(ŷ i − y i ) / n (4), where y i and ŷ i are the reference and predicted values for sample i from the validation set, respectively, and n is the number of samples in the validation set. The highest R 2 values and lowest error parameters, such as RMSEC, RMSEP, and bias, were model selection criteria. 
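As a concrete illustration of the model-selection metrics in Equations (2)-(4), the Python sketch below fits a one-latent-factor PLS regression and computes RMSEC, RMSEP, and bias with scikit-learn and NumPy. The synthetic spectra and concentrations are stand-ins for the NIR data, and this is not the Unscrambler workflow used in the study.

# Illustrative sketch (not the study's Unscrambler workflow): fit a PLSR model
# with one latent factor and compute RMSEC, RMSEP, and bias as in Eqs. (2)-(4).
# The spectra and concentrations are synthetic stand-ins for the NIR data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.ravel(y_pred) - np.ravel(y_true)) ** 2)))

def bias(y_true, y_pred):
    return float(np.mean(np.ravel(y_pred) - np.ravel(y_true)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_cal, n_val, n_points = 52, 18, 300
    y_cal = rng.choice([0.0, 0.5, 0.75, 1.0, 1.25, 1.5], size=n_cal)
    y_val = rng.choice([0.5, 0.75, 1.0, 1.25, 1.5], size=n_val)
    # Synthetic "spectra": a concentration-dependent band plus noise
    band = rng.normal(size=n_points)
    X_cal = np.outer(y_cal, band) + 0.05 * rng.normal(size=(n_cal, n_points))
    X_val = np.outer(y_val, band) + 0.05 * rng.normal(size=(n_val, n_points))
    model = PLSRegression(n_components=1).fit(X_cal, y_cal)
    rmsec = rmse(y_cal, model.predict(X_cal))  # Eq. (2), calibration set
    rmsep = rmse(y_val, model.predict(X_val))  # Eq. (3), validation set
    b = bias(y_val, model.predict(X_val))      # Eq. (4)
    print(f"RMSEC={rmsec:.4f}, RMSEP={rmsep:.4f}, bias={b:.4f}")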
Oleic acid exhibited the greatest solubility for miconazole nitrate (0.012 ± 0.003% w/w) among the various oils, followed by caprylic/capric triglyceride. Among the surfactants, PSM showed the maximum solubility (0.654 ± 0.066% w/w), followed by SM. For the cosurfactants, 95% ethanol showed the highest solubility (1.164 ± 0.026% w/w), followed by DEGEE and 2-propanol, respectively. Based on the solubility study of miconazole nitrate, oleic acid and PSM could be the most suitable oil and surfactant for microemulsion development. Construction of Pseudo-Ternary Phase Diagram Oleic acid is frequently utilized in topical preparations as an oil phase and a permeation enhancer [27]. In order to reduce skin irritation and system charge disruption, nonionic surfactants were chosen. PSM, the chosen surfactant in this study, has been previously used in transdermal formulations [28]. In addition, short-chain alcohols such as ethanol and propanol were included in the surfactant phase as cosurfactants to increase the microemulsion region in the phase diagrams. These alcohols have the ability to reduce the hydrophilicity of the polar solvent as well as solubilize high water contents and promote the formation of microemulsion [29,30]. Using a mixture of PSM and various cosurfactants (in ratios of 1:1 to 3:1, by weight) as the surfactant phase, oleic acid as the oil phase, and DI water as the aqueous phase, Figure 1 depicts the pseudo-ternary phase diagrams, illustrating the transparent microemulsion area. Based on visual observation, the remaining portion of the phase diagram displayed turbidity and exhibited typical emulsion characteristics. The largest microemulsion area was produced by including 2-propanol as a cosurfactant, as shown in Figure 1. Furthermore, compared to formulations using ethanol as the cosurfactant, those using propanol had larger microemulsion areas. The results of the study indicate that the microemulsion area expanded as the chain length of the short-chain alcohols increased, progressing from ethanol to 2-propanol [31]. Additionally, no statistically significant difference was observed in the overall percentage of microemulsion area in the phase diagram when comparing formulations comprising 95% ethanol and DEGEE as the cosurfactant (p > 0.05). Solubility of Miconazole Nitrate in Blank Microemulsions The miconazole nitrate concentration used in the microemulsion formulations was decided based on its solubility in the different blank microemulsions. 
The values of equilibrium solubility are shown in Table 2, which demonstrates that the amounts of miconazole nitrate entrapped in blank microemulsions were significantly greater (p < 0.05) compared to DI water. The increased drug solubility in microemulsions can be explained by their ability to be solubilized within the interfacial film that forms between the water and oil phases. This interfacial film provides additional sites for the drug to dissolve, resulting in enhanced drug solubility. Based on the solubility study of miconazole nitrate in microemulsions, a concentration of 1% w/w was selected for the preparation of miconazole nitrate-loaded microemulsions. Characterisation of Miconazole Nitrate-Loaded Microemulsions All microemulsions, both unloaded and loaded with miconazole nitrate, were clear, yellowish liquids. They were optically isotropic because they lacked birefringence and appeared uniformly dark under a cross-polarized light microscope. A cross-polarized light microscope is a valuable tool for distinguishing between liquid crystals and microemulsions due to their visual similarity, particularly with lamellar and hexagonal liquid crystals. For lamellar and hexagonal liquid crystals, birefringence can be seen using a cross-polarized light microscope; however, microemulsions do not exhibit birefringence [32]. A dilution test was used to identify the formulation type. The results showed that a brilliant blue aqueous solution, but not oleic acid, could be used to dilute all o/w microemulsions. The formulation's inclusion of miconazole nitrate did not alter the kind of microemulsion in any of the unloaded microemulsions. The formulations showed a conductivity of 7.5-101.5 µS/cm, indicating that the type of microemulsion formed was o/w (Table 3). Miconazole nitrate incorporation significantly increased the electrical conductivity of the unloaded microemulsions. The unloaded microemulsions had apparent pH values ranging from 4.6 to 5.5. All unloaded microemulsions' pH values were slightly reduced by the addition of miconazole nitrate (p > 0.05) ( Table 3). Table 3 shows the apparent viscosity values for all microemulsions, both unloaded and loaded, at a shear rate of 1000 s −1 and a temperature of 30 ± 2 • C. Newtonian flow behavior was present in all microemulsion samples [33]. Miconazole nitrate incorporation had a minor impact on the microemulsion's viscosity (p > 0.05), but it had no overall impact on the flow behavior. Unloaded microemulsions had typical droplet sizes varying between 49 and 372 nm. Each formulation's droplet diameter was significantly impacted by the addition of miconazole nitrate (p < 0.05). In microemulsions containing miconazole nitrate, the average droplet size ranged from 54 to 404 nm (Table 3). Microemulsions typically have droplet sizes between 10 and 140 nm [34]. Larger sizes could, however, occur and be related to the dynamic characteristics of microemulsions. For instance, a microemulsion made of clove oil, polyoxyethylene sorbitan monolaurate, propylene glycol, water, and ketoprofen had mean droplet sizes of 396 nm and a highly variable size distribution [35]. It might be argued that because the creation of microemulsions requires negative Gibbs free energy, they can arise spontaneously, which results in high entropy and dynamic properties of the microemulsion system [36]. 
Thermodynamic Stability of Miconazole Nitrate-Loaded Microemulsions Microemulsions are characterized by their thermodynamic stability, which sets them apart from emulsions that are kinetically stable and prone to phase separation over time [37]. None of the microemulsions, whether unloaded or loaded with miconazole nitrate, exhibited phase separation after the centrifugation test. Phase separation was observed only in the unloaded and miconazole nitrate-loaded microemulsions that contained PSM and DEGEE in a 3:1 ratio as the surfactant/cosurfactant after the third cycle of the freeze-thaw stress test. This indicates that these particular formulations were not thermodynamically stable, as indicated by the presence of phase separation. Ex Vivo Skin Permeation Study of the Selected Miconazole Nitrate-Loaded Microemulsion Most of the miconazole nitrate-loaded microemulsions exhibited good physicochemical properties. Among all formulations, the miconazole nitrate-loaded microemulsion containing PSM and ethanol in a 1:1 weight ratio as surfactant/cosurfactant was chosen for the ex vivo skin permeation study compared with the conventional cream due to its smaller droplet size. The results of the drug permeation study of the microemulsion and conventional cream at 1% w/w miconazole nitrate are illustrated in Figure 2. At 24 h, the mean cumulative amounts of the drug permeated across pig skin were determined to be 87.6 ± 5.8 and 16.1 ± 2.9 µg/cm 2 from the microemulsion and conventional cream, respectively. The permeation potential of the conventional cream was less than that of the microemulsion due to the semisolid cream base. The calculated permeation flux values were determined to be 5.4 ± 0.2 and 2.4 ± 0.1 µg/cm 2 /h for the microemulsion and conventional cream, respectively. The drug depositions from the microemulsion and conventional cream were 559.1 ± 25.1 and 284.9 ± 12.1 µg/cm 2 , respectively. Therefore, the selected miconazole nitrate-loaded microemulsion illustrated higher cumulative permeation, permeation flux, and drug deposition (retention) than the conventional cream at the end of 24 h. This would be attributable to the various components of the microemulsion [38,39]. In addition, this nanometer-sized colloidal carrier increases drug penetration into the skin; the drug that has penetrated the skin concentrates there and stays localized for an extended period, facilitating targeted drug delivery to the skin [40]. Figure 2. Permeation study across pig skin for the miconazole nitrate-loaded microemulsion containing polyoxyethylene sorbitan monooleate and ethanol at a weight ratio of 1:1 as surfactant/cosurfactant and the conventional cream at 1% w/w miconazole nitrate using vertical Franz diffusion cells (cumulative drug permeation). The experiment was done at least three times, using skin samples from a minimum of three newborn pigs. In Vitro Antifungal Activity of the Selected Miconazole Nitrate-Loaded Microemulsion Both acute and chronic cutaneous candidiasis are brought on by a type of Candida. Therefore, research into in vitro antifungal activities against these primary causative factors is necessary. The selected miconazole nitrate-loaded microemulsion and conventional cream revealed apparent zones of inhibition around the well against the standard strain C. albicans ATCC 10231. The result revealed that there was a substantial increase in the in vitro inhibition of C. albicans by the microemulsion formulation (ZOI of 23.3 ± 1.5 mm) compared with the conventional cream (ZOI of 17.3 ± 0.6 mm) (p < 0.05) (Figure 3). The nanometer-sized particles and the high miconazole nitrate solubility in the presence of the surfactant and cosurfactant may be responsible for the increased antifungal activity of the microemulsion formulation. Nanometer-sized particles, owing to their larger surface area, exhibit enhanced permeation capabilities that allow them to easily penetrate the cell walls of fungal strains [41]. Stability Study of the Miconazole Nitrate-Loaded Microemulsion The optimal miconazole nitrate-loaded microemulsion formulation demonstrated good physicochemical stability at room temperature for 3 months. There was no change in physical appearance between t = 0 and t = 3 months, and the remaining percentage of miconazole nitrate in the formulation was greater than 95.0%. Analysis of Miconazole Nitrate by HPLC The linear equation for the calibration curve of miconazole nitrate was y = 1167.3x − 189.23, demonstrating a strong linear correlation coefficient of 1. The %RSDs of intra-day and inter-day precision were less than 2.0%. The mean recovery for accuracy determination was 100.4 ± 0.3%. The LOQ and LOD values for miconazole nitrate were 0.51 and 0.17 µg/mL, respectively. NIR Spectroscopic Measurement Combined with PLSR Model The development of a non-destructive NIR spectroscopic analysis for the quantitative determination of miconazole nitrate-loaded microemulsions is an innovative and significant step towards the efficient and reliable analysis of these formulations without requiring sample preparation. The microemulsion contains an infinitesimal amount of miconazole nitrate, which posed a challenge to the development of the NIR analysis. Several PLSR models were constructed with both raw and pretreated NIR data, with respect to the actual values acquired from the HPLC method. Various parameters, including the R 2 model, R 2 Pearson, RMSEP, RMSEC, and bias, were utilized to choose the optimum model. 
The R 2 model ranges between 0 and 1, where a value of 1 indicates a perfect fit of the model to the data and a value of 0 suggests no relationship between the model and the data; it is used to assess the model's goodness-of-fit. A high value for the R 2 Pearson denotes a good fit between the predicted and actual values. RMSEP measures the model's accuracy, with a small value implying good predictive ability. A good PLSR model should have a low RMSEC, indicating good accuracy in predicting the calibration set. Finally, a low bias indicates an unbiased and accurate model between the predicted and actual values. The results indicate that the most appropriate PLSR model was obtained using the OSC pretreatment method, as it yielded a high R 2 model of 0.9919 and a high R 2 Pearson of 0.9958. Additionally, it had the lowest RMSEC of 0.0488 and RMSEP of 0.0390 among all the pretreatment methods, suggesting good predictive power and generalizability to new samples. The relatively low bias value of 0.0061 further indicates that there is no significant systematic bias in the predictions (Table 4 and Figure 4). Although a good PLSR model could be derived from the raw data without any pretreatment methods (model 1 in Table 4), OSC data pretreatment improved model accuracy and prediction efficiency. Furthermore, the principal component analysis of the full spectral data matrix with OSC pretreatment showed better sample grouping by miconazole nitrate concentrations than the original spectral data without pretreatment (Figure 5). The 1D, 2D, SNV, and area normalization data pretreatments could also provide acceptable PLSR models, but most of these did not show dramatic improvement in the R 2 model, R 2 Pearson, RMSEC, RMSEP, and bias compared with the original-data and OSC-pretreated models. These findings suggest that OSC pretreatment is the most appropriate method for deriving the PLSR model with high accuracy and prediction efficiency, as well as enhancing the separation and discrimination of the sample groups. Conclusions Miconazole nitrate has been a promising antifungal agent for treating superficial and deep skin infections, primarily caused by Candida strains. However, its limited water solubility has posed a significant challenge for developing effective formulations that can penetrate the skin topically. To enhance topical delivery, miconazole-loaded microemulsions were prepared via spontaneous emulsification. In vitro and ex vivo experiments were conducted to assess the potential of a 1% w/w miconazole nitrate-loaded microemulsion containing oleic acid as an oil phase, PSM and ethanol (1:1) as a surfactant/cosurfactant system, and DI water as an aqueous phase. The results indicate that this microemulsion could effectively deliver miconazole nitrate topically and enhance its bioavailability. Moreover, the study explored the feasibility of using NIR spectroscopy combined with the PLSR model to quantify miconazole nitrate in microemulsion systems. The findings demonstrated that there were no significant statistical differences between the results obtained from the proposed and HPLC methods. The method could potentially be applied to monitor the quality control of miconazole nitrate in various conventional and novel formulations. Furthermore, it offers a rapid, non-destructive analysis without requiring sample preparation.
9,335.2
2023-06-01T00:00:00.000
[ "Medicine", "Chemistry", "Materials Science" ]
Complex Patterns of Gene Fission in the Eukaryotic Folate Biosynthesis Pathway Shared derived genomic characters can be useful for polarizing phylogenetic relationships, for example, gene fusions have been used to identify deep-branching relationships in the eukaryotes. Here, we report the evolutionary analysis of a three-gene fusion of folB, folK, and folP, which encode enzymes that catalyze consecutive steps in de novo folate biosynthesis. The folK-folP fusion was found across the eukaryotes and a sparse collection of prokaryotes. This suggests an ancient derivation with a number of gene losses in the eukaryotes potentially as a consequence of adaptation to heterotrophic lifestyles. In contrast, the folB-folK-folP gene is specific to a mosaic collection of Amorphea taxa (a group encompassing: Amoebozoa, Apusomonadida, Breviatea, and Opisthokonta). Next, we investigated the stability of this character. We identified numerous gene losses and a total of nine gene fission events, either by break up of an open reading frame (four events identified) or loss of a component domain (five events identified). This indicates that this three gene fusion is highly labile. These data are consistent with a growing body of data indicating gene fission events occur at high relative rates. Accounting for these sources of homoplasy, our data suggest that the folB-folK-folP gene fusion was present in the last common ancestor of Amoebozoa and Opisthokonta but absent in the Metazoa including the human genome. Comparative genomic data of these genes provides an important resource for designing therapeutic strategies targeting the de novo folate biosynthesis pathway of a variety of eukaryotic pathogens such as Acanthamoeba castellanii. Introduction The resolution of ancient phylogenetic relationships is proving a difficult task (Philippe and Laurent 1998;Dagan and Martin 2006). Rare genomic characters such as: Insertions and/or deletions within open reading frames (ORFs), intron distribution, and gene fusions are potentially useful tools for polarizing evolutionary relationships and rooting trees (Jensen and Ahmad 1990;Rokas and Holland 2000). In these cases, assuming parsimony, the logic proceeds that taxa A and B possess a rare genomic character, whereas taxa C and D do not, therefore taxa A and B are likely to be monophyletic to the exclusion of taxa C and D. The process of gene fusion and domain recombination is itself an important evolutionary process, leading to: Acquisition of new gene functions (Doolittle 1995), biochemical channeling (Miles et al. 1999), coregulation, colocalization, and potentially promoting the fixation of horizontally transferred genes (Andersson and Roger 2002;Yanai et al. 2002;Rokas 2010, 2011) see also (Lawrence and Roth 1996;Lawrence 1999;Walton 2000). The corollary with investigating gene fusions is that they are also subject to homoplasy in the form of: Horizontal gene transfer (HGT) (Andersson and Roger 2002;Yanai et al. 2002), separation (gene fission), gene duplication with differential loss of subsections of the gene (also a form of gene fission), total gene loss (Nakamura et al. 2007;Leonard and Richards 2012), or convergent evolution (Nara et al. 2000;Stover et al. 2005). Folate is an essential metabolite involved in the biosynthesis of: Adenine and thymidine bases, methionine and histidine amino acids, and formyl-tRNA (Brown 1971). Many plants protists, Fungi, Bacteria, and Archaea manufacture folate de novo (Cossins and Chen 1997;Levin et al. 2004;de Crecy-Lagard et al. 
2007) principally via a double-branched pathway involving the pterin and pABA branches, which feed into the step mediated by the enzyme encoded by folP (the pathway is illustrated in fig. 1 with gene and protein names listed). In the plant Arabidopsis thaliana many steps, including the proteins encoded by folK-folP, are localized to the mitochondria, whereas the enzymes that catalyze pABA synthesis are localized within the plastid organelle (de Crecy-Lagard et al. 2007). Folate salvage systems are also known from a range of taxa, where pterin and pABA-glutamate fragments produced by folate breakdown are fed into curtailed versions of the pathway (Orsomando et al. 2006;de Crecy-Lagard et al. 2007). For example, in some metazoans the core of the pathway is bypassed by folic acid uptake from food (Cossins 2000;Lucock 2000), leaving only the requirement for: Dihydrofolate reductase (DHFR) and thymidylate synthase (TS) (see figs. 1 and 2). Antifolate drugs (e.g., sulfonamides and sulfones) targeting the DHPS step in the pterin branch (encoded by folP) are therefore important antimicrobial agents (Lawrence et al. 2005) because host animals do not encode the equivalent metabolic trait. Additionally, drugs targeting the latter steps of the pathway (e.g., methotrexate, which inhibits DHFR) are used in chemotherapy to target cancer cells (Huennekens 1994;Cossins and Chen 1997). The genes that encode the folate biosynthesis enzymes DHFR and TS are fused in many eukaryotes (Stechmann and Cavalier-Smith 2002), resulting in synthesis of a two-domain multifunctional protein. This character has been suggested to be an anciently derived synapomorphy uniting the "bikont" clade (Stechmann and Cavalier-Smith 2002, 2003), a group of "ancestrally biciliate eukaryotes" including the: Stramenopiles, Alveolata, Rhizaria (known collectively as the SAR supergroup), Excavata, Cryptophyta, Haptophyta, and Archaeplastida. However, several eukaryotic subgroups appear to have lost either the fused or unfused DHFR and TS-encoding genes (Simpson and Roger 2004;Roger and Simpson 2009) (fig. 2), making this an unreliable character for polarizing evolutionary relationships. In addition, the "bikont" grouping has been revised and these taxa, with the exception of the Excavata, are now grouped within Diaphoretickes (Adl et al. 2012). We also note that Cavalier-Smith has abandoned this rooting system (Cavalier-Smith 2010) in favor of a root within the Excavata (Simpson 2003), rendering the "bikonts" paraphyletic. Fig. 2.-Presence, absence, and fusion state of putative folate pathway encoding genes across the eukaryotes. Taxonomic distribution of the pterin branch of the folate biosynthesis pathway. The red boxes and connecting lines indicate a gene fusion, black boxes represent presence of a putative homologue, and gray indicates a gene not identified in the genome sequence data. Amoebozoa and Opisthokonta were formerly referred to as the "unikonts," and likewise SAR, Excavata, and Archaeplastida were formerly referred to as the "bikonts." Note that the putative folB of Trichoplax adhaerens and the putative folB-folK fusion of Nematostella vectensis were removed from phylogenetic analyses due to poor alignment of these sequences; as such, their provenance and evolutionary ancestry remain questionable and are therefore indicated by a question mark at the relevant position. Furthermore, although myosin II was thought to be exclusive to Amoebozoa and Opisthokonta taxa (Richards and Cavalier-Smith 2005), this gene architecture is found in Heterolobosea 
(Excavata) (Fritz-Laylin et al. 2010). This suggests a different or deeper ancestry of myosin II. Alternatively, this distribution pattern may be the result of HGT (Berney C, personal communication), with additional examples of HGT-derived genes shared by Heterolobosea and Amoebozoa (Andersson 2011; Herman et al. 2013) supporting the idea that HGT between these groups has played a role. However, an amended version of the "bikont" and "unikont" bifurcation recently gained some direct support using a rooted multigene phylogenetic analysis of genes derived through the mitochondrial endosymbiosis (Derelle and Lang 2012), but see also He et al. (2014) for an alternative tree topology derived from a similar analytical approach. In 2005, Lawrence et al. published the structure of three components of the Saccharomyces cerevisiae folate biosynthesis pathway: a triple-domain gene fusion encompassing the DHNA, HPPK, and DHPS enzymes encoded by the folB, folK, and folP genes-steps 3, 4, and 5 in the pterin biosynthesis pathway (Lawrence et al. 2005) (fig. 1). Interestingly, gene fusions are common in secondary metabolic networks; for example, the shikimate pathway that forms the prerequisite to the pABA branch of folate biosynthesis is encoded by numerous variant gene fusions (Campbell et al. 2004; Richards et al. 2006), and genes which encode key enzymes of the pABA branch of folate biosynthesis are often found fused (de Crecy-Lagard et al. 2007). Here, we report a phylogenomic analysis of gene fusion characteristics in the pterin folate biosynthesis pathway across the eukaryotes. We use these data to investigate the evolutionary ancestry of the three-domain pterin biosynthesis gene fusion, identifying: a diversity of gene fusion architectures, gene fission events, and a number of gene losses. Using these results, we evaluate this three-gene fusion character as a synapomorphy for the monophyletic grouping of the Opisthokonta and Amoebozoa, finding a high incidence of homoplasy.

Materials and Methods

Cloning and Sequencing of the Folate Triple Domain Gene Fusion from Acanthamoeba castellanii cDNA

Using the partially assembled genome reads of the Acanthamoeba castellanii sequencing project (available at the Baylor College of Medicine-https://www.hgsc.bcm.edu/microbiome/acanthamoeba-castellani-neff, last accessed October 3, 2014), we designed a range of overlapping polymerase chain reaction (PCR) primers (Marshall 2004) to target different domain sections of the three folate biosynthetic genes folB, folK, and folP (see supplementary table S1, Supplementary Material online). The Acanthamoeba castellanii Neff strain was grown axenically in a modified M11 defined medium (Shukla et al. 1990) without folate (supplementary table S2, Supplementary Material online) to encourage the transcription of folate biosynthesis pathway genes. Cells were collected and suspended in 1 ml of TRIzol reagent (Invitrogen) and RNA extracted using the single-step acid guanidinium thiocyanate-phenol-chloroform protocol as described by Chomczynski and Sacchi (1987). The cDNA was then synthesized using the AffinityScript kit with random hexamers (Stratagene). PCR amplification of target folate biosynthesis genes was conducted using Master Mix (Promega; containing 3 mM MgCl2, 400 µM of each dNTP, and 50 U/ml of Taq DNA polymerase) to create a 25 µl PCR reaction mix (12.5 µl of Master Mix, 1 µl of each primer (10 µM), 9.5 µl of Milli-Q pure water (Millipore), and 1 µl of template cDNA).
Acanthamoeba cDNA was diluted to approximately 100 ng/µl as measured by spectrophotometry (NanoDrop ND-1000). Thermocycling followed an initial incubation at 95 °C for 5 min and the cycling conditions detailed in supplementary table S1, Supplementary Material online, followed by a 72 °C, 5 min elongation step. See supplementary table S1, Supplementary Material online, for details of the PCR primers used. Successfully amplified PCR products were gel-purified (Wizard SV Gel and PCR Clean-Up kit, Promega) and cloned using TA-cloning (StrataClone PCR Cloning Kit, Agilent Technologies). Five clones were selected from each PCR reaction and externally sequenced using the M13/pUC vector primers via Sanger sequencing (Cogenic Beckman-Coulter sequencing service, High Wycombe). The flanking vector sequences were removed; the sequences were trimmed to areas of high chromatograph quality and ambiguously defined bases corrected. The overlapping sequences were then assembled into contigs using the Sequencher program (Gene Codes, version 4.10.1; http://www.genecodes.com/), producing a high-confidence consensus sequence for a partial ORF of the folB-folK-folP gene fusion (GenBank Acc: AFW17812.1). These data demonstrate that the folB, folK, and folP genes are transcribed as a single three-domain gene fusion. It should be noted that a draft genome and predicted proteome of Acanthamoeba has subsequently been released (Clarke et al. 2013), which contains the same gene fusion of near-identical sequence (513/514 identities with no gaps-GenBank Acc: XP_004341460). The full-length gene derived from the genome sequence was used for the subsequent folB, folK, and folP phylogenetic analyses.

Survey of Additional Protist Taxa Using RNA-Seq Data

We used the Dictyostelium purpureum (XP_003290941) folB-folK-folP three-gene fusion and the Bacillus cereus single-domain unfused genes (folB-NP_829975.1, folK-ZP_03233543.1, folP-ZP_07056868.1) as search queries to identify putative homologues using the basic local alignment search tool (tBLASTn) against a set of protistan RNA-seq "in-house" data sets. This data set included the unicellular opisthokont Fonticula alba, the amoebozoan Copromyxa protea, and the breviate Pygsuia biforma (PCbi66). From these data, we were able to identify components of the folB, folK, and folP genes from Fonticula and Copromyxa, but not in the breviate P. biforma (PCbi66). Phylogenomic analysis demonstrates that breviate flagellates are related to the opisthokonts and the Apusomonadida (Brown et al. 2013). For these RNA-seq projects, total RNA was isolated using TRI Reagent (Sigma) following the protocol supplied by the manufacturer. Construction of cDNA libraries and Illumina RNA-seq was performed by the Institut de Recherche en Immunologie et Cancérologie of Université de Montréal (Canada) for Copromyxa protea (strain CF08-5), the BROAD Institute (Boston) for F. alba (strain ATCC 38817), and Macrogen (South Korea) for P. biforma (PCbi66). Raw sequence read data were filtered based on quality scores with the fastq_quality_filter program of the FASTX-Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/), using a cutoff filter (a minimum of 70% of bases must have a quality of 20 or greater). Filtered sequences were then assembled into clusters using the Inchworm assembler of the TRINITY r2011-5-13 package (Grabherr et al. 2011). The F. alba assembly is available via the BROAD Institute; however, the other two assemblies are currently unreleased (manuscript in preparation).
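For illustration, a minimal wrapper around the read-filtering step described above might look as follows. The file names are placeholders, and we assume the FASTX-Toolkit's fastq_quality_filter with its -q (quality threshold) and -p (percent of bases) options; check your installed version's flags before use.

```python
# Sketch of the quality-filtering step: keep reads in which at least 70% of
# bases have Phred quality >= 20, mirroring the cutoff described in the text.
import subprocess

def quality_filter(fastq_in: str, fastq_out: str) -> None:
    cmd = [
        "fastq_quality_filter",
        "-q", "20",   # minimum Phred quality per base
        "-p", "70",   # minimum percent of bases meeting that quality
        "-i", fastq_in,
        "-o", fastq_out,
        # add "-Q", "33" for Phred+33-encoded FASTQ files
    ]
    subprocess.run(cmd, check=True)

quality_filter("raw_reads.fastq", "filtered_reads.fastq")  # placeholder paths
```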
All unmasked protein alignments are included as supplementary material, Supplementary Material online, and on GitHub (DOI: 10.5281/zenodo.11716) as MASE files, which include the alignment mask information (generated by SeaView [Galtier et al. 1996]).

Comparative Genomics and Phylogenetic Analysis

Using BLASTp and tBLASTn (Altschul et al. 1990), we initially searched NCBI GenBank, the Joint Genome Institute (http://genome.jgi-psf.org/), and the Broad Institute (http://www.broadinstitute.org/) genome databases (as of November 2013) using the three separate folate biosynthesis domains from B. cereus (folB-NP_829975.1, folK-ZP_03233543.1, and folP-ZP_07056868.1) and the D. purpureum (XP_003290941) folB-folK-folP three-gene fusion divided into its three domain regions. Care was taken to survey the major eukaryotic, archaeal, and bacterial groups; to this end, additional BLAST searches were conducted using multiple start seeds from diverse taxa to check for alternative sequence hits. The amino acid sequences gathered for each domain were run through the REFGEN tool (Leonard et al. 2009). The multiple sequence comparison by log-expectation program (MUSCLE, v3.8.31) (Edgar 2004) was used to produce a multiple sequence alignment for each domain (folB, folK, and folP). Alignments were then manually corrected and masked in SeaView (version 4.2.4) (Galtier et al. 1996). Sequences that caused an unacceptable loss of putatively informative sites (due to the sequence not aligning or not masking well) or that formed long branches in preliminary analysis were removed. Duplicate entries from closely related taxa, for example, highly similar sequences from different representatives of the same bacterial or fungal genus (e.g., Escherichia, Bacillus, and Aspergillus) or multiple highly similar genes from the same genome (sister branches on preliminary phylogenetic trees), were removed from the alignments. Phylogenetic analysis was conducted using both Bayesian and maximum-likelihood methodologies, with the model of amino acid substitution selected using ProtTest3 (version 3.2.1) (Darriba et al. 2011) (see supplementary figs. S1-S7, Supplementary Material online). Sequences shown to form long branches in the phylogenetic analysis were removed from the alignment to reduce the risk of long-branch attraction artifacts (Felsenstein 1978), for example, the microsporidian Encephalitozoon hellem ATCC 50504 folB-folK-folP gene fusion (XP_003887200) and the Plasmodium berghei folK-folP gene fusion (XP_15149005) from the folK alignment, and the analyses were rerun. The phylogenies were calculated using parallelized-PTHREADS RAxML (version 7.7) (Stamatakis 2006) with 1,000 (nonrapid) bootstrap replicates and using the substitution matrix and gamma distribution identified using ProtTest3 (version 3.2.1) (Yang 1996; Darriba et al. 2011). In a subset of these analyses, invariant sites were also included as a model parameter (in accordance with ProtTest3 recommendations); see the figure legends for supplementary figures S1-S7, Supplementary Material online, for more details of the models used. Bayesian phylogenies were also reconstructed using MrBayes (version 3.2). Each analysis was conducted as two independent runs of four Metropolis-coupled Markov chain Monte Carlo (MCMCMC) chains and continued until convergence of these runs, as determined using Tracer (version 1.5) (Rambaut and Drummond 2007). Burn-in was then also determined using Tracer.
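A condensed sketch of the alignment and maximum-likelihood steps is given below, assuming command-line MUSCLE 3.8 and RAxML 7.x binaries on the PATH. Exact flags vary between versions, so treat this as an outline of the workflow rather than the authors' exact commands; file names are placeholders.

```python
# Outline of the per-domain phylogenetics workflow: align, then ML + bootstraps.
import subprocess

def align_and_ml(domain: str, threads: int = 8) -> None:
    # 1) Multiple sequence alignment with MUSCLE (Edgar 2004).
    subprocess.run(
        ["muscle", "-in", f"{domain}.fasta", "-out", f"{domain}.aln.fasta"],
        check=True,
    )
    # (Manual correction/masking in SeaView happens here in the real workflow.)
    # 2) Maximum-likelihood tree with nonrapid bootstraps in RAxML-PTHREADS;
    #    the model string (e.g. PROTGAMMALG) should follow ProtTest3's choice.
    subprocess.run(
        ["raxmlHPC-PTHREADS", "-T", str(threads),
         "-s", f"{domain}.aln.phy", "-n", f"{domain}_boot",
         "-m", "PROTGAMMALG", "-p", "12345",
         "-b", "12345", "-#", "1000"],   # 1,000 nonrapid bootstrap replicates
        check=True,
    )

for dom in ("folB", "folK", "folP"):
    align_and_ml(dom)
```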
The program TREENAMER (Leonard et al. 2009) was then run on the resulting tree files in order to restore the correct taxon names from the REFGEN tags used during phylogenetic processing. These analyses were also repeated using the same methods but focusing on a reduced taxon data set and a concatenation of the folK and folP alignments to test for improved topology support for key nodes (supplementary figs. S4-S7, Supplementary Material online).

Diversity of Gene Fusions in the Folate Biosynthesis Pathways

At the core of the pterin branch of the folate biosynthesis pathway are three genes (folB, folK, and folP) that encode sequentially acting enzymes: DHNA, HPPK, and DHPS (fig. 1). In some fungi these are found as a single gene encoding a three-domain protein (e.g., S. cerevisiae: GenBank accession NP_014143.2 [Lawrence et al. 2005]), suggesting that gene fusion has played a role in the pterin branch of folate biosynthesis. To investigate the evolutionary ancestry of this gene fusion, we conducted comparative genomics of these three domains. These analyses demonstrated a discontinuous distribution across the eukaryotes, suggesting a complex pattern of gene loss (fig. 2). We identified four different domain architectures, as defined by PFAM searches (Bateman et al. 2004), among the eukaryotic folate biosynthesis protein sequences sampled: 1) folB-folB-folK-folP, found in a range of fungi and the opisthokont sorocarpic protist F. alba; 2) folB-folK-folP, found in Amoebozoa, the basidiomycete fungi Postia placenta, Coprinopsis cinerea, and Melampsora laricis-populina, and the microsporidian E. hellem (excluded from phylogenetic analysis because it formed a long branch in the phylogenies, like many other microsporidian sequences [Hirt et al. 1999]); 3) folB-folK, found in two metazoans; and 4) folK-folP, found in a subset of ascomycete fungi, Puccinia graminis, Capsaspora owczarzaki, Sphaeroforma arctica, and a diverse range of Diaphoretickes (fig. 2). In many Diaphoretickes groups, including SAR, Cryptophyta, and the Excavata, we could not identify a folB gene using standard BLAST similarity searches (fig. 2). To confirm this result, we used a five-iteration PSI-BLAST search using both the B. cereus folB gene and the folB domain of the D. purpureum folB-folK-folP gene fusion as a search seed against the NCBI GenBank nonredundant (NR) protein database (performed both as a general search and as a search restricted to eukaryotic taxa). These analyses failed to identify any additional putative folB-encoding genes in the eukaryotic genomes available in the GenBank NR database. Pyruvoyltetrahydropterin synthase (PTPS) has been suggested to represent a functional replacement of the DHNA enzyme (folB) (Pribat et al. 2009). To investigate the possibility that this gene has functionally replaced folB in the Diaphoretickes and Excavata, or other eukaryotic groups, we searched the eukaryotes for the presence of genes with similar sequence characteristics across the genomes sampled (fig. 2). These analyses identified no clear pattern of PTPS/folB presence/absence, providing no support for the hypothesis that PTPS is acting as a like-for-like functional replacement of folB across the eukaryotes.

Phylogenetic Analyses of the folB, folK, and folP Domains

To further investigate the evolutionary ancestry of the gene fusion character, we calculated individual phylogenies for the three pterin biosynthesis domains with both comprehensive and reduced taxon alignment sampling.
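The architecture binning described above can be made concrete with a small helper that maps an ordered list of detected domains to one of the four observed architectures. This is a toy reimplementation for clarity; real assignments rest on PFAM domain calls and manual curation.

```python
# Classify a protein's ordered domain content into the fusion architectures
# reported in the text (fig. 2). Input: domains in N- to C-terminal order.
ARCHITECTURES = {
    ("folB", "folB", "folK", "folP"): "folB-folB-folK-folP (some fungi, F. alba)",
    ("folB", "folK", "folP"): "folB-folK-folP (Amoebozoa, some basidiomycetes)",
    ("folB", "folK"): "folB-folK (two metazoans)",
    ("folK", "folP"): "folK-folP (some ascomycetes, Diaphoretickes, ...)",
}

def classify_architecture(domains) -> str:
    key = tuple(domains)
    return ARCHITECTURES.get(key, f"unlisted architecture: {'-'.join(domains) or 'none'}")

print(classify_architecture(["folB", "folK", "folP"]))  # hypothetical PFAM calls
print(classify_architecture(["folK", "folP"]))
print(classify_architecture(["folP"]))                   # unfused single domain
```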
The results of these phylogenies are shown in supplementary figures S1-S6, Supplementary Material online, with all six trees demonstrating low levels of topology support, while many features of the eukaryotic sections of the tree topologies are inconsistent with established multigene phylogenetic trees (e.g., Rodriguez-Ezpeleta et al. 2005; Hampl et al. 2009; Derelle and Lang 2012; Torruella et al. 2012; Brown et al. 2013). This is typical of single-gene phylogenetic analyses that use limited numbers of amino acid alignment characters (i.e., 78, 102, 175, 110, 102, and 236 amino acid characters for supplementary figs. S1-S6, Supplementary Material online, respectively) and that encompass ancient and divergent evolutionary groups. These alignment character numbers do not compare favourably to multigene analyses, where it has been shown that in excess of 5,000 amino acid alignment characters are required to robustly resolve the Archaeplastida (Rodriguez-Ezpeleta et al. 2005). Interestingly, though, Hampl et al. (2009) demonstrated that a low number of genes is sufficient to recover monophyly of the Opisthokonta branching sister to the Amoebozoa. Our analyses identified a folB-folK gene fusion in the metazoan Branchiostoma floridae genome assembly branching within a phylogenetic cluster of prokaryotes with moderate support in the comprehensive folK phylogeny (1/94% support for this grouping; Supplementary Material online). Collectively, these trees suggest that the Br. floridae folB-folK branching relationship is consistent with HGT into the Br. floridae genome or, alternatively, contamination of this genome project with a prokaryotic sequence. To explore these possibilities further, we examined the genome sequence contig containing the Br. floridae folB-folK gene (GenBank acc: AC150408.2), demonstrating that the prokaryote-like Br. floridae folB-folK gene is located in a 180,427 bp contig adjacent to genes that show standard patterns of animal sequence similarity. Analysis of the B. belcheri transcriptome demonstrated that an orthologue of the Br. floridae folB-folK gene is transcribed. Taken together, these data suggest that the Br. floridae folB-folK gene is located on the native source genome and is not contamination. Therefore, it is likely to be a prokaryotic-derived HGT into this animal genome. However, it is interesting that an animal lineage could maintain only the first part of a pathway despite lacking the folP gene, whereas many other animal lineages have lost the entire pathway. Further to these data, we detected a putative folB gene in Trichoplax adhaerens and a putative folB-folK fusion gene in Nematostella vectensis. However, these genes were removed from further analyses due to difficulty in alignment of these sequences; as such, their provenance and evolutionary ancestry remain questionable, as noted on figures 2 and 3. These data suggest a partial folate biosynthesis pathway, or a pathway involving an alternative gene encoding the folP step, present in Branchiostoma. Furthermore, we see evidence of incomplete pathways in other organisms; for example, the red alga Cyanidioschyzon lacks an identifiable standard folP gene (fig. 2). Monophyly of the three-domain gene fusion would signify that the folB-folK-folP gene fusion was the product of a single evolutionary event. However, this relationship was not resolved with strong support in these analyses, with only the folB
phylogenies demonstrating a monophyletic grouping of the three-domain folB-folK-folP gene fusions (both as folB-folK-folP and folB-folB-folK-folP), with weak topology support (i.e., 0.539/19% and 0.991/37% support [supplementary figs. S1 and S4, Supplementary Material online, respectively]).

FIG. 3.-Phylogeny of the Apusomonadida, Breviata, Opisthokonta, and Amoebozoa demonstrating variation in the folB-folK-folP fusion gene. Tree topology was calculated using a concatenated alignment of conserved genes identified in Torruella et al. (2012) and represents the best-known likelihood tree from 100 ML searches in RAxML (PROTCAT+LG) with 1,000 nonrapid bootstraps. ML-BS is an abbreviation of maximum-likelihood bootstrap values, the folB-folK-folP fusion gene domain architecture of each taxon included is listed down the right column, and fusion state is denoted by the presence/absence of connecting lines. Inferred gene/domain losses are shown as shadow domains. See key for a guide to tree topology support values and character state changes. Domain duplication is indicated as (D) in a box of the appropriate domain colour, fission by domain loss events are denoted as (FL5-9), and specific fission events as (F1-4). Total losses of complete ORFs are not illustrated. Note that the putative folB of Trichoplax adhaerens and the putative folB-folK fusion of Nematostella vectensis were removed from phylogenetic analyses due to poor alignment of these sequences; as such, their provenance and evolutionary ancestry remain questionable and are therefore indicated by a question mark at the relevant position.

Importantly, we note that the only members of the Diaphoretickes and Excavata (formerly the "bikonts") possessing a putative folB gene are the Archaeplastida, and that the folB gene of this eukaryotic group branches separately from the other eukaryotes, within a clade of bacterial genes, with moderate-to-strong posterior probability/bootstrap support (supplementary fig. S1: 0.992/82%, Supplementary Material online; supplementary fig. S4: 1.000/94%, Supplementary Material online), suggesting a separate evolutionary ancestry of this gene from that of the Opisthokonta and Amoebozoa. Given the taxonomic distribution of the folB gene across the Archaeplastida (supplementary figs. S1 and S4, Supplementary Material online), this xenologue is most likely to have been derived either by an ancient horizontal gene transfer from a bacterial source into the Archaeplastida lineage or via the cyanobacterial endosymbiosis that gave rise to the plastid organelle, a process that has been suggested to lead to the acquisition of a number of genes of mixed bacterial ancestry (Brinkman et al. 2002; Martin et al. 2002). Using the A. thaliana folB gene, we searched for evidence of subcellular localization using the "cell eFP browser" (http://bar.utoronto.ca/cell_efp/cgi-bin/cell_efp.cgi?ncbi_gi=15229838, last accessed October 3, 2014), which suggested this gene product was localized to the cytosol or the mitochondria (supplementary table S3, Supplementary Material online). However, because the Archaeplastida folB is not an orthologue of the Opisthokonta/Amoebozoa version and no additional Diaphoretickes and Excavata folB orthologues are currently available, our folB phylogenetic analysis does not represent a strict test of the monophyly of the folB-folK-folP gene fusion within the eukaryotes. Finally, in an attempt to improve tree resolution and to identify a resolved phylogeny, we conducted a concatenated phylogenetic analysis of the folK and folP genes (supplementary fig. S7, Supplementary Material online).
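The concatenation step referred to here is mechanical but worth making explicit: per-gene alignments are joined taxon by taxon, padding with gaps where a taxon lacks one of the genes. Below is a minimal sketch under that assumption; it is illustrative, not the authors' script.

```python
# Concatenate per-gene alignments (dicts of taxon -> aligned sequence),
# padding missing genes with gap characters so all rows stay equal length.
def concatenate(alignments):
    taxa = sorted(set().union(*[a.keys() for a in alignments]))
    out = {t: "" for t in taxa}
    for aln in alignments:
        width = len(next(iter(aln.values())))  # rows within one alignment share a length
        for t in taxa:
            out[t] += aln.get(t, "-" * width)
    return out

folK = {"Dictyostelium": "MKL-A", "Capsaspora": "MKLQA"}   # toy aligned blocks
folP = {"Dictyostelium": "GG-W", "Capsaspora": "GGAW"}
print(concatenate([folK, folP]))
# {'Capsaspora': 'MKLQAGGAW', 'Dictyostelium': 'MKL-AGG-W'}
```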
This concatenated analysis again recovered a tree with low topology support values and taxonomic relationships inconsistent with established eukaryotic phylogenetic relationships (Rodriguez-Ezpeleta et al. 2005; Hampl et al. 2009; Derelle and Lang 2012; Torruella et al. 2012) and therefore provided no additional data with which to test the monophyly of the folB-folK-folP three-domain gene fusion.

folB Tandem Duplication in the Early Opisthokonta

Focusing on the "Opisthokonta and Amoebozoa folB-folK-folP" cluster, a clade specifically encompassing the folB-folB-folK-folP and folB-folB gene architectures found in Fungi, F. alba, Sp. arctica, and C. owczarzaki (fig. 2) forms with weak support in the reduced analysis (0.852/37%; supplementary fig. S4, Supplementary Material online). The taxon distribution of this character suggests that the folB tandem exon-duplication represents a novel genetic character that arose in the last common ancestor of the opisthokonts, followed by the loss of these genes in Metazoa and some other opisthokont taxa (figs. 2 and 3). We can identify this pattern because multigene phylogenies place the Sp. arctica and C. owczarzaki branch sister to the choanoflagellates and metazoans (Torruella et al. 2012), so parsimoniously the folB-folB gene duplication predated the diversification of the major Opisthokonta clades (see fig. 3). The distribution of the Opisthokonta folB duplication therefore provides a character implying that the folB-folK fissions within the opisthokonts are nested events (see fig. 3, fission events F1-4) and that the ancestral Opisthokonta possessed a folB-folK gene fusion.

Evidence of Gene Fission in the folB-folK-folP Gene Fusion

Our gene fusion character distribution analysis identifies nine fission events, either by loss of one or two domains or by separation of the folB-folB-folK-folP fusion in the opisthokonts (fig. 3). Specifically, these events involve fission to form folB-folB and folK-folP on the Sp. arctica, C. owczarzaki, and Pu. graminis branches (fig. 3, fission events F1, F2, and F4) and within the Pezizomycotina before the divergence of Aspergillus carbonarius, Coccidioides immitis, Cochliobolus heterostrophus, Cladonia grayi, Chaetomium globosum, and Neurospora crassa (fig. 3, fission event F3). Furthermore, these data identify loss of one or both folB domains on five occasions in the branches leading to the basidiomycetes Co. cinerea, Laccaria bicolor, Wallemia sebi, Po. placenta, and M. laricis-populina (fig. 3, fission by loss events FL5-9) and the branch leading to the ascomycetes As. carbonarius and Co. immitis. In all nine cases, we reconfirmed the gene architectures by examining gene alignments and the synteny of each candidate fission gene in the relevant genome assemblies.

Distribution of Putative Folate Biosynthesis Gene Homologues and Adaptation to Folate Heterotrophy

Using a comparative genomic and phylogenetic approach, we have identified the taxonomic distribution of gene fusions encoding three protein domains in the pterin branch of the folate biosynthesis pathway. In the absence of strong phylogenetic signal demonstrating eukaryote-to-eukaryote HGT, our analyses identified multiple gene loss events in different eukaryotic groups (e.g., Metazoa and Excavata), suggesting that the capacity to manufacture folate de novo has been lost on multiple occasions within the eukaryotes. This is consistent with adaptation of these lineages to acquiring folate or folate intermediates from food sources and/or host organisms.
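The fission counts above come from mapping domain-architecture characters onto a reference species tree and minimizing state changes. As a hedged illustration of that logic, the sketch below implements small-parsimony (Fitch) counting for a single architecture character on a toy tree; the tree and tip states are invented for the example and are not the fig. 3 data.

```python
# Fitch small-parsimony: minimum number of character-state changes needed to
# explain tip states (here, fusion architectures) on a fixed rooted tree.
def fitch(tree, states):
    """tree: nested tuples of tip names; states: tip -> architecture string."""
    changes = 0
    def post(node):
        nonlocal changes
        if isinstance(node, str):               # tip
            return {states[node]}
        left, right = (post(child) for child in node)
        common = left & right
        if common:
            return common
        changes += 1                            # a state change is forced here
        return left | right
    post(tree)
    return changes

# Toy tree with invented states, echoing the fission-by-separation pattern:
tree = (("Sp_arctica", "C_owczarzaki"), ("Fungus_A", "Fungus_B"))
states = {"Sp_arctica": "folB-folB + folK-folP", "C_owczarzaki": "folB-folB + folK-folP",
          "Fungus_A": "folB-folB-folK-folP", "Fungus_B": "folB-folB-folK-folP"}
print(fitch(tree, states))  # -> 1: a single fission event explains the tips
```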
Specifically, the comparative genomic data demonstrate that a complete pterin branch is absent from the Metazoa sampled, consistent with the hypothesis that animals acquire folate using "intact folate salvage" from digested food (Lucock 2000), putatively maintaining the last two or three steps of the biosynthesis pathway to facilitate salvage of folic acid (figs. 1 and 2). A similar pattern of gene presence/absence was identified for the Trypanosoma (Excavata), Naegleria (Excavata), and Thecamonas (Apusomonadida) genomes, suggesting that these protists acquire folate, or precursors of folate (e.g., folic acid), by salvage from external sources. We can therefore infer that these heterotrophic characteristics have resulted in concordant loss of de novo folate biosynthesis. Likewise, the absence, or near absence, of the entire folate biosynthesis pathway in Entamoeba, Trichomonas, and Giardia suggests a dependence on hosts or phagocytosed food for the provision of intact folate; as such, inhibiting folate synthesis is not a viable therapeutic target for these parasitic protists, but inhibition of uptake transporters of intact folate may offer an alternative therapeutic strategy. In many Diaphoretickes genomes (e.g., taxa from the SAR group and Cryptophyta) both the folK and folP genes were present, but a putative homologue of the folB gene was not identified. These results suggest that this part of the pathway is absent from these taxa or performed by a highly divergent or nonhomologous gene family. A paralogue of folB, folX, has been identified in Escherichia coli, with 30% identical amino acid residues. This protein was classified as an epimerase and performs the equivalent aldolase-type reaction at less than 1% of the velocity of the DHNA encoded by the Ec. coli folB gene (Haussmann et al. 1998), suggesting this paralogue is not functionally equivalent. Comparative genomic analysis of the distribution of the folB gene in prokaryotes identified many phylogenetically disparate groups without an identifiable putative homologue (de Crecy-Lagard et al. 2007), leading these authors to make two suggestions: 1) the enzyme that catalyses this step is encoded by an uncharacterized putative transaldolase gene often found to cluster in the same operons as folK, and/or 2) because other taxa lacked both the folB gene and a putative alternative transaldolase-encoding gene, a currently unidentified gene family must encode this enzyme (de Crecy-Lagard et al. 2007). Later work then showed some evidence that folB in many bacteria has been replaced with the functionally equivalent PTPS (Pribat et al. 2009). Analysis of eukaryotic genomes demonstrates that many eukaryotic protists lack an identifiable folB or PTPS-encoding gene, suggesting that a currently unidentified, functionally equivalent but phylogenetically dissimilar gene may encode an enzyme that catalyses this step.

Gene Fusion as an Adaptation for Folate Biosynthesis

Our data identified a number of variant gene fusions among the pterin-branch folate biosynthesis genes. These included a gene consisting of three domains, and therefore the likely product of two distinct gene fusion events. Our comparative genomic survey suggests that this characteristic is only found in opisthokont taxa, including the Fungi, F. alba, and Microsporidia, and in a range of Amoebozoa (e.g., Dictyostelium, Acanthamoeba, and Copromyxa). Moreover, two-domain variations of these gene fusion forms were identified in a range of eukaryotes (fig. 2).
Gene fusions have been identified elsewhere in the folate biosynthesis pathway (Stechmann and Cavalier-Smith 2002, 2003; de Crecy-Lagard et al. 2007), suggesting that gene fusion has been an important process in the evolution of eukaryotic folate biosynthesis, possibly as a consequence of selection for: cotranscription, colocalization, promotion of metabolic channeling, or a general improvement of enzyme kinetics (Welch and Gaertner 1975; Meek et al. 1985; Ivanetich and Santi 1990; Miles et al. 1999; Richards et al. 2006). This pattern is consistent with other secondary metabolic pathways that are also localized in the cytosol and show complex patterns of gene fusion (e.g., Nara et al. 2000; Stover et al. 2005; Richards et al. 2006). A genome database search identified fragments of the folB-folK-folP genes in the Ac. castellanii sequencing project (Baylor College of Medicine-https://www.hgsc.bcm.edu/microbiome/acanthamoeba-castellani-neff, last accessed October 3, 2014) and within the recently completed genome sequence (Clarke et al. 2013). To confirm that this was a bona fide folB-folK-folP triple-domain gene fusion, we performed nested PCR on cDNA derived from an axenic culture of the Ac. castellanii Neff strain grown in folate-limiting conditions (GenBank Acc: AFW17812.1). This work confirmed that Ac. castellanii transcribes a single fusion gene encoding the folB-folK-folP domain architecture and provides evidence of active folate biosynthesis via a complete pterin branch in Ac. castellanii. Acanthamoeba can cause keratitis, an infection of the cornea linked to the use of contaminated contact lenses (Radford et al. 1995). These data suggest the potential for antimicrobial agents that inhibit the pterin branch of folate biosynthesis (e.g., sulfonamides and sulfones) as therapeutic treatments for Acanthamoeba keratitis or as additives to eye-care and contact lens solutions to prevent infections. Exploiting metabolic differences between Acanthamoeba and the human host is a potentially important avenue to identify new antimicrobials and limit toxic effects (Roberts and Henriquez 2010), particularly in the eye. For example, sulphadiazine has been used to target different metabolic pathways for the successful inhibition of Acanthamoeba growth in vitro (Mehlotra and Shukla 1993), and encouraging reports of its use in vivo have been made in experimentally induced Acanthamoeba meningoencephalitis in mice (Rowan-Kelly et al. 1982) and in granulomatous amoebic encephalitis in AIDS patients (Seijo Martinez et al. 2000).

Phylogenetic Evidence for Frequency of Gene Fusion and Fission Events

We conducted a series of phylogenetic analyses to investigate whether the gene fusion characters were monophyletic and to identify any cases of gene fission. Our results demonstrate the presence of a complex pattern of gene loss (discussed above). Comparisons of the distribution of the different folate fusion genes to the established Opisthokonta phylogeny (James et al. 2006; Torruella et al. 2012), combined with the individual domain phylogenetic analyses, suggest a minimum of nine gene fission events (five by fission through domain loss [deletion] and four by fission through separation and retention of separate genes encoding the constituent domains) (fig. 3). These suggest that gene fissions occur at a high rate in this pathway and that folB-folK-folP gene fusions are not stable characters.
This is consistent with a growing body of data demonstrating that the process of gene/domain separation is an important factor in gene evolution (Kummerfeld and Teichmann 2005; Nakamura et al. 2007; Leonard and Richards 2012). Next, we used phylogenetic analysis to polarize the ancestry of the folB-folK-folP gene fusion. Our phylogenetic analysis generally proved inconclusive, because we failed to recover tree resolution and, specifically, because there is no Diaphoretickes or Excavata orthologue of the Amoebozoa and Opisthokonta folB gene. Taken together, the phylogenies therefore do not constitute an appropriate test of the monophyly of the three-domain gene fusion clade (i.e., Amoebozoa and Opisthokonta). Furthermore, as the individual folate pathway gene phylogenies were generally unresolved, it is possible that undetected cases of hidden paralogy, multiple folB tandem duplications, and HGT may have occurred in the evolution of this pathway. HGT is a particular concern, as some literature suggests that gene clustering increases the possibility that genes become fixed by selection once they have undergone transfer, because such transfers lead to the acquisition of functional modules, either as an operon and/or as gene fusions (e.g., Andersson and Roger 2002; Rokas 2010, 2011). Such factors would therefore act to further complicate the evolution of this pathway, but at present they are hard to quantify using single-gene phylogenies. As we saw no additional evidence for HGT other than that discussed (i.e., ancestral acquisition of the folB gene in the Archaeplastida and acquisition of a folB-folK gene fusion in Branchiostoma from a likely prokaryotic source), we use the more parsimonious interpretation of vertical inheritance to explain the gene distribution observed. The phylogenies provided no strong support for the paraphyly and convergent evolution of the three-domain gene fusion in the Amoebozoa and Opisthokonta. Therefore, in the absence of strong signal to support an alternative hypothesis, and based on the current taxonomic distribution of this character, we currently favour the null hypothesis that the folB-folK-folP three-domain gene fusion is monophyletic and arose once, before the diversification of the opisthokonts and amoebozoans. We do acknowledge that alternative hypotheses involving fissions and loss in the Diaphoretickes and Excavata taxa, or convergent gene fusions in the Amoebozoa and Opisthokonta taxa, are only slightly less parsimonious given current data. This is an important concern, as our data demonstrated that this gene fusion is not a stable character, being subject to frequent gene fission and to partial and total gene loss. Consequently, perhaps the overriding message of this work is that rare derived genomic characters, such as gene fusions, can be noisy, and therefore these data should not be applied to resolving evolutionary relationships without testing their ancestry and susceptibility to homoplasy.
8,397.2
2014-09-23T00:00:00.000
[ "Biology" ]
Quasi one dimensional transport in individual electrospun composite nanofibers

We present results of transport measurements of individual suspended electrospun Poly(methyl methacrylate)-multiwalled carbon nanotube nanofibers. Each nanofiber is comprised of highly aligned consecutive multiwalled carbon nanotubes. We have confirmed that over the temperature range from room temperature down to ∼60 K, the conductance behaves as a power law of temperature with an exponent of α ∼ 2.9−10.2. The current also behaves as a power law of voltage with an exponent of β ∼ 2.3−8.6. The power-law behavior is a fingerprint of one dimensional transport. The possible models for this confined system are discussed. Using the model of Luttinger liquid states in series, we calculated the exponent for tunneling into the bulk of a single multiwalled carbon nanotube, α_bulk ∼ 0.06, which agrees with theoretical predictions.

As nanosized devices are becoming more and more significant, electron-electron interactions due to size confinement become more important for transport and worthy of consideration and discussion. Electron-electron interactions are a main player in one dimensional transport, bringing about behavior different from the classic Fermi liquid. Different kinds of behavior are associated with electron-electron interactions: Luttinger liquid behavior resulting from repulsive short range electron-electron interactions, [1-5] Wigner crystal behavior resulting from long range Coulomb interaction, [6] or environmental Coulomb blockade. [7] The characteristic fingerprint of quasi-one dimensional behavior is a power-law dependence of the conductance on temperature and of the current on applied potential. The power law was observed in various systems ranging from carbon nanotubes, [1-3, 6-8] conducting conjugated polymer nanowires, [5, 9-11] and semiconductor nanowires [4, 12, 13] to fractional quantum Hall edge states. [14] One way of obtaining continuous nanofibers is by electrospinning. [15, 16] During the process, the nanofibers can be filled with nanosized fillers, resulting in highly aligned one dimensional nanofibers. In that respect, it is a unique system which creates a long nanochain of confined conduction. Doping insulating polymers such as Poly(methyl methacrylate) (PMMA) with multiwalled carbon nanotubes (MWCNTs) creates a confined one dimensional system.
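Since the transport fingerprint here is a power law in both G(T) and I(V), a standard way to extract the exponents α and β is a linear fit in log-log space. The sketch below shows this for synthetic G(T) data; the array values are invented for illustration and are not the measured data of Table I.

```python
# Extract a power-law exponent alpha from G(T) ~ T^alpha via a log-log fit.
import numpy as np

T = np.array([60, 80, 100, 150, 200, 250, 300], dtype=float)   # K (synthetic)
G = 1e-9 * (T / 300.0) ** 2.9                                   # S (synthetic, alpha = 2.9)
G *= np.exp(np.random.default_rng(0).normal(0, 0.05, T.size))   # mimic measurement noise

alpha, logG0 = np.polyfit(np.log(T), np.log(G), 1)  # slope = exponent
print(f"fitted alpha = {alpha:.2f}")                # ~2.9, cf. device 3 in Table I
# The same recipe applied to I(V) at fixed low T (eV >> k_B T) yields beta.
```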
Such a composite system can present either short range or long range electron-electron interactions, depending on the temperature and morphology, which can further elucidate the nature of one dimensional behavior. In this paper, we report our transport studies of individual suspended electrospun MWCNT-PMMA nanofibers. We perform an analysis of early experimental data for a MWCNT-PMMA nanofiber to determine the applicability of different models for one dimensional conductors, and in particular the model of Luttinger liquid states in series. We fabricated three samples: two samples with 20 wt% of MWCNTs and one sample with 25 wt% of MWCNTs. For the first two samples, we dispersed 0.6 g of MWCNTs purchased from Bayer Material Science (Baytubes C150) [15, 17] in 60 g of dimethyl acetamide (Carl Roth GmbH) with 1 g of hydroxypropyl cellulose (from Aldrich, Mw 100,000) by ultrasonication for 30 minutes. 3 g of PMMA (from Aldrich, Mw 996,000) were dissolved in 30 g of acetone. Dimethyl acetamide and acetone were used without any additional purification. For dispersion, a tip sonicator (Sonopuls HD3100, Bandelin GmbH) equipped with a cup horn operating at 10 kHz and 100 W output power was used. [16] The components were mixed together and dispersed by ultrasonication for an additional 30 minutes. Nanofibers were electrospun using a Yflow 2.2.D-300 lab-scale electrospinning unit with a single nozzle (Yflow Sistemas y Desarrollos S.L.), with an injector-collector distance of 15 cm, an applied voltage of +9.5/−8.0 kV, and a flow rate of 2 ml/h. For the third sample we used 25 wt% of MWCNTs with a flow rate of 1 ml/h, an applied voltage of +8/−12 kV, and an injector-collector distance of 12 cm. For the electrical conductivity study, nanofibers were spun directly onto a highly doped Si/SiOx substrate with 600 nm oxide and prefabricated trenches. For the fabrication of the trenches, 100 mm (100) wafers with one polished side were used. A cleaning process was performed using H2SO4 + H2O2. After this, a photolithography process was performed. The resist thickness was 1.5 μm. Trenches 50 μm in width were etched using deep silicon etching, followed by resist stripping and another cycle of H2SO4 + H2O2 cleaning. For the insulation of the substrate, a 600 nm thick SiO2 layer was deposited on the wafer using wet oxidation at 1000 °C. Candidate nanofibers were located using an optical microscope and were afterward contacted by direct aluminum bonding onto the nanofibers. Atomic force microscopy imaging was performed with a Park XE-150 in tapping mode to determine the diameter of the nanofibers. The electrical measurements were performed with a Keithley 4200-SCS and an Oxford closed cycle cryostat, in a two-probe configuration in a helium ambient. Fig. 1(a) presents an image of our device taken with an optical microscope. The nanofibers can be seen as faint black lines running between the electrodes. The total length of each nanofiber, as estimated from the optical microscope, is ∼100 μm on average. From atomic force microscopy images we estimated the diameter of a single nanofiber as ∼120 nm. The average length of a MWCNT is ∼1.5 μm.
Fig. 1(b) shows a transmission electron microscopy image of the nanofiber. It can be seen that the MWCNTs align inside the nanofiber in series and create a nanochain of conduction with minimal separation. We have three working devices in total: two working devices out of 25 devices from one sample fabricated with 20 wt% MWCNTs, and one working device out of 51 devices from two samples fabricated with 25 wt% MWCNTs. The results for the devices discussed are presented in Fig. 2 and summarized in Table I. Fig. 2 shows the conductance as a function of temperature. As can be seen, the conductance follows a power-law behavior G(T) ∝ T^α over a large temperature range, as long as the condition eV ≪ k_B T is satisfied. For highly resistive devices such as device 2, the drop of the conductance is fast, and already at relatively high temperatures the current can no longer be detected. Repeated measurements reveal a Kondo-like behavior at low temperatures, which is described elsewhere. [25] This may indicate a highly resistive joint [25] creating a small detachment in the nanochain of conduction; in such a device (device 2) the conductance drops quickly below the threshold sensitivity of the system. Table I presents the values corresponding to the samples. However, device 2 at high temperatures still obeys power-law behavior.

FIG. 3. I-V characteristics at 4 K. As can be seen, the power-law behavior of the current under large potential, eV ≫ k_B T, is retained. The red line is a fit of the data at voltages above 0.4 V.

Fig. 3 shows that the I-V characteristics also retain power-law behavior I(V) ∝ V^β at low temperature, where eV ≫ k_B T is satisfied. [12, 13] In carbon nanotube/polymer composite nanofibers, beyond a percolation concentration threshold [18] the electrical conductivity rises sharply to saturation. The dependence of the electrical conductivity on nanotube alignment obeys similar rules: beyond a critical alignment threshold [18] the electrical conductivity shows a sharp rise with respect to the degree of carbon nanotube alignment. For highly aligned carbon nanotubes in composite nanofibers, the onset of a percolation pathway begins at loadings above 3 wt%. [18] As our loading is 20-25 wt%, we stand well above this threshold. Therefore, the electrospinning process created a nanochain of MWCNTs separated by minimal tunneling barriers (Fig. 1). Hence, we can expect quasi-one dimensional behavior. However, the length and the morphology of each nanofiber under consideration do not necessarily mean that it would behave similarly to MWCNTs, or that the power-law behavior implies a long nanochain of Luttinger liquid states in series (Fig. 1). For the confined system we have, there are a few models to consider. The first option is one dimensional variable range hopping [19, 20] and fluctuation-induced tunneling. [21] These predict ln G ∼ −1/T^p, which contradicts the temperature dependence observed experimentally. The discrepancy between our results and the fluctuation-induced tunneling model [21, 22] exemplifies the fact that the separation between consecutive MWCNTs is minimal (Fig. 1). For a three-dimensional disordered system [23] in the critical regime of the metal-insulator transition, the temperature dependence of the conductivity follows a universal power law σ(T) ∼ T^γ with 0.33 < γ < 1. In our case all power-law exponents exceed unity (Table I), and therefore this model is invalid. We also rule out space-charge limited current, [24] where I ∼ V^β with β ∼ 2. According to our fits β > 2 (Table I and Fig. 3), which is in contradiction to this explanation.
Turning to the model of the Wigner crystal: it is caused by long range Coulomb interaction and occurs in solids with a dilute system of electrons, such as carbon nanotubes, [6] where the Coulomb energy E_C is larger than the kinetic energy of the electrons E_F. As our structure is comprised of consecutive MWCNTs divided by tunneling barriers and impurities (Fig. 1), this morphology can easily create potential wells where electrons get trapped and form a Wigner crystal. Every Wigner crystal state would be pinned down by the insulating tunneling barriers, impurities, and kinks. However, since each potential well is essentially a MWCNT with a small energy gap (<100 meV), it would be difficult to observe any Wigner crystal state. At this time, we cannot fully explore this behavior at low temperature. A classical Wigner crystal, on the other hand, would require an exponential relation of the conductance with temperature [19] at low temperatures, and one dimensional variable range hopping behavior. This is not recorded in our system. To be consistent with Luttinger liquid behavior the system must meet a few requirements: a power law of the current under large potential, eV ≫ k_B T, and of the conductance at small potential, eV ≪ k_B T. The number of channels in the nanofiber determines the power-law parameters β and α. More generally, the Luttinger liquid model states that the current depends [1-3, 6, 8] on voltage and temperature as

I = I_0 T^{α+1} sinh( eV / 2k_B T ) |Γ( (1+β)/2 + i eV/(2π k_B T) )|^2,   (1)

where Γ is the Gamma function, I_0 is a constant, and α and β are the power-law parameters introduced above. Fig. 4 shows how at low temperatures the current follows a universal curve. This agrees with the Luttinger liquid model described by Eq. (1). The Luttinger liquid model characterizes a system by its interaction parameter g, derived from the tunneling density of states. [1, 3] For strong repulsive electron-electron interactions, g ≪ 1, while for non-interacting electrons, g = 1. According to Luttinger liquid theory predictions, for a system where impurity and kink barriers are dominant, α follows

α = (1/g − 1)/4.

We estimate g_1 ∼ 0.04, g_2 ∼ 0.02, and g_3 ∼ 0.08, respectively, for the devices under consideration (a similar value can be reached also for intertube tunneling). This indicates a strong repulsive electron-electron interaction regime in the nanofibers. For a single conduction channel, e.g. at low temperatures where only a few channels survive and are active, the correlation between the power-law exponents is

β = α + 1.

In our measurements β is always lower than α + 1, as already observed in inorganic nanowires and polymer nanowires. [9, 12] The absolute values of α and β determined by us are also higher than those of MWCNTs [3, 7] (α, β ∼ 0.3); again, they are close to those found in conjugated conducting polymer nanowires [9] and long InSb nanowires. [12] Observing characteristics resembling Luttinger liquid behavior is quite striking, since our devices are ultra-long and composed of many segments divided by insulating barriers. This apparent strong confinement is probably due to the morphology of the nanofiber. Comparing our result to nanotube devices with a single kink, [1] where α ∼ 2.2, or to a polymer nanowire device, [9] where α ∼ 2.8, we can see that the value of α ∼ 2.9 for device 3 is close to these results despite its size.

FIG. 4. I/T^(α+1) vs. eV/k_B T plot for different temperatures, where the α value is taken from a fit of G(T) ∝ T^α (Table I). At low temperatures the curves gather onto a universal line, as Eq. (1) predicts. The continuous black line accentuates the universal line. The figure for device 1 is presented.
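To make the quantitative checks in this discussion reproducible, the sketch below evaluates the universal scaling form of Eq. (1), recovers g from α via the relation above, and repeats the lumped LC transmission-line estimate. The constants are the typical per-unit-length values quoted in the text; the implementation is our illustration, not the authors' analysis code.

```python
# Numerical companion to the Luttinger-liquid discussion.
import numpy as np
from scipy.special import gamma as cgamma  # complex-argument Gamma function

def I_universal(x, alpha, beta, I0=1.0):
    """Eq. (1) scaling form with x = eV / (k_B T); returns I / T^(alpha+1)."""
    z = (1 + beta) / 2 + 1j * x / (2 * np.pi)
    return I0 * np.sinh(x / 2) * np.abs(cgamma(z)) ** 2

alpha3 = 2.9                       # device 3 exponent from Table I
g3 = 1.0 / (4 * alpha3 + 1)        # invert alpha = (1/g - 1)/4
print(f"g_3 ~ {g3:.2f}")           # ~0.08, as quoted in the text

# Lumped LC transmission-line estimate: Z = n * sqrt(L/C) per MWCNT segment.
L = 1e-9          # kinetic inductance, ~1 nH/um
C = 30e-18        # capacitance, ~30 aF/um
Z = 275e3         # impedance of device 3 at 10 K, ~275 kOhm
n = Z / np.sqrt(L / C)
print(f"n ~ {n:.0f} joints")             # ~48, consistent with ~100 um / 1.5 um CNTs
print(f"alpha_bulk ~ {alpha3 / n:.2f}")  # ~0.06
```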
Our devices are clearly rich with kinks and impurities along their long course. Every kink, impurity, and PMMA barrier marks another tunneling barrier between consecutive Luttinger liquid states (Fig. 1). It is no wonder, then, that for devices 1 and 2 we get higher values of α and a strong power-law dependence. Finally, we discuss environmental Coulomb blockade. Basically, in the limit of a multichannel device, environmental Coulomb blockade corresponds to a Luttinger liquid, which for many modes in parallel predicts power-law behavior [7] as well. We consider the disordered conductor as an effective LC transmission line, which was found to be valid for MWCNTs in the range of not too small voltages, [7] eV ≫ k_B T. Considering our nanofiber as a lumped transmission line and neglecting the resistive part yields an impedance Z = n√(L/C), where n is the number of joints in series (Fig. 1) inside the nanochain of conduction, L ∼ 1 nH/μm is the kinetic inductance of a single MWCNT, and C ∼ 30 aF/μm is a typical value of the capacitance of a single MWCNT. [7] For the most conductive device, device 3, with the lowest α, Z ∼ 275 kΩ at 10 K (where eV ≫ k_B T). From this we find n ∼ 50. This is consistent with the length of our device (∼100 μm) and the average length of the MWCNTs (1.5 μm). For a single MWCNT, the relevant exponent is that for tunneling into the bulk of the nanotube, α_bulk. Bulk tunneling can be used as a building block to derive other exponents; [7] for example, for tunneling into the end of a Luttinger liquid state, α_end = 2α_bulk. A theoretical value for α_bulk was calculated [7] as 0.02-0.08. Since in our nanofibers the conduction channels are connected in series (Fig. 1), we consider n as the number of joints in the nanochain and correspondingly α ∼ n · α_bulk. For the lowest value of α ∼ 2.9 we get α_bulk ∼ 0.06. This agrees with the theoretical predictions. [7] We note, however, that previous experimental results [7] found higher values for α_bulk. In conclusion, we showed quasi-one dimensional transport behavior in electrospun nanofibers. We found that the conductance behaves as a power law from room temperature down to ∼60 K. The I-V curve is also described by a power law. These characteristics are a fingerprint of one dimensional transport systems with Luttinger liquid or environmental Coulomb blockade behavior. Therefore, we have constructed a device made of many Luttinger liquid states in series which behaves as an effective lumped LC transmission line. Despite the obvious similarities to single MWCNTs and polymer nanowires, it bears many differences that still need to be resolved.

FIG. 1. (a) Optical image of a representative individual MWCNT-PMMA nanofiber. The nanofiber is visible as a faint black line over the trench and is connected by direct bonds. To assist the eye, the image was sharpened and arrows point at the nanofiber. It can be seen that only a single nanofiber is nested across the trench. The trench width is 50 μm. The total length of the nanofiber between the bonds is ∼100 μm. (b) A transmission electron microscopy image of the nanofiber. The MWCNTs align inside the nanofiber in series and create a nanochain of conduction with minimal separation.

TABLE I. Values for the nanofibers used in the transport experiments.

FIG. 2. Conductance vs. temperature for the working devices. The black lines are a fit over the results at high temperatures, as expected from the condition eV ≪ k_B T. All devices obey power-law behavior, where in highly resistive devices
3,726.8
2014-01-09T00:00:00.000
[ "Physics" ]
Notes on Schubert, Grothendieck and Key Polynomials

We introduce a common generalization of (double) Schubert, Grothendieck, Demazure, dual and stable Grothendieck polynomials, and Di Francesco-Zinn-Justin polynomials. Our approach is based on the study of algebraic and combinatorial properties of the reduced rectangular plactic algebra and associated Cauchy kernels.

Introduction

The Grothendieck polynomials were introduced by A. Lascoux and M.-P. Schützenberger in [29] and studied in detail in [37]. There are two equivalent versions of the Grothendieck polynomials, depending on a choice of a basis in the Grothendieck ring K*(Fl_n) of the complete flag variety Fl_n. The basis {exp(ξ_1), . . . , exp(ξ_n)} in K*(Fl_n) is one choice, and another choice is the basis {1 − exp(−ξ_j), 1 ≤ j ≤ n}, where ξ_j, 1 ≤ j ≤ n, denote the Chern classes of the tautological line bundles L_j over the flag variety Fl_n. In the present paper we use the basis in a deformed Grothendieck ring K*,β(Fl_n) of the flag variety Fl_n generated by the set of elements {x_i = x_i^(β) = 1 − exp(β ξ_i), i = 1, . . . , n}. This basis has been introduced and used for the construction of the β-Grothendieck polynomials in [8], [9]. A basis in the classical Grothendieck ring of the flag variety in question corresponds to the choice β = −1. For arbitrary β, the ring generated by the elements {x_i^(β), 1 ≤ i ≤ n} has been identified with the Grothendieck ring corresponding to the generalized cohomology theory associated with the multiplicative formal group law F(x, y) = x + y + β x y, see [15]. The Grothendieck polynomials corresponding to the classical K-theory ring K*(Fl_n), i.e. the case β = −1, were studied in depth by A. Lascoux and M.-P. Schützenberger in [30]. The β-Grothendieck polynomials have been studied in [8], [10], [15]. The plactic monoid over a finite totally ordered set A = {a < b < c < . . . < d} is the quotient of the free monoid generated by elements from A subject to the elementary Knuth transformations [21]

bca = bac & acb = cab, and bab = bba & aba = baa, (1.1)

for any triple {a < b < c} ⊂ A. To our knowledge, the concept of "plactic monoid" has its origins in a paper by C. Schensted [52], concerning the study of the longest increasing subsequence of a permutation, and a paper by D. Knuth [21], concerning the study of combinatorial and algebraic properties of the Robinson-Schensted correspondence 1. As far as we know, this monoid, and the (unital) algebra P(A) corresponding to that monoid 2, were introduced, studied and used in [53], Section 5, to give the first complete proof of the famous Littlewood-Richardson rule in the theory of symmetric functions. A bit later this monoid was named the "monoïde plaxique" and studied in depth by A. Lascoux and M.-P. Schützenberger [28]. The algebra corresponding to the plactic monoid is commonly known as the plactic algebra. One of the basic properties of the plactic algebra [53] is that it contains a distinguished commutative subalgebra which is generated by the noncommutative elementary symmetric polynomials

e_k(A_n) = Σ_{i_1 > i_2 > . . . > i_k} a_{i_1} a_{i_2} · · · a_{i_k}, k = 1, . . . , n, (1.2)

see e.g. [53], Corollary 5.9, [7]. We refer the reader to the nicely written overview [40] of the basic properties and applications of the plactic monoid in Combinatorics. It is easy to see that the plactic relations for two letters a < b, namely aba = baa, bab = bba, imply the commutativity of the noncommutative elementary polynomials in two variables.
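This commutativity can be checked by brute force for small alphabets. The sketch below computes plactic normal forms via Schensted row insertion (two words are plactic-equivalent iff they share an insertion tableau) and verifies that e_1 and e_2 commute in the plactic algebra on three letters; it is an illustrative check, not part of the paper's proofs.

```python
# Verify e_1 * e_2 = e_2 * e_1 in the plactic algebra on {1,2,3} by comparing
# multisets of plactic normal forms (Schensted insertion tableaux).
from collections import Counter
from itertools import combinations

def insert(tableau, x):
    """Schensted row insertion of x; rows are weakly increasing tuples."""
    rows = [list(r) for r in tableau]
    for row in rows:
        bigger = [j for j, y in enumerate(row) if y > x]
        if not bigger:
            row.append(x)
            return tuple(tuple(r) for r in rows)
        j = bigger[0]
        row[j], x = x, row[j]          # bump the displaced entry downward
    rows.append([x])
    return tuple(tuple(r) for r in rows)

def P_tableau(word):
    t = ()
    for x in word:
        t = insert(t, x)
    return t

n = 3
e = {k: [tuple(sorted(c, reverse=True))            # strictly decreasing words
         for c in combinations(range(1, n + 1), k)] for k in (1, 2)}

lhs = Counter(P_tableau(u + v) for u in e[1] for v in e[2])
rhs = Counter(P_tableau(v + u) for u in e[1] for v in e[2])
print(lhs == rhs)  # True: e_1 and e_2 commute modulo the Knuth relations
```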
In other words, the plactic relations for two letters imply that ba(a + b) = (a + b)ba for a < b. It has been proved in [7] that these relations, together with the Knuth relations (1.1) for three letters a < b < c, imply the commutativity of the noncommutative elementary symmetric polynomials for any number of variables. In the present paper we prove that in fact the commutativity of the noncommutative elementary symmetric polynomials for n = 2 and n = 3 implies the commutativity of these polynomials for all n, see Theorem 2.23 3. One of the main objectives of the present paper is to study combinatorial properties of the generalized plactic Cauchy kernel, where P_n stands for the set of parameters {p_ij, 2 ≤ i + j ≤ n + 1, i > 1, j > 1}, and U := U_n stands for a certain noncommutative algebra we are interested in, see Section 5. We also want to bring to the attention of the reader some interesting combinatorial properties of the rectangular Cauchy kernels F(P_{n,m}, U), where P_{n,m} = {p_ij}, 1 ≤ i ≤ n, 1 ≤ j ≤ m. We treat these kernels in the (reduced) plactic algebras PC_n and PF_{n,m} correspondingly. The algebras PC_n and PF_{n,m} are finite dimensional and have bases parameterized by certain Young tableaux described in Section 5.1 and Section 6 correspondingly. Decomposition of the rectangular Cauchy kernel with respect to the basis in the algebra PF_{n,m} mentioned above gives rise to a set of polynomials which are common generalizations of the (double) Schubert, β-Grothendieck, Demazure and Stanley polynomials. To be more precise, the polynomials listed above correspond to certain quotients of the plactic algebra PF_{n,m} and appropriate specializations of the parameters {p_ij} involved in our definition of the polynomials U_α({p_ij}), see Section 6. As was pointed out at the beginning of the Introduction, the Knuth (or plactic) relations (1.1) were discovered in [21] in the course of the study of algebraic and combinatorial properties of the Robinson-Schensted correspondence. Motivated by the study of basic properties of a quantum version of the tropical/geometric Robinson-Schensted-Knuth correspondence -work in progress, but see [1], [18], [19], [47], [48] for the definition and basic properties of the tropical/geometric RSK -the author of the present paper came to the discovery that certain deformations of the Knuth relations preserve the Hilbert series (resp. the Hilbert polynomials) of the plactic algebras P_n and F_n (resp. the algebras PC_n and PF_n). More precisely, let {q_2, . . . , q_n} be a set of (mutually commuting) parameters, and U_n := {u_1, . . . , u_n} be a set of generators of the free associative algebra over Q of rank n. Let Y, Z ⊂ [1, n] be subsets such that Y ∪ Z = [1, n] and Y ∩ Z = ∅. Let us set p(a) = 0 if a ∈ Y, and p(a) = 1 if a ∈ Z. Define the super quantum Knuth relations among the generators u_1, . . . , u_n as follows:

(SPL_q): (−1)^{p(i)p(k)} q_k u_j u_i u_k = u_j u_k u_i, i < j ≤ k,
(−1)^{p(i)p(k)} q_k u_i u_k u_j = u_k u_i u_j, i ≤ j < k.

We define
• the deformed/quantum superplactic algebra SQP_n to be the quotient of the free associative algebra Q⟨u_1, . . . , u_n⟩ by the two-sided ideal generated by the set of quantum Knuth relations (SPL_q),
• the reduced deformed/quantum superplactic algebras SQPC_n and SQPF_{n,m} to be the quotients of the algebra SQP_n by the two-sided ideals described in Definitions 5.13 and 6.6 correspondingly.

We state the following conjecture.

Conjecture 1.1 The algebra SQP_n and the algebras SQPC_n and SQPF_{n,m} are flat deformations of the algebras P_n, PC_n and PF_{n,m} correspondingly.
In fact one can consider more general deformations of the Knuth relations: for example, take a set of parameters Q := {q_{ik}, 1 ≤ i < k ≤ n} and impose on the set of generators {u_1, ..., u_n} the following relations:

q_{ik} u_j u_i u_k = u_j u_k u_i,  i < j ≤ k,
q_{ik} u_i u_k u_j = u_k u_i u_j,  i ≤ j < k.

However, we do not know how to describe a set of conditions on the parameters Q which implies the flatness of the corresponding quotient algebra(s), nor do we know an interpretation and the dimensions of the algebras SQPC_n and SQPF_{n,m} for "generic" values of the parameters Q. We expect that the dimensions of the algebras SQPC_n and SQPF_{n,m} each depend piecewise polynomially on the set of parameters {q_{ij} ∈ Z_{≥0}, 1 ≤ i < j ≤ n}, and we pose the problem of describing their polynomiality chambers. We also mention, and leave for a separate publication(s), the case of algebras and polynomials associated with the superplactic monoid [44], [27], which corresponds to the relations (SPL_q) with q_i = 1 for all i. Finally, we point out the interesting and important paper [43], wherein the case Z = ∅, with all deformation parameters equal to each other, has been independently introduced and studied in depth.

Let us repeat that an important property of the plactic algebra P_n is that the noncommutative elementary polynomials

e_k(u_1, ..., u_{n−1}) := Σ_{n−1 ≥ a_1 > a_2 > ... > a_k ≥ 1} u_{a_1} ··· u_{a_k},   k = 1, ..., n − 1,

generate a commutative subalgebra inside the plactic algebra P_n, see e.g. [28], [7]. Therefore all the finite dimensional algebras introduced in the present paper have a distinguished finite dimensional commutative subalgebra. We intend to describe these algebras explicitly in a separate publication. In Section 2 we state and prove necessary and sufficient conditions for the noncommutative elementary polynomials to form a mutually commuting family. Surprisingly enough, to check the commutativity of the noncommutative elementary polynomials for any n, it is enough to check these conditions only for n = 2, 3. However, a combinatorial meaning of the generalization of the Lascoux–Schützenberger plactic algebra P_n so obtained is still missing.

The plactic algebra PF_{n,m} introduced in Section 6 has a monomial basis parametrized by the set of Young tableaux of shape λ ⊂ (n^m) filled with numbers from the set {1, ..., m}. In the case n = m it is well known [14], [25], [45] that this number is equal to the number of symmetric plane partitions fitting inside the cube n × n × n. Surprisingly enough, this number admits a factorization into the product of the number of totally symmetric plane partitions (TSPP) and the number of totally symmetric self-complementary plane partitions (TSSCPP) fitting inside the same cube. A similar phenomenon happens if |m − n| ≤ 2, see Section 6. More precisely, in addition to the well-known equalities

• #|B_{1,n}| = 2^n,  #|B_{2,n}| = \binom{2n+1}{n},  #|B_{3,n}| = 2^n Cat_n, [55], A003645,

we obtain identities involving ASMHT(2n) and CSSCPP(2n), where ASMHT(2n) denotes the number of alternating sign matrices of size 2n × 2n invariant under a half-turn and CSSCPP(2n) denotes the number of cyclically symmetric self-complementary plane partitions in the 2n-cube. It is well known that ASMHT(2n) = ASM(n) × CSPP(n), where CSPP(n) denotes the number of cyclically symmetric plane partitions in the n-cube, and CSSCPP(2n) = ASM(n)², see e.g. [3], [26], [55], A006366.
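The alternating-sign-matrix counts entering these identities are easy to evaluate. As a sketch (our code; the product formula for ASM(n) is the classical Andrews/Zeilberger one), the snippet below computes ASM(n) = ∏_{k=0}^{n−1} (3k+1)!/(n+k)! and, via the identity just quoted, CSSCPP(2n) = ASM(n)².

```python
from math import factorial
from fractions import Fraction

def asm(n):
    """Number of n x n alternating sign matrices (= TSSCPP(n)),
    via the classical product formula prod_{k=0}^{n-1} (3k+1)!/(n+k)!."""
    prod = Fraction(1)
    for k in range(n):
        prod *= Fraction(factorial(3 * k + 1), factorial(n + k))
    return int(prod)

assert [asm(n) for n in range(1, 7)] == [1, 2, 7, 42, 429, 7436]
# CSSCPP(2n) = ASM(n)^2, so e.g. CSSCPP(6) = asm(3)**2 = 49
print(asm(3) ** 2)
```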
• Construct a bijection between the set of plane partitions fitting inside the n-cube and the set of (ordered) triples (π_1, π_2, ℘), where (π_1, π_2) is a pair of TSSCPP(n) and ℘ is a cyclically symmetric plane partition fitting inside the n-cube.

These relations have straightforward proofs based on the explicit product formulas for the numbers involved, but bijective proofs of these identities remain an open problem. It follows from [28], [38] that the dimension of the (reduced) plactic algebra PC_n is equal to the number of alternating sign matrices of size n × n (ASM(n) = TSSCPP(n)). Therefore the key-Grothendieck polynomials can be obtained from the U-polynomials (see Section 6, Theorem 6.9) after the specialization p_{ij} = 0 if i + j > n + 1. Namely, for any permutation w ∈ S_n and composition ζ ⊂ δ_n we introduce polynomials by means of a collection of divided difference operators which satisfy the Coxeter and Hecke relations, for any reduced decomposition w = s_{i_1} ··· s_{i_ℓ} of the permutation in question; ζ⁺ denotes the unique partition obtained from ζ by ordering its parts, and v_ζ ∈ S_n denotes the minimal length permutation such that v_ζ(ζ) = ζ⁺. Assume that h = 1. If α = γ = 0, these polynomials coincide with the β-Grothendieck polynomials [8]; if β = α = 1, γ = 0, these polynomials coincide with the Di Francesco–Zinn-Justin polynomials [12]; if β = γ = 0, these polynomials coincide with the dual α-Grothendieck polynomials H^α_w(X).

Conjecture 1.2. For any permutation w ∈ S_n and any composition ζ ⊂ δ_n, these polynomials have nonnegative coefficients.

We expect that these polynomials have some geometrical meaning to be discovered. More generally, we study divided difference type operators depending on parameters a, b, c, h, e and satisfying the 2D-Coxeter relations. We find that the necessary and sufficient condition ensuring the validity of the 2D-Coxeter relations is the following relation among the parameters:

(a + b)(a − c) + he = 0.

Therefore, if the above relation between the parameters a, b, c, d, h, e holds, then for any permutation w ∈ S_n the operator T^{(a,b,c,d,h,e)}_w := T_{i_1} ··· T_{i_ℓ}, where w = s_{i_1} ··· s_{i_ℓ} is any reduced decomposition of w, is well defined. Hence, under the same assumption on the parameters, to any permutation w ∈ S_n one can attach a well-defined polynomial G^{(a,b,c,d,h,e)}_w(X), and in much the same fashion define polynomials D^{(a,b,c,d,h,e)}_α(X) for any composition α such that α_i ≤ n − i for all i. We have used the notation T^{(a,b,c,d,h,e)}_w(x) to point out that this operator acts only on the variables X = (x_1, ..., x_n); for any composition α ∈ Z^n_{≥0}, α⁺ denotes the unique partition obtained from α by reordering its parts in (weakly) decreasing order, and w_α denotes the unique minimal length permutation in the symmetric group S_n such that w_α(α) = α⁺. In the present paper we are interested in listing conditions on the parameters A := {a, b, c, d, h, e}, with the above constraint, which ensure that the polynomials G^{(a,b,c,d,h,e)}_w(X) and D^{(a,b,c,d,h,e)}_α(X), or their specializations at x_i = 1 for all i, have nonnegative coefficients; we state the corresponding conjectures below. In the present paper we treat the case A = (−β, β + α + γ, γ, 1, (α + γ)(β + γ)). As was pointed out above, in this case the polynomials G^A_w(X) are a common generalization of the Schubert, β-Grothendieck, dual β-Grothendieck, and Di Francesco–Zinn-Justin polynomials. We expect an interpretation of the polynomials G^A_w for general β, α and γ, but it has yet to be found.
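In the classical, undeformed case the 2D-Coxeter relations reduce to the familiar braid relation ∂_1∂_2∂_1 = ∂_2∂_1∂_2 for Newton's divided differences, which is easy to verify symbolically. A small sympy sketch (our code):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')

def divdiff(f, a, b):
    """Newton divided difference: (f - f with a, b swapped) / (a - b)."""
    swapped = f.subs({a: b, b: a}, simultaneous=True)
    return sp.cancel((f - swapped) / (a - b))

d1 = lambda f: divdiff(f, x1, x2)
d2 = lambda f: divdiff(f, x2, x3)

f = x1**3 * x2**2 + 5 * x1 * x3**2 + x2   # arbitrary test polynomial
assert sp.expand(d1(d2(d1(f))) - d2(d1(d2(f)))) == 0
print("braid relation d1 d2 d1 = d2 d1 d2 holds")
```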
As was pointed out earlier, one of the basic properties of the plactic monoid P_n is that the noncommutative elementary symmetric polynomials {e_k(u_1, ..., u_{n−1})}_{1≤k≤n−1} generate a commutative subalgebra in the plactic algebra in question. One can reformulate this statement as follows. Consider the generating function

A_n(x) := (1 + x u_{n−1})(1 + x u_{n−2}) ··· (1 + x u_1) = Σ_{k≥0} e_k(U) x^k,

where we set e_0(U) = 1. Then the commutativity property of the noncommutative elementary symmetric polynomials is equivalent to the commutativity relation A_n(x) A_n(y) = A_n(y) A_n(x), which holds in the plactic algebra P_n as well as in the generic plactic algebra P̄_n, see [7] and Theorem 2.23. Now let us consider the Cauchy kernel, where we assume that the pairwise commuting variables z_1, ..., z_{n−1} commute with all the generators of the algebras P_n and P̄_n. In what follows we consider the natural completion of the plactic algebra P_n, which allows us to consider elements of the form (1 + x u_i)^{−1}; elements of this form exist in any Hecke-type quotient of the plactic algebra P_n. With this assumption in mind, let us compute the action of the divided difference operators ∂_{z_{i,i+1}} on the Cauchy kernel. In this computation the commutativity of the elements A_i(x) and A_i(y) plays the key role: by the basic property of the elements A_i(x), the expression A_i(z_i) A_i(z_{i+1}) is symmetric with respect to z_i and z_{i+1}, and hence is invariant under the action of the divided difference operator ∂_{z_{i,i+1}}. It is easy to see that if one adds Hecke-type relations on the generators, then in the quotient of the plactic algebra P_n by these Hecke-type relations and by the "locality" relations one obtains a closed identity; finally, if a = 0, this identity is equivalent to the statement [9] that in the IdCoxeter algebra IC_n the Cauchy kernel C(P_n, U) is the generating function for the β-Grothendieck polynomials. Moreover, each (generalized) double β-Grothendieck polynomial is a positive linear combination of the key-Grothendieck polynomials. In the special case β = −1 and p_{ij} = x_i + y_j if 2 ≤ i + j ≤ n + 1, p_{ij} = 0 if i + j > n + 1, this result was stated in [39].

As a possible means to define affine versions of the polynomials treated in the present paper, we introduce the double affine nilCoxeter algebra of type A and give a construction of a generic family of Hecke-type elements which will be put to use in the present paper. In the Appendix we include several examples of the polynomials studied in the present paper, to illustrate the results obtained in these notes. We also include an expository text concerning the MacNeille completion of a poset, to draw the reader's attention to this subject: it is the MacNeille completion of the poset associated with the (strong) Bruhat order on the symmetric group that was one of the main themes of study in the present paper.

A bit of history. Originally these notes were designed as a continuation of [8]. The main purpose was to extend the methods developed in [10] to obtain, by use of the plactic algebra, a noncommutative generating function for the key (or Demazure) polynomials introduced by A. Lascoux and M.-P. Schützenberger [34]. The results concerning the polynomials introduced in Section 4, except the Hecke–Grothendieck polynomials (see Definition 4.6), were presented in my lecture courses "Schubert Calculus" delivered at the Graduate School of Mathematical Sciences, the University of Tokyo, November 1995 – April 1996, and at the Graduate School of Mathematics, Nagoya University, October 1998 – April 1999.
I want to thank Professor M. Noumi and Professor T. Nakanishi, who made these courses possible. Some early versions of the present notes have circulated around the world, and I have now been asked to make them available to a wide audience. I would like to thank Professor M. Ishikawa (Department of Mathematics, Faculty of Education, University of the Ryukyus, Okinawa, Japan) and Professor S. Okada (Graduate School of Mathematics, Nagoya University, Nagoya, Japan) for valuable comments.

The plactic algebra P_n is a (unital) associative algebra over Z generated by elements {u_1, ..., u_{n−1}} subject to the plactic relations.

Proposition 2.2 ([28]). Tableau words in the alphabet U = {u_1, ..., u_{n−1}} form a basis of the plactic algebra P_n. In other words, each plactic class contains a unique tableau word.

Remark 2.3. There exists another algebra over Z which has the same Hilbert series as the plactic algebra P_n. Namely, define the algebra L_n to be the associative algebra over Z generated by the elements {e_1, e_2, ..., e_{n−1}} subject to the set of relations

(e_i, (e_j, e_k)) := e_i e_j e_k − e_j e_i e_k − e_j e_k e_i + e_k e_j e_i = 0, for all 1 ≤ i, j, k ≤ n − 1, j < k.

Note that the number of defining relations in the algebra L_n is equal to 2\binom{n}{3}. One can show that the dimension of the degree-k homogeneous component L_n^{(k)} of the algebra L_n is equal to the number of semistandard Young tableaux of size k filled with numbers from the set {1, 2, ..., n}.

Definition 2.4. The local plactic algebra LP_n is the associative algebra over Z generated by elements {u_1, ..., u_{n−1}} subject to the local plactic relations. One can show (A.K.) the corresponding statement.

Definition 2.5 (Nil Temperley–Lieb algebra). Denote by TL_n^{(0)} the quotient of the local plactic algebra LP_n by the two-sided ideal generated by the elements {u_1², ..., u_{n−1}²}.

Proposition 2.6. The coefficient of t^k in the Hilbert polynomial Hilb(TL_n^{(0)}, t) is the number of 321-avoiding permutations of the set {1, 2, ..., n} having inversion number equal to k; see [55], A140717, for other combinatorial interpretations of the polynomials Hilb(TL_n^{(0)}, t). We denote by TL_n^{(β)} the quotient of the local plactic algebra LP_n by the two-sided ideal generated by the elements {u_1² − β u_1, ..., u_{n−1}² − β u_{n−1}}.

Definition 2.7. The modified plactic algebra MP_n is the associative algebra over Z generated by {u_1, ..., u_{n−1}} subject to the set of relations (PL1) together with additional relations.

Definition 2.8. The (reduced) nilplactic algebra NP_n is the associative algebra over Q generated by {u_1, ..., u_{n−1}} subject to the set of relations (PL1) together with nilplactic relations (see the footnote below).

Proposition 2.10. The nilplactic algebra NP_n has finite dimension; its Hilbert polynomial Hilb(NP_n, t) has degree \binom{n}{2}, and dim(NP_n)_{\binom{n}{2}} = 1.

Definition 2.12. The idplactic algebra IP_n^{(β)} is the associative algebra over Q(β) generated by {u_1, ..., u_{n−1}} subject to the relations (2.5) and the set of relations (PL1). In other words, the idplactic algebra IP_n is the quotient of the plactic algebra P_n by the two-sided ideal generated by the corresponding elements.

Proposition 2.13. Each idplactic class contains a unique tableau word of smallest length. For each word w denote by rl(w) the length of the unique tableau word of minimal length which is idplactic equivalent to w.

Footnote 6. The original definition of the nilplactic relations given in [32] involves only the relations (PL1); it was shown in [33] that the Schensted construction for the plactic congruence extends to the nilplactic case.
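Proposition 2.6 is easy to test by brute force. The sketch below (our code) tabulates 321-avoiding permutations of {1, ..., n} by inversion number; the total over all k is the Catalan number Cat_n, consistent with [55], A140717.

```python
from itertools import permutations, combinations
from collections import Counter
from math import comb

def avoids_321(p):
    """True if no i < j < k has p[i] > p[j] > p[k]."""
    return not any(p[i] > p[j] > p[k] for i, j, k in combinations(range(len(p)), 3))

def inversions(p):
    return sum(p[i] > p[j] for i, j in combinations(range(len(p)), 2))

for n in range(2, 7):
    hist = Counter(inversions(p) for p in permutations(range(1, n + 1)) if avoids_321(p))
    coeffs = [hist[k] for k in range(max(hist) + 1)]
    assert sum(coeffs) == comb(2 * n, n) // (n + 1)   # Catalan number
    print(n, coeffs)
```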
However, as is seen from the following example, as a consequence of the relations (PL1) the noncommutative elementary symmetric polynomials e_1(u_1, u_2, u_3) and e_2(u_1, u_2, u_3) do not commute modulo the nilplactic congruence defined in [32]. Indeed, u_1 u_3 u_1 ≡ u_3 u_1 u_3. In order to guarantee the commutativity of all noncommutative elementary polynomials we add further relations; cf. the definition of the idplactic relations listed in Definition 2.11.

Example 2.14. Consider words in the alphabet {a < b < c < d}. Then rl(dbadc) = 4 = rl(cadbd) and rl(dbadbc) = 5 = rl(cbadbd). Note that according to our definition, the tableau words w = 31, w = 13 and w = 313 belong to different idplactic classes.

Proposition 2.15. The idplactic algebra IP_n^{(β)} has finite dimension, and its Hilbert polynomial has degree \binom{n}{2}.

Definition 2.17. The idplactic Temperley–Lieb algebra PTL_n^{(β)} is defined to be the quotient of the idplactic algebra IP_n^{(β)} by the two-sided ideal generated by the corresponding elements; one has Coeff_{t^{max}} Hilb(PTL_n, t) = 1 if n is even, and = 2 if n is odd.

Definition 2.18. The nilCoxeter algebra NC_n is defined to be the quotient of the nilplactic algebra NP_n by the two-sided ideal generated by the corresponding elements. Clearly the nilCoxeter algebra NC_n is also a quotient of the modified plactic algebra MP_n by a two-sided ideal.

Definition 2.19. The idCoxeter algebra IC_n^{(β)} is defined to be the quotient of the idplactic algebra IP_n^{(β)} by the two-sided ideal generated by the corresponding elements. It is well known that the algebras NC_n and IC_n^{(β)} have dimension n!, and the elements {u_w := u_{i_1} ··· u_{i_ℓ}}, where w = s_{i_1} ··· s_{i_ℓ} is any reduced decomposition of w ∈ S_n, form a basis of the nilCoxeter and idCoxeter algebras NC_n and IC_n^{(β)}.

Remark 2.20. There is a common generalization of the algebras defined above, due to S. Fomin and C. Greene [7]. Namely, define the generalized plactic algebra P̄_n to be the associative algebra generated by elements u_1, ..., u_{n−1} subject to the relations (PL2) and the relations (2.5); the relation (2.5) can also be written in an equivalent form. Then the elements A_{i,j}(x) and A_{i,j}(y) commute in the generalized plactic algebra P̄_n. Moreover, the algebra C_{1,n} is a maximal commutative subalgebra of P̄_n. To establish Theorem 2.20 we are going to prove a more general result. To start with, let us define the generic plactic algebra.

Definition 2.23. The generic plactic algebra P̄_n is the associative algebra over Z generated by {e_1, ..., e_{n−1}} subject to the set of relations (2.6)–(2.8). It is clearly seen that the relations (2.6)–(2.8) are consequences of the plactic relations (PL1) and (PL2).
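The statement that NC_n has dimension n! with basis {u_w} can be illustrated by a small model (our code): represent u_w by the permutation w and implement the product u_i · u_w = u_{s_i w} when the length goes up, and 0 otherwise. The sketch checks that the generators span exactly n! nonzero basis elements and that u_i² = 0.

```python
def s_times(i, w):
    """u_i * u_w in the nilCoxeter algebra NC_n: u_{s_i w} if the length
    increases (value i occurs before i+1 in w), else 0 (None)."""
    if w.index(i) < w.index(i + 1):
        return tuple(i + 1 if v == i else i if v == i + 1 else v for v in w)
    return None

n = 4
identity = tuple(range(1, n + 1))
basis, frontier = {identity}, {identity}
while frontier:                       # close the span under left multiplication
    new = set()
    for w in frontier:
        for i in range(1, n):
            v = s_times(i, w)
            if v is not None and v not in basis:
                new.add(v)
    basis |= new
    frontier = new

assert len(basis) == 24               # n! = dim NC_4
w1 = s_times(1, identity)
assert s_times(1, w1) is None         # u_1^2 = 0
```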
Theorem 2.24. Define A_n(x) := (1 + x e_{n−1})(1 + x e_{n−2}) ··· (1 + x e_1). Then the elements A_n(x) and A_n(y) commute in the generic plactic algebra P̄_n. Moreover, the elements A_n(x) and A_n(y) commute if and only if the generators {e_1, ..., e_{n−1}} satisfy the relations (2.6)–(2.8).

Proof. For n = 2, 3 the statement of Theorem 2.24 is obvious. Now assume that the statement is true in the algebra P̄_n; we have to prove the commutativity for n + 1. Using the relations (2.7) we can move the commutator (e_i, e_n) to the left, since i < a < n, until we meet the term (1 + x e_n); using the relations (2.6) we then arrive at the required relation. Finally, let us observe that, if i < j, then

(e_i + e_j, e_j e_i) = 0 ⟺ (2.6);

if i < j < k and the relations (2.6) hold, then

(e_i + e_j + e_k, e_j e_i + e_k e_j + e_k e_i) = 0 ⟺ (2.7);

if i < j < k and the relations (2.6) and (2.7) hold, then

(e_i + e_j + e_k, e_k e_j e_i) = 0 ⟺ (2.8);

and the relations (e_j e_i + e_k e_j + e_k e_i, e_k e_j e_i) = 0 are a consequence of the above ones.

Definition 2.25 (Compatible sequences). Given a word a ∈ R(T) (resp. a ∈ IR(T)), denote by C(a) (resp. IC(a)) the set of sequences of positive integers, called compatible sequences, satisfying the conditions (2.10). Finally, define the set C(T) (resp. IC(T)) to be the union of the sets C(a) (resp. IC(a)), where a runs over all words which are plactic (resp. idplactic) equivalent to the word w(T). Let P := {p_{i,j}} be a set of (mutually commuting) variables.

Definition 2.27. (1) Let T be a semistandard tableau and n := |T|. Define the double key polynomial K_T(P) corresponding to the tableau T by formula (2.11). (2) Let T be a semistandard tableau and n := |T|. Define the double key Grothendieck polynomial GK_T(P) corresponding to the tableau T by formula (2.12). In the case p_{i,j} = x_i + y_j for all i, j, where X = {x_1, ..., x_n} and Y = {y_1, ..., y_n} denote two sets of variables, we will write K_T(X, Y), GK_T(X, Y), ..., instead of K_T(P), GK_T(P), ....

Definition 2.28. Let T be a semistandard tableau; denote by α(T) = (α_1, ..., α_n) the exponent of the smallest monomial in the corresponding set. We call the composition α(T) the bottom code of the tableau T.

Divided difference operators. In this subsection we recall some basic properties of divided difference operators that will be put to use in subsequent sections. For more details see [46]. Let f be a function of the variables x and y (and possibly other variables), and let η ≠ 0 be a parameter. Define the divided difference operator ∂_{xy}(η) by

∂_{xy}(η) f = (f − s^η_{xy} f) / (x − η^{−1} y),

where the operator s^η_{xy} acts on the variables (x, y, ...) according to the rule: s^η_{xy} transforms the pair (x, y) into (η^{−1} y, η x) and fixes all other variables. We set, by definition, s^η_{yx} := s^{η^{−1}}_{xy}. The operator ∂_{xy}(η) takes polynomials to polynomials and has degree −1. The case η = 1 corresponds to the Newton divided difference operator ∂_{xy} := ∂_{xy}(1). Let x_1, ..., x_n be independent variables, and let P_n := Q[x_1, ..., x_n]; for each i < j put ∂_{ij} := ∂_{x_i x_j}. It is interesting to consider also an additive, or affine, analogue ∂_{xy}[k] of the divided difference operators ∂_{xy}(η).

Remark 4.2. We can also introduce polynomials Z_w, defined recursively. One can show that the corresponding recursion, formula (6), was used by A. Lascoux to describe the transition on Grothendieck polynomials, i.e. the stable decomposition of any Grothendieck polynomial corresponding to a permutation w ∈ S_n into a sum of Grassmannian ones corresponding to a collection of Grassmannian permutations v_λ ∈ S_∞, see [37] for details. The above-mentioned operators D_i were used in [37] to construct a basis {Ω_α, α ∈ Z_{≥0}} that deforms the basis built up from the Demazure (also known as key) polynomials. Therefore the polynomials KG[α](X; β = −1) coincide with those introduced by A. Lascoux in [37]. In [51] the authors give a conjectural construction of the polynomials Ω_α based on the use of extended Kohnert moves; see e.g. [45], Appendix by N. Bergeron, for the definition of the Kohnert moves.
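A note on the operator ∂_xy(η) defined above: the denominator x − η⁻¹y in the displayed formula is reconstructed from the two stated properties (η = 1 recovers Newton's operator, and polynomials map to polynomials of one degree lower); with that denominator both properties do hold, as the sympy sketch below (our code) checks on monomials.

```python
import sympy as sp

x, y, eta = sp.symbols('x y eta')

def s_eta(f):
    """s^eta_{xy}: sends (x, y) to (y/eta, eta*x) and fixes everything else."""
    return f.subs({x: y / eta, y: eta * x}, simultaneous=True)

def d_eta(f):
    """Reconstructed operator: d_xy(eta) f = (f - s_eta f) / (x - y/eta)."""
    return sp.cancel((f - s_eta(f)) / (x - y / eta))

# polynomials go to polynomials, one degree lower
for f in [x, y, x**2, x**2 * y, x**3 + y**3]:
    assert d_eta(f).is_polynomial(x, y)

# eta = 1 recovers Newton's divided difference, e.g. on x^2
assert sp.expand(d_eta(x**2).subs(eta, 1)) == x + y
print(d_eta(x), d_eta(y))   # 1 and -eta: degree drops by one, as claimed
```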
We state the conjecture that our polynomials coincide with the polynomials Ω_α defined in [51] by means of the K-theoretic versions of the Kohnert moves. For β = −1 this conjecture was stated in [51]. It seems an interesting problem to relate the K-theoretic Kohnert moves to the moves of 1's introduced in [8]. We will use the notation S_w(X), G_w(X), ..., for the polynomials S_w(X, 0), G_w(X, 0), ....

(Di Francesco–Zinn-Justin polynomials) Definition 4.4. For each permutation w ∈ S_n the Di Francesco–Zinn-Justin polynomials DZ_w(X) are defined recursively as follows: if w is the longest element of S_n, then DZ_w(X) = R_δ(X, 0); otherwise, if w and i are such that w_i > w_{i+1}, i.e. l(w s_i) = l(w) − 1, then DZ_{w s_i} is obtained by applying the corresponding divided difference operator. (1) The polynomials DZ_w(X) have nonnegative integer coefficients. (2) For each permutation w ∈ S_n the polynomial DZ_w(X) is a linear combination of key polynomials K[α](X) with nonnegative integer coefficients. The double Di Francesco–Zinn-Justin polynomials DZ_w(X, Y) are also well defined, but may have negative coefficients.

Definition 4.6. For w ∈ S_n, define the Hecke–Grothendieck polynomials KN^{β,α}_w(X_n) by applying the corresponding operators to x^{δ_n} := x_1^{n−1} x_2^{n−2} ··· x_{n−1}, where u = s_{i_1} ··· s_{i_ℓ} is any reduced decomposition of the permutation in question. More generally, let β, α and γ be parameters, and consider the corresponding divided difference operators; for a permutation w ∈ S_n define the associated polynomials, where w = s_{i_1} ··· s_{i_ℓ} is any reduced decomposition of w.

Remark 4.7. A few comments are in order.
(a) The divided difference operators {T_i := T^{(β,α,γ)}_i, i = 1, ..., n − 1} satisfy the following relations:
• (Hecke relations) Therefore the elements T^{β,α}_w are well defined for any w ∈ S_n.
• (Inversion)
• These polynomials constitute a common generalization of the β-Grothendieck polynomials.
• (Stability) Let w ∈ S_n be a permutation and w = s_{i_1} s_{i_2} ··· s_{i_ℓ} any of its reduced decompositions. Assume that i_a ≤ n − 3 for all 1 ≤ a ≤ ℓ, and define the corresponding permutation.
• KN_w(1) is equal to the degree of the variety of pairs of commuting matrices of size n × n;
• the bidegree of the affine homogeneous variety V_w, w ∈ S_n, [12], is given accordingly. Note that the assumption β = 0 is necessary.
The number KN^{(β=1,α=1)}_w(1) is equal to the number of Schröder paths of semilength (n − 1) in which the (2, 0)-steps come in 3 colors and which have no peaks at level 1, see [55], A162326, for further properties of these numbers. It is well known, see e.g. [55], A126216, that the polynomial KN^{(β,α=0)}_w(1) counts the number of dissections of a convex (n + 1)-gon according to the number of diagonals involved, whereas the polynomial KN^{(β,α)}_w(1) is (up to a normalization) equal to the bidegree of certain algebraic varieties introduced and studied by A. Knutson [22]. We state a more general conjecture in the Introduction. In the present paper we treat only the case r = 0, since a combinatorial meaning of the polynomials KN^{(a,b,c,a+c+r)}_w(1) in the case r ≠ 0 is missing for the author.

Let α = (α_1 ≤ α_2 ≤ ··· ≤ α_r) be a composition, and define the partition α⁺ = (α_r ≥ ··· ≥ α_1).

Proposition 4.12. If α = (α_1 ≤ α_2 ≤ ··· ≤ α_r) is a composition and n ≥ r, then the corresponding identity holds. For example, KG[0, 1, 2, ..., n − 1] = ∏_{1≤i<j≤n} (x_i + x_j + x_i x_j).

Comments 4.1.

Definition 4.13. Define the degenerate affine 2D nil-Coxeter algebra ANC^{(2)}_n to be the associative algebra over Q generated by the set of elements {u_{i,j}}_{1≤i<j≤n} and x_1, ...
, x_n}, subject to the set of relations given below. Now, for a set of parameters A := (a, b, c, h, e) (by definition, a parameter is assumed to belong to the center of the algebra in question), define the corresponding elements; the Coxeter relations are valid if and only if the following relation among the parameters a, b, c, e, h holds:

(a + b)(a − c) + he = 0.   (4.13)

(3) (Yang–Baxter relations) The relation (4.13) between the parameters a, b, c, e, h defines a rational four-dimensional hypersurface. Its open chart {eh ≠ 0} contains, for example, the following set (cf. [37]): {a = p_1 p_4 − p_2 p_3, b = p_2 p_3, c = p_1 p_4, e = p_1 p_3, h = p_2 p_4}, where (p_1, p_2, p_3, p_4) are arbitrary parameters. However, the points {(−b, a + b + c, c, 1, (a + c)(b + c)), (a, b, c) ∈ N³} do not belong to this set.

(5) Assume that the parameters a, b, c, h, e satisfy the condition (4.13) and that bc + 1 = he.

Example 4.16.
• Each of the sets of elements listed by itself generates the symmetric group S_n.
• One can also add the affine elements s_0.
• It seems an interesting problem to classify all rational, trigonometric and elliptic divided difference operators satisfying the Coxeter relations. A general divided difference operator with polynomial coefficients was constructed in [31]; see also Lemma 4.14, (4.13). One can construct a family of rational representations of the symmetric group (as well as of its affine extension) by "iterating" the corresponding transformations.

Let A = (a, b, c, h, e) be a sequence of integers satisfying the conditions (4.5), and denote by ∂^A_i the corresponding divided difference operator.

Definition 4.17. (1) Let w ∈ S_n be a permutation. Define the generalized Schubert polynomial S^A_w(X_n) corresponding to the permutation w by means of the operators ∂^A_i, where w_0 denotes the longest element of the symmetric group S_n. (2) Let α be a composition with at most n parts; denote by w_α ∈ S_n the permutation such that w_α(α) = α⁺, where α⁺ denotes the unique partition corresponding to the composition α.

Lemma 4.18. Let w ∈ S_n be a permutation.
• If A = (0, 0, 0, 1, 0), then S^A_w(X_n) is equal to the Schubert polynomial S_w(X_n). In all the cases listed above the polynomials S^A_w(X_n) have non-negative integer coefficients.

Define the generalized key, or Demazure, polynomial K^A_α(X_n) corresponding to a composition α in the analogous way; for an appropriate specialization of A, K^A_α(X_n) is equal to the key (or Demazure) polynomial corresponding to α. In all the cases listed above these polynomials have non-negative integer coefficients.
• If A = (−1, q^{−1}, −1, 0, 0) and λ is a partition, then (up to a scalar factor) the polynomial K^A_λ(X_n) can be identified with a certain Whittaker function (of type A), see [4], Theorem A. Note that the corresponding operators satisfy the Coxeter and Hecke relations; in [4] the operator T^A_i was denoted by T_i.
• Let w ∈ S_n be a permutation and m = (i_1, ..., i_ℓ) a reduced word for w, i.e. w = s_{i_1} ··· s_{i_ℓ} and ℓ(w) = ℓ. Denote by Z_m the Bott–Samelson nonsingular variety corresponding to the reduced word m. It is well known that the Bott–Samelson variety Z_m is birationally isomorphic to the Schubert variety X_w associated with the permutation w, i.e. the Bott–Samelson variety Z_m is a desingularization of the Schubert variety X_w. Following [4], define the Bott–Samelson polynomials Z_m(x, λ, v) accordingly.
• If A = (−β, β + α, 0, 1, βα), then S^A_w(X_n) constitutes a common generalization of the Grothendieck and the Di Francesco–Zinn-Justin polynomials.

It is easily seen that φ T_i = T_{i+1} φ, i = 0, ..., n − 2, and φ² T_{n−1} = T_1 φ².
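Lemma 4.18's specialization A = (0, 0, 0, 1, 0) can be checked directly in a small case. The sympy sketch below (our code) assumes the standard descending recursion for β-Grothendieck polynomials, G_{w0} = x_1^{n−1}x_2^{n−2}···x_{n−1} and π_i f = ∂_i((1 + βx_{i+1})f); at β = 0 this degenerates to the Schubert recursion, and the asserts check all Schubert polynomials of S_3.

```python
import sympy as sp

beta = sp.symbols('beta')
x = sp.symbols('x1:4')   # (x1, x2, x3)

def divdiff(f, i):
    """Newton divided difference acting on x_i, x_{i+1} (i = 1 or 2 here)."""
    a, c = x[i - 1], x[i]
    return sp.cancel((f - f.subs({a: c, c: a}, simultaneous=True)) / (a - c))

def pi(f, i):
    """Isobaric operator: pi_i f = d_i((1 + beta*x_{i+1}) f)."""
    return sp.expand(divdiff((1 + beta * x[i]) * f, i))

G = {}
G[3, 2, 1] = sp.expand(x[0]**2 * x[1])   # longest element of S_3
G[2, 3, 1] = pi(G[3, 2, 1], 1)           # 321 * s1 = 231
G[3, 1, 2] = pi(G[3, 2, 1], 2)           # 321 * s2 = 312
G[2, 1, 3] = pi(G[2, 3, 1], 2)           # 231 * s2 = 213
G[1, 3, 2] = pi(G[3, 1, 2], 1)           # 312 * s1 = 132
G[1, 2, 3] = pi(G[2, 1, 3], 1)           # identity: should give 1

schubert = {(1, 2, 3): sp.Integer(1), (2, 1, 3): x[0], (1, 3, 2): x[0] + x[1],
            (2, 3, 1): x[0] * x[1], (3, 1, 2): x[0]**2, (3, 2, 1): x[0]**2 * x[1]}
for w, g in G.items():
    assert sp.expand(g.subs(beta, 0) - schubert[w]) == 0   # beta = 0: Schubert
```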
It was established in [41] how to use the operators φ, T_1, ..., T_{n−1} to give formulas for the interpolation Macdonald polynomials. The operators φ and T_i, where T_i = T^{(t,−t,1,0,0)}_i, generate a commutative subalgebra in the double affine nilCoxeter algebra DANC_n. Note that the algebra DANC_n contains many other interesting commutative subalgebras, see e.g. [16]. It seems interesting to give an interpretation of the polynomials generated by the set of operators T^{(t,−t,1,h,e)}_i, i = 0, ..., n − 1, in a way similar to that given in [41]. We expect that these polynomials provide an affine version of the polynomials KN^{(−t,−1,1,1,0)}_w(X), w ∈ S_n ⊂ S^{aff}_n, see Remark 4.7. Note that for any affine permutation v ∈ S^{aff}_n, the operator T_v := T_{i_1} ··· T_{i_ℓ}, where v = s_{i_1} ··· s_{i_ℓ} is any reduced decomposition of v, is well defined up to a sign ±1. It seems an interesting problem to investigate properties of the polynomials L_v[α](X_n), where v ∈ S^{aff}_n and α ∈ Z^n_{≥0}, and to find their algebro-geometric interpretations.

Cauchy kernel. Let u_1, u_2, ..., u_{n−1} be a set of generators of the free algebra F_{n−1}, which are also assumed to commute with all the variables P_n := {p_{i,j}, 2 ≤ i + j ≤ n + 1, i ≥ 1, j ≥ 1}.

Definition 5.1. The Cauchy kernel C(P_n, U) is defined to be the ordered product (5.14). In the case {p_{ij} = x_i, ∀j} we will write C_n(X, U) instead of C(P_n, U). Expanding the product gives a sum of terms indexed by pairs, where a = (a_1, ..., a_p), b = (b_1, ..., b_p), w(a, b) = ∏_{j=1}^p u_{a_j + b_j − 1}, and the sum in (5.15) runs over the corresponding set S_n. We denote by S^{(0)}_n the set {(a, b) ∈ S_n | w(a, b) is a tableau word}. The number of terms on the right-hand side of (5.15) is equal to 2^{\binom{n}{2}}, and therefore is equal to the number #|STY(δ_n, ≤ n)| of semistandard Young tableaux of the staircase shape δ_n := (n − 1, n − 2, ..., 2, 1) filled with numbers from the set {1, 2, ..., n}. It is also easily seen that all the terms appearing on the right-hand side of (5.15) are distinct, and thus #|S_n| = #|STY(δ_n, ≤ n)|. We are interested in the decompositions of the Cauchy kernel C(P_n, U) in the algebras P_n, NP_n, IP_n, NC_n and IC_n.

Plactic algebra P_n. Let λ be a partition and α a composition of the same size. Denote by STY(λ, α) the set of semistandard Young tableaux T of shape λ and content α which satisfy the following condition:
• for each k = 1, 2, ..., all entries equal to k are located in the first k columns of the tableau T.
In other words, all the entries T(i, j) of a semistandard tableau T ∈ STY(λ, α) have to satisfy the condition T_{i,j} ≤ j. For a given (semistandard) Young tableau T let us denote by R_i(T) the set of numbers placed in the i-th row of T, and denote by STY^0(λ, α) the subset of the set STY(λ, α) consisting of those tableaux T which satisfy certain further constraints. To continue, let us denote by A_n (respectively by A^{(0)}_n) the union of the sets STY(λ, α) (resp. of the sets STY^0(λ, α)) over all partitions λ such that λ_i ≤ n − i for i = 1, 2, ..., n − 1, and all compositions α with l(α) ≤ n − 1. Finally, denote by A_n(λ) (resp. A^{(0)}_n(λ)) the subset of A_n (resp. A^{(0)}_n) consisting of all tableaux of shape λ.
• There exists a bijection ρ_n : A_n −→ ASM(n) such that the image ρ_n(A^{(0)}_n) contains the set of n × n permutation matrices.
• The number of column-strict, as well as of row-strict, diagrams contained inside the staircase diagram (n, n − 1, ..., 2, 1) is equal to 2^n.
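The count #|STY(δ_n, ≤ n)| = 2^{\binom{n}{2}} is easy to confirm by brute force for small n. The following Python sketch (our code) enumerates semistandard Young tableaux of staircase shape with entries at most n.

```python
from itertools import product

def ssyt_count(shape, maxval):
    """Count SSYT of the given shape (weakly increasing rows, strictly
    increasing columns) with entries in {1, ..., maxval}, by brute force."""
    cells = [(i, j) for i, row in enumerate(shape) for j in range(row)]
    count = 0
    for values in product(range(1, maxval + 1), repeat=len(cells)):
        t = dict(zip(cells, values))
        if all(t[i, j] <= t[i, j + 1] for i, j in cells if (i, j + 1) in t) and \
           all(t[i, j] < t[i + 1, j] for i, j in cells if (i + 1, j) in t):
            count += 1
    return count

for n in [2, 3, 4]:
    staircase = list(range(n - 1, 0, -1))      # (n-1, n-2, ..., 1)
    assert ssyt_count(staircase, n) == 2 ** (n * (n - 1) // 2)
print("#STY(delta_n, <= n) = 2^(n choose 2) for n = 2, 3, 4")
```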
We expect that the image ρ_n(⋃_{k=0}^{n−1} A_n((k))) coincides with the set of n × n permutation matrices corresponding to either 321-avoiding or 132-avoiding permutations. Now we are going to define a statistic n(T) on the set A_n.

Definition 5.5. Let λ be a partition and α a composition of the same size; for each tableau T the statistic n(T) is defined accordingly. It is instructive to display the numbers {A_n(λ), λ ⊂ δ_n} as a vector of length equal to the n-th Catalan number. It is easy to see that these data, as well as the corresponding data for n = 5, coincide with the list of refined totally symmetric self-complementary plane partitions that fit in the box 2n × 2n × 2n (TSSCPP(n) for short), listed for n = 1, 2, 3, 4, 5 in [12], Appendix D. In particular,

Σ_{λ⊂δ_n} A_λ(t) = Σ_{1≤j≤n} A_{n,j} t^{j−1},

where A_{n,j} stands for the number of alternating sign matrices (ASM_n for short) of size n × n with a 1 on top of the j-th column, and |A_n| = |TSSCPP(n)| = |ASM_n|. It is well known [3] that the total number A_n of ASMs of size n × n is equal to ∏_{k=0}^{n−1} (3k + 1)!/(n + k)!. Here F_n denotes the number of forests of trees on n labeled nodes; K_{ρ_n,λ} denotes the Kostka number, i.e. the number of semistandard Young tableaux of shape ρ_n := (n − 1, n − 2, ..., 1) and content/weight λ; and for any partition λ = (λ_1 ≥ λ_2 ≥ ... ≥ λ_n ≥ 0) we set m_i(λ) = #{j | λ_j = i}. Note that the rigged configuration bijection gives rise to an embedding of the set of labeled regular tournaments with n := 2k + 1 nodes into the set STY(ρ_n, ≤ n) if n is an odd integer, and into the set STY(ρ_{n−1}, ≤ n − 1) if n is even.

Definition 5.13. Define the algebra PC_n to be the quotient of the plactic algebra P_n by the two-sided ideal J_n generated by a certain set of monomials.

Theorem 5.14.
• The algebra PC_n has dimension equal to ASM(n);
• Hilb(PC_n, q) = Σ_{λ⊂δ_{n−1}} |A_λ| q^{|λ|};
• Hilb((PC_{n+1})^{ab}, q) = Σ_{k=0}^{n} ((n − k + 1)/(n + 1)) \binom{n+k}{n} q^k, cf. [55], A009766.

For the reader's convenience we recall the definition of a tableau word. Let T be a (regular shape) semistandard Young tableau. The tableau word w(T) associated with T is the reading word of T: the sequence of entries of T obtained by concatenating the columns of T bottom to top, consecutively, starting from the first column. For example, for a suitable tableau T the corresponding tableau word is w(T) = 5321432433. By definition, a tableau word is the tableau word corresponding to some (regular shape) semistandard Young tableau. It is well known [34] that the number of tableau subwords contained in I_0 is equal to the number of alternating sign matrices ASM(n).

Definition 5.15. Denote by PC♯_n the quotient of the algebra PC_n by the two-sided ideal generated by the elements {u_i u_j − u_j u_i, |i − j| ≥ 2}.

Proposition 5.16. The dimension dim PC♯_n of the algebra PC♯_n is equal to the number of Dyck paths whose ascent lengths are exactly {1, 2, ..., n + 1}. See [55], A107877, where the first few of these numbers are displayed.

Problem 5.19. Denote by A_n the algebra generated by the curvatures of the 2-forms of the tautological Hermitian line bundles ξ_i, 1 ≤ i ≤ n, over the flag variety Fl_n, [54]. It is well known [50] that the Hilbert polynomial of the algebra A_n is a sum over the set F(n) of forests F on n labeled vertices, where inv(F) (resp. maj(F)) denotes the inversion index (resp. the major index) of a forest F. Clearly dim(A_n)_{\binom{n}{2}} = dim(PC_n)_{\binom{n}{2}} = dim H^⋆(Fl_n, Q)_{\binom{n}{2}} = 1. One of the associated counts equals s(n + 2, 2), where s(n, k) denotes the Stirling number of the first kind, see e.g. [55], A000914.
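The Catalan-triangle formula in Theorem 5.14 can be sanity-checked numerically: the coefficients (n − k + 1)/(n + 1) · \binom{n+k}{n} are the ballot numbers of [55], A009766, and their sum over k is a Catalan number. A small sketch (our code):

```python
from math import comb

def ballot(n, k):
    """Coefficient of q^k in Hilb((PC_{n+1})^ab, q): (n-k+1)/(n+1) * C(n+k, n)."""
    return (n - k + 1) * comb(n + k, n) // (n + 1)

for n in range(1, 8):
    row = [ballot(n, k) for k in range(n + 1)]
    catalan = comb(2 * (n + 1), n + 1) // (n + 2)
    assert sum(row) == catalan          # total dimension is Cat(n+1)
    print(n, row)
```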
For the reader's convenience we recall the definitions of the statistics inv(F) and maj(F). Given a forest F on n labeled vertices, one can construct a tree T by adding a new vertex (the root) connected to the maximal vertices of the connected components of F. The inversion index inv(F) is equal to the number of pairs (i, j) such that 1 ≤ i < j ≤ n and the vertex labeled j lies on the shortest path in T from the vertex labeled i to the root. The major index maj(F) is equal to Σ_{x∈Des(F)} h(x); here, for any vertex x ∈ F, h(x) is the size of the subtree rooted at x, and the descent set Des(F) of F consists of the vertices x ∈ F whose label is strictly greater than the label of its child.

Problems. (1) Is it true that Hilb(PC_n, t) − Hilb(A_n, t) ∈ N[t]? If so, as we expect, does there exist an embedding of sets ι : F(n) ֒→ A_n such that inv(F) = n(ι(F)) for all F ∈ F(n)? See Section 5.1 for the definitions of the set A_n and the statistic n(T), T ∈ A_n, Definition 5.5.

Comments 5.3. One can ask a natural question: when do the noncommutative elementary polynomials e_1(A), ..., e_n(A) form a q-commuting family, i.e. e_i(A) e_j(A) = q e_j(A) e_i(A), 1 ≤ i < j ≤ n? Clearly, in the case of two variables one needs the relations

e_i e_j e_i + e_j e_j e_i = q e_j e_i e_i + q e_j e_i e_j,  i < j.

Having in mind the construction of a quantization, or q-analogue, of the plactic algebra P_n, one is forced to the relations

q e_j e_i e_j = e_j e_j e_i  and  q e_j e_i e_i = e_i e_j e_i,  i < j.

It is easily seen that these two relations are compatible iff q² = 1. Indeed,

e_j e_j e_i e_j = q e_j e_i e_j e_i = q² e_j e_j e_i e_i  ⟹  q² = 1.

In the case q = 1 one arrives at the Knuth relations (PL1) and (PL2). In the case q = −1 one arrives at the "odd" analogue of the Knuth relations, or "odd" plactic relations (OPL_n). More generally, let Q_n := {q_{ij}}_{1≤i<j≤n−1} be a set of parameters. Define the generalized plactic algebra QP_n to be the (unital) associative algebra over the ring Z[{q^{±1}_{ij}}_{1≤i<j≤n−1}] generated by elements u_1, ..., u_{n−1} subject to the set of relations (5.17).

Proposition 5.21. Assume that q_{ij} := q_j for all 1 ≤ i < j. Then the reduced generalized plactic algebra QPC_n is a free Z[q^{±1}_2, ..., q^{±1}_{n−1}]-module of rank equal to the number of alternating sign matrices ASM(n). Moreover, Hilb(QPC_n, t) = Hilb(PC_n, t) and Hilb(QP_n, t) = Hilb(P_n, t). Recall that the reduced generalized plactic algebra is the quotient of the generalized plactic algebra by the two-sided ideal J_n introduced in Definition 5.13.

Example 5.22. (A) (Super plactic monoid, [44], [27]) Assume that the set of generators U := {u_1, ..., u_{n−1}} is divided into two disjoint subsets, say Y and Z, with Y ∪ Z = U and Y ∩ Z = ∅. To each element u ∈ U assign the weight wt(u) as follows: wt(u) = 0 if u ∈ Y, and wt(u) = 1 if u ∈ Z. Finally, define the parameters of the generalized plactic algebra QP_n to be q_{ij} = (−1)^{wt(u_i) wt(u_j)}. As a result, we are led to conclude that the generalized plactic algebra QP_n in question coincides with the super plactic algebra PS(V) introduced in [44]. We will denote this algebra by SP_{k,l}, where k = |Y|, l = |Z|. We refer the reader to the papers [44] and [27] for more details about the connection of the super plactic algebra with super Young tableaux, and for a super analogue of the Robinson–Schensted–Knuth correspondence. We are planning to report on some properties of the Cauchy kernel in the super plactic algebra elsewhere.
(B) (q-analogue of the plactic algebra) Now let q ≠ 0, ±1 be a parameter, and assume that q_{ij} = q for all 1 ≤ i < j ≤ n − 1. This case has been treated recently in [43]. We expect that the generalized Knuth relations (5.17) are related to a quantum version of the tropical/geometric RSK correspondence (work in progress) and, probably, to the q-weighted version of the Robinson–Schensted algorithm presented in [48]. Another interesting problem is to understand the meaning of the Q-plactic polynomials coming from the decomposition of the Cauchy kernels C_n and F_n in the reduced generalized plactic algebra QPC_n.

Nilplactic algebra NP_n. Let λ be a partition and α a composition of the same size. Denote by STY(λ, α) the set of column- and row-strict Young tableaux T of shape λ and content α such that the corresponding tableau word w(T) is reduced, i.e. l(w(T)) = |T|.

Theorem 5.23. (1) In the nilplactic algebra NP_n the Cauchy kernel admits a decomposition over such tableaux. (2) Let T ∈ B_n be a tableau, and assume that its bottom code is a partition; then the corresponding summand factorizes.

Example 5.24. For n = 4 the decomposition of C_4(X, U) can be written out explicitly.

Idplactic algebra IP_n. Let λ be a partition and α a composition of the same size. Denote by STY(λ, α) the set of column- and row-strict Young tableaux T of shape λ and content α such that l(w(T)) = rl(w(T)), i.e. the tableau word w(T) is the unique tableau word of minimal length in the idplactic class of w(T), cf. Example 1.9. Denote by D_n the union of the sets STY(λ, α) over all partitions λ such that λ_i ≤ n − i for i = 1, 2, ..., n − 1, and all compositions α with l(α) ≤ n − 1.

Theorem 5.25. (1) In the idplactic algebra IP_n the Cauchy kernel admits the analogous decomposition. (2) Let T ∈ D_n be a tableau, and assume that its bottom code is a partition; then the corresponding summand factorizes.

NilCoxeter algebra NC_n.

Theorem 5.28. In the nilCoxeter algebra NC_n the Cauchy kernel admits a decomposition over reduced words. Let w ∈ S_n be a permutation, and denote by R(w) the set of all its reduced decompositions. Since the nilCoxeter algebra NC_n is a quotient of the nilplactic algebra NP_n, the set R(w) is the union of the nilplactic classes of some tableau words w(T_i): R(w) = ⋃ C(T_i). Moreover, R(w) consists of only one nilplactic class if and only if w is a vexillary permutation. In the general case we see that the set of compatible sequences CR(w) for the permutation w is the union of the sets C(T_i).

Corollary 5.29. Let w ∈ S_n be a permutation of length l. Then the double Schubert polynomial S_w(X, Y) is a linear combination of double key polynomials K_T(X, Y), T ∈ B_n, w = w(T), with nonnegative integer coefficients.

IdCoxeter algebras. A few remarks are in order. Let w ∈ S_n be a permutation, and denote by IR(w) the set of all decompositions in the idCoxeter algebra IC_n of the element u_w as a product of the generators u_i, 1 ≤ i ≤ n − 1, of the algebra IC_n. Since the idCoxeter algebra IC_n is a quotient of the idplactic algebra IP_n, the set IR(w) is the union of the idplactic classes of some tableau words w(T_i): IR(w) = ⋃ IR(T_i). Moreover, the set of compatible sequences IC(w) for the permutation w is the union of the sets IC(T_i).

Corollary 5.32. Let w ∈ S_n be a permutation of length l. Then the corresponding polynomial is a linear combination of double key Grothendieck polynomials KG_T(X, Y), T ∈ B_n, w = w(T), with nonnegative integer coefficients.
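Reduced decompositions are easy to enumerate for small n, which makes statements about R(w) concrete. The sketch below (our code) counts reduced words by peeling off descents; for the longest element this recovers the well-known counts #R(w_0) = 2 for S_3 and 16 for S_4 (the number of standard Young tableaux of staircase shape).

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def reduced_words(w):
    """All reduced words for the permutation w (one-line notation, tuple)."""
    if all(w[i] < w[i + 1] for i in range(len(w) - 1)):
        return (tuple(),)                       # identity: the empty word
    words = []
    for i in range(len(w) - 1):
        if w[i] > w[i + 1]:                     # descent: w = (w * s_i) * s_i
            v = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
            words += [word + (i + 1,) for word in reduced_words(v)]
    return tuple(words)

assert len(reduced_words((3, 2, 1))) == 2       # s1 s2 s1 and s2 s1 s2
assert len(reduced_words((4, 3, 2, 1))) == 16   # #SYT of shape (3, 2, 1)
```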
F-kernel and symmetric plane partitions. Let us fix natural numbers n and k, and a partition λ ⊂ (n^k). Clearly the number of such partitions is equal to \binom{n+k}{n}; note that in the case n = k the number \binom{2n}{n} is equal to the Catalan number of type B_n. Denote by B_{n,k}(λ) the set of semistandard Young tableaux of shape λ filled with numbers from the set {1, 2, ..., n}. For a tableau T ∈ B_{n,k} we use the same statistics as before, and we set B_{n,k} := ⋃_{λ⊂(n^k)} B_{n,k}(λ).

Lemma 6.1 ([14], [25]). The number of elements in the set B_{n,k} is given by an explicit product formula; see also [55], A073165, for other combinatorial interpretations of the numbers #|B_{n,k}|. For example, the number #|B_{n,k}| is equal to the number of symmetric plane partitions fitting inside the box n × k × k. Note that in the case n = k the number B_n := B_{n,n} is equal to the number of symmetric plane partitions fitting inside the n × n × n box, see [55], A049505. Let us point out that in general the number #|B_{n,n+2}| need not be divisible by any ASM(m), m ≥ 3. For example, B_{3,5} = 4224 = 2^7 × 3 × 11. On the other hand, it is possible that the number #|B_{n,n+2}| is divisible by ASM(n + 1) but not by ASM(n + 2). For example, B_{4,6} = 306735 = 715 × 429, but 306735 is not divisible by 7436 = ASM(6).

Problem 6.8. Let Γ := Γ^{n,m}_{k,ℓ} = (n^k, m^ℓ), n ≥ m, be a "fat hook". Find generalizations of the identity (6.21) and of those listed in [17], p. 71, to the case of fat hooks, namely find "nice" expressions for the corresponding sums.
• Find "bosonic"-type formulas for these sums in the limit n → ∞, ℓ → ∞ with m, k fixed.
• (Plactic decomposition of the F_n-kernel) Here the summation runs over the set of semistandard Young tableaux T of shape λ ⊂ (n^m) filled with numbers from the set {1, ..., m}; λ denotes the shape of a tableau T, and λ′ denotes the conjugate/transpose of the partition λ.
• The polynomial L_n(d) has non-negative coefficients, and the polynomial L_n(d) + d^n is symmetric and unimodal.

MacNeille completion of a partially ordered set. Let (Σ, ≤) be a partially ordered set (poset for short) and X ⊆ Σ. Define:
• the set of upper bounds for X;
• the set of lower bounds for X.

In the present paper we are interested in properties of the MacNeille completion of the Bruhat poset B_n = B(S_n) corresponding to the symmetric group S_n. Below we briefly describe a construction of the MacNeille completion L_n(S_n) := MN_n(B_n), following [28] and [57], v. 2, p. 552.

Theorem 7.8 ([28]). The poset L(S_n) is a complete distributive lattice whose number of vertices equals ASM(n), the number of alternating sign matrices of size n × n. Moreover, the lattice L(S_n) is order isomorphic to the MacNeille completion of the Bruhat poset B_n. Indeed, it is not difficult to prove that the set of all monotone triangles obtained by repeatedly applying the operation ∧ (= meet) to the set {T(w), w ∈ S_n} of triangles corresponding to all elements of the symmetric group S_n coincides with the set of all monotone triangles L(S_n). The natural map κ : S_n −→ L(S_n) is obviously an embedding, and all other conditions of Proposition 7.2 are satisfied; therefore L(S_n) = MN(B_n). The fact that the lattice L(S_n) is distributive follows from the well-known identity

max(x, min(y, z)) = min(max(x, y), max(x, z)),  x, y, z ∈ R.

Finally, the fact that the cardinality of the lattice L(S_n) is equal to the number ASM(n) was proved by A. Lascoux and M.-P. Schützenberger [28]. If T = [t_{ij}] ∈ L(S_n), define the rank of T, denoted r(T), in the natural way.
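Theorem 7.8 can be illustrated computationally: encode a permutation w by its monotone triangle T(w), whose k-th row is the sorted set {w_1, ..., w_k}, and close the family {T(w)} under entrywise min (and max). For n = 3 this closure has exactly ASM(3) = 7 elements; the one non-permutation triangle corresponds to the unique 3 × 3 alternating sign matrix containing a −1. A sketch (our code):

```python
from itertools import permutations, combinations

def triangle(w):
    """Monotone triangle of a permutation: row k = sorted first k values."""
    return tuple(tuple(sorted(w[:k])) for k in range(1, len(w) + 1))

def meet(s, t):
    return tuple(tuple(min(a, b) for a, b in zip(rs, rt)) for rs, rt in zip(s, t))

def join(s, t):
    return tuple(tuple(max(a, b) for a, b in zip(rs, rt)) for rs, rt in zip(s, t))

n = 3
lattice = {triangle(w) for w in permutations(range(1, n + 1))}
changed = True
while changed:                        # close under entrywise min and max
    changed = False
    for s, t in combinations(list(lattice), 2):
        for u in (meet(s, t), join(s, t)):
            if u not in lattice:
                lattice.add(u)
                changed = True

assert len(lattice) == 7              # ASM(3)
```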
It was proved by C. Ehresmann [6] that v ≤ w with respect to the Bruhat order on the symmetric group S_n if and only if T_{i,j}(v) ≤ T_{i,j}(w) for all 1 ≤ i < j ≤ n − 1. It follows from an improved tableau criterion for Bruhat order on the symmetric group [2], Corollary 5, that the Ehresmann criterion stated above is equivalent to either the criterion

T_{i,j}(v) ≤ T_{i,j}(w) for all j such that w_j > w_{j+1} and 1 ≤ i ≤ j,

or the criterion

T_{i,j}(v) ≤ T_{i,j}(w) for all j ∈ {1, 2, ..., n − 1} \ {k | v_k > v_{k+1}} and 1 ≤ i ≤ j.
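The Ehresmann criterion is straightforward to implement and to test against an independent definition of Bruhat order, the subword property: v ≤ w iff some subsequence of a fixed reduced word of w is a reduced word for v. The sketch below (our code) confirms agreement on all pairs in S_4.

```python
from itertools import permutations, combinations

def ehresmann_leq(v, w):
    """v <= w in Bruhat order iff sorted prefixes compare entrywise."""
    return all(a <= b
               for k in range(1, len(v))
               for a, b in zip(sorted(v[:k]), sorted(w[:k])))

def length(w):
    return sum(w[i] > w[j] for i, j in combinations(range(len(w)), 2))

def one_reduced_word(w):
    """A reduced word for w via bubble sort (adjacent transpositions)."""
    w, word = list(w), []
    for _ in range(len(w)):
        for j in range(len(w) - 1):
            if w[j] > w[j + 1]:
                w[j], w[j + 1] = w[j + 1], w[j]
                word.append(j + 1)
    return word[::-1]

def subword_leq(v, w):
    """Subword property: v <= w iff some subsequence of a reduced word of w,
    of size l(v), multiplies out to v."""
    word, lv, n = one_reduced_word(w), length(v), len(v)
    for picks in combinations(range(len(word)), lv):
        p = list(range(1, n + 1))
        for t in picks:
            i = word[t]
            p[i - 1], p[i] = p[i], p[i - 1]
        if tuple(p) == v:
            return True
    return False

perms = list(permutations((1, 2, 3, 4)))
assert all(ehresmann_leq(v, w) == subword_leq(v, w) for v in perms for w in perms)
```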
Impact of the New P2Y12 Receptor Inhibitors on Mortality in ST-Elevation Myocardial Infarction Patients with Cardiogenic Shock and/or After Cardiopulmonary Resuscitation Undergoing Percutaneous Coronary Intervention

Background: Little is known about the clinical efficacy of newer P2Y12 receptor inhibitors in ST-elevation myocardial infarction patients presenting with cardiogenic shock or after cardiopulmonary resuscitation. The aim of our study was to establish the possible role of the newer P2Y12 receptor inhibitors prasugrel and ticagrelor on survival, in comparison to clopidogrel administration, in ST-elevation myocardial infarction patients presenting with cardiogenic shock and/or after cardiopulmonary resuscitation. Method: The present study was an analysis of 187 patients with ST-elevation myocardial infarction presenting with cardiogenic shock and/or after cardiopulmonary resuscitation. Groups with newer P2Y12 receptor inhibitors (107 patients) and with clopidogrel (80 patients) were compared and followed for a median of 160 days (25th, 75th percentile: 6, 841). Mortality at 14 days, 30 days and one year was compared between the groups. Results: Mortality at 14 days was similar in both groups. A strong trend towards lower mortality at 30 days was noticed in the newer P2Y12 receptor inhibitors group [39 (48.8%) patients in the clopidogrel group died versus 38 (35.5%) in the newer P2Y12 receptor inhibitors group; p = 0.07]. All-cause mortality at one year was significantly higher in the group with clopidogrel administration [47 (58.8%) patients in the clopidogrel group died versus 46 (43.0%) in the newer P2Y12 receptor inhibitors group; p = 0.039]. Conclusion: In ST-elevation myocardial infarction patients presenting with cardiogenic shock and/or after cardiopulmonary resuscitation, the administration of newer P2Y12 receptor inhibitors reduced the one-year mortality in comparison to clopidogrel. The use of newer P2Y12 receptor inhibitors may be advocated in this very high risk group of patients.
Introduction

ST-Elevation Myocardial Infarction (STEMI) represents a highly pro-thrombotic state with platelets being greatly activated [1,2]. Early and strong platelet inhibition seems to be of paramount importance in patients with STEMI undergoing Percutaneous Coronary Intervention (PCI) [3]. The newer P2Y12 receptor inhibitors prasugrel and ticagrelor (newer P2Y12) exhibit more rapid, potent, and consistent platelet inhibition than clopidogrel and reduce the risk of ischemic cardiovascular complications [4][5][6][7][8][9][10]. In STEMI patients, an initial delay in the onset of the antiplatelet action of newer P2Y12 has been observed, and ticagrelor did not appear superior to prasugrel [3]. However, patients with Cardiogenic Shock (CS) or after Cardiopulmonary Resuscitation (CPR) were mostly excluded from randomized studies, and data on the clinical efficacy of these drugs compared to clopidogrel in these patients are sparse [11][12][13]. CS has a profound effect on drug absorption and metabolism due to the disturbance of microcirculation and the use of catecholamines and opioids, which results in slower platelet inhibition [13][14][15]. The pharmacological properties of the newer P2Y12 are promising in the CS setting, since their bioactivation is more rapid and consistent compared to clopidogrel [2,13]. Mild therapeutic hypothermia leads to a pro-thrombotic milieu per se, and the activation of P2Y12 receptor inhibitors (P2Y12) may be further impaired because of the lower cytochrome P450-dependent metabolism [14,16,17]. The platelet inhibitory effect of all P2Y12 (especially that of clopidogrel) is significantly reduced after CPR with therapeutic hypothermia [18]. Data on platelet reactivity and the clinical efficacy of newer P2Y12 in STEMI patients with CS or after CPR are inconclusive and conflicting. Prasugrel may inhibit platelets incompletely and seems to have pharmacodynamics similar to clopidogrel in patients with CS and therapeutic hypothermia [13,19]. Ticagrelor (which does not require biotransformation) inhibits platelets significantly earlier than clopidogrel; however, some non-responders were noticed after CPR and therapeutic hypothermia [18,20]. The newer P2Y12 have not yet been proven clinically superior to clopidogrel in patients with CS and/or after CPR [13]. The aim of the present analysis was to establish the possible influence of newer P2Y12 administration on the short- and long-term survival of STEMI patients with CS and/or after CPR.

Method

Data from STEMI patients presenting with CS and/or after CPR (262 patients) were analyzed from the hospital database from January 2009 to December 2014. Patients who did not receive a P2Y12 (75 patients) during treatment were excluded from the study. The groups with newer P2Y12 (107 patients) and with clopidogrel (80 patients) were compared and followed until January 31, 2015. Patients were followed for a median of 160 days (25th, 75th percentile: 6, 841). Mortality at 14 days, 30 days and one year was compared between the groups. Our center is the referral 24/7 center for PCI, covering a population of 850,000 people. The study was approved by the local ethics committee.
Pre-hospital pharmacological treatment

In patients without contraindications, aspirin 500 mg orally or 300 mg intravenously and enoxaparin 1 mg/kg intravenously (unfractionated heparin 5000 IU intravenously since 2011) were administered at the first medical contact. Upstream administration of a P2Y12 was at the discretion of the emergency physician. Until 2011 a loading dose of clopidogrel 300-600 mg was used, and mostly prasugrel 60 mg or ticagrelor 180 mg thereafter. Glycoprotein IIb-IIIa receptor inhibitors were not used upstream.

In-hospital pharmacological treatment

In patients without contraindications, aspirin 500 mg orally or 300 mg intravenously, and enoxaparin 1 mg/kg intravenously, unfractionated heparin 5000 IU intravenously or bivalirudin (loading dose 0.75 mg/kg IV and 1.75 mg/kg/h infusion), depending on the year of treatment, were used in patients who came directly to our center. A loading dose of clopidogrel 300-600 mg was used until 2011, and mostly prasugrel 60 mg or ticagrelor 180 mg thereafter. Administration of the newer P2Y12 prasugrel (60 mg) or ticagrelor (180 mg) on top of a pre-hospital loading dose of clopidogrel has not been common practice. The adjunctive use of glycoprotein IIb-IIIa receptor inhibitors was at the discretion of the operator; they were used in more than 85% of patients in 2009 and in less than 40% in 2014. Bivalirudin has been used increasingly since 2011.

Mild induced therapeutic hypothermia

We routinely use a very simple and cheap protocol which includes sedation/relaxation, cold fluid infusion and external cooling with ice packs [21]. We advise starting the cooling protocol while patients are on their way to our center [21]. Temperature was maintained between 32°C and 34°C for 24 hours.

Data were collected from the hospital database, the PCI database and the Slovenian national registry of causes of death. To assess the baseline clinical characteristics of the study cohort, we collected data concerning admission and discharge dates, birth, gender, laboratory values, PCI, stents used, lesions, antiplatelet and antithrombotic therapy, TIMI grade flow before and after the procedure, and clinical outcomes. The STEMI definition was based on the current guidelines [22,23]. CS was defined according to clinical and hemodynamic criteria, including hypotension (systolic blood pressure ≤ 90 mm Hg for ≥ 30 minutes, or the need for supportive measures to maintain a systolic blood pressure > 90 mm Hg) and evidence of end-organ hypoperfusion. Thrombolysis in Myocardial Infarction (TIMI) flow grades were used for coronary flow assessment [24]. Major bleeding was defined as a hemoglobin decrease ≥ 50 g/L and/or transfusion. Only definite stent thromboses according to the Academic Research Consortium definition, proven by angiography or autopsy, were included [25].

Study end points

The study end points were all-cause mortality at day 14, at day 30 and at one year.

Statistical methods

We counted end point events that occurred during the follow-up period and compared their rates between the cohorts of patients receiving newer P2Y12 or clopidogrel. Follow-up began on the date of admission and continued until the date of death or January 31, 2015, whichever came first. We constructed Kaplan-Meier curves for patients with or without newer P2Y12 for unadjusted mortality. Cox proportional hazards regression was used to compute Hazard Ratios (HRs) as estimates for mortality at one year.
We controlled for age, gender, drug-eluting stent, left main PCI, GPIIb-IIIa receptor inhibitors, TIMI grade flow before and after PCI, intra-aortic balloon pump insertion, bivalirudin, major bleeding, stent thrombosis, and newer P2Y12 in all regression analyses. Additionally, several Cox models were built according to age in order to identify which age boundary was of importance. Distributions of continuous variables in the two groups were compared with either the two-sample t-test or the Mann-Whitney test, according to whether the data followed a normal distribution. Distributions of categorical variables were compared with the chi-square test. Data were analyzed with SPSS 21.0 software for Windows (SPSS, Inc., Chicago, Illinois). All p values were two-sided, and values less than 0.05 were considered statistically significant.

Descriptive data for patients

The study encompassed 187 patients with STEMI presenting with CS and/or after CPR. The mean age was 65.2 years, and 36.4% of the patients were older than 70 years. Men accounted for 71.7%. Newer P2Y12 were administered to 57.2% of patients, and 44.4% of patients received a P2Y12 in the pre-hospital setting. The LAD was the target lesion in 51.3%, the RCA in 23.0%, and the LCX in 23.5%. Balloon angioplasty (without stent implantation) was the treatment of choice in 10.7%, and unsuccessful PCI occurred in 7.0%. On average one stent was used per patient, and 30.5% of stents were DES. An IABP was inserted in 14.4%. TIMI grade flow at admission (0.0) and after the procedure (3.0) was similar in both groups. Therapeutic hypothermia was performed in 40% of patients. Major bleeding occurred in 11.4% of the patients and acute stent thrombosis in 3.2%, and the median hospitalization was 9 days. Bivalirudin was used more often in the newer P2Y12 group (28.4% versus 5.0%), and these patients tended to have undergone CPR more often (73.8% versus 60%). The adjunctive use of GPIIb-IIIa receptor inhibitors (58.9% versus 78.8%) was more frequent in the clopidogrel group, as was left main PCI (7.5% versus 19.8%). Baseline patient characteristics are listed in Table 1.

In fact, the comparison of patients treated with newer P2Y12 with patients treated with clopidogrel is a comparison of newer and older treatment eras: many changes in treatment contribute to the decrease in mortality over time (newer stents, bivalirudin, bleeding management, etc.). Indeed, patients treated with the newer P2Y12 tended to be younger, were more often treated with DES, and more often received bivalirudin. There was an excess of patients with left main stenosis in the clopidogrel group. Despite all these differences, we identified newer P2Y12 as an independent prognostic factor for one-year death in the multivariate analysis. Earlier and more consistent platelet inhibition with newer P2Y12 seems to be the key. We hypothesize that effective platelet inhibition with clopidogrel was delayed by CS, CPR or therapeutic hypothermia much more, relative to the newer P2Y12, than in "normal" STEMI patients. Unfortunately, platelet inhibition was not measured; therefore the present findings should be considered hypothesis-generating for future randomized trials in a similar setting. TIMI grade flow at admission or after PCI did not differ between the groups, probably due to slower platelet inhibition in both groups. No difference in major bleeding between the groups was observed. Patients with newer P2Y12 were less frequently treated with adjunctive GPIIb-IIIa receptor inhibitors (p = 0.05) and more often with bivalirudin (p < 0.0001).
This difference in adjunctive antithrombotic therapy may explain the similar incidence of major bleeding, which would otherwise have been expected to be higher in the newer P2Y12 group. Bivalirudin has been shown to be safer regarding bleeding than heparin with GPIIb-IIIa receptor inhibitors [14,26-28].
Mortality
Mortality at 14 days was similar in both groups (37.5% in the clopidogrel group versus 30.8% in the newer P2Y12 group). There was a strong trend towards lower 30-day mortality in the newer P2Y12 group (35.5% versus 48.8%; p = 0.07). At one year, significantly fewer patients died in the newer P2Y12 group (43.0% versus 58.8%; p = 0.039) (Table 2 and Figure 1).
Discussion
To the best of our knowledge, this analysis represents the first study to compare the influence of the newer P2Y12 and clopidogrel on one-year mortality in STEMI patients with CS and/or after CPR. According to our analysis, these patients had better long-term survival if treated with newer P2Y12 than patients treated with clopidogrel. A strong trend towards better 30-day survival in the newer P2Y12 group was also noticed (p = 0.07). Furthermore, the incidence of acute stent thrombosis was similar, although bivalirudin was used more often in the newer P2Y12 group. This may be explained by the use of a higher dose of bivalirudin after PCI (1.75 mg/kg/h) and the pre-hospital administration of newer P2Y12 [29,30]. Whether or not pre-hospital administration of P2Y12 plays a role in these patients is also unknown. In STEMI patients without CS or after CPR, the pre-hospital administration of ticagrelor was not associated with a better outcome [30]. It is known that in STEMI patients with CS or after CPR, optimal platelet inhibition occurs hours after administration of P2Y12 [1,18,20]. The time delay of P2Y12 administration would therefore be expected to be of lesser importance in these patients, since the pre-hospital setting precedes in-hospital administration by less than one hour. We did not address that question. Our study suggests that administration of the newer P2Y12 may be advocated in this very-high-risk group of patients. A large prospective, randomized, multi-center trial is required to answer the remaining questions.
Limitations
This was an observational single-center study; the overall number of patients was low and the number of included patients differed between the groups. All ischemic and hemorrhagic events were available for the hospitalization but not for the follow-up period. Platelet inhibition was not measured, so we can only speculate that effective platelet inhibition with newer P2Y12 was different than with clopidogrel. Both prasugrel and ticagrelor were used, despite the fact that prasugrel is a pro-drug. However, up to now, we do not have data on whether bio-activation of these two drugs differs in critically ill patients and whether this affects the clinical outcome.
Conclusion
Long-term mortality was significantly lower in STEMI patients presenting with CS and/or after CPR when newer P2Y12 were used, and a strong trend towards better 30-day mortality was present. No difference in major bleeding or acute stent thrombosis was noticed. Our study suggests newer P2Y12 may be beneficial in this special subset of patients.
3,595.2
2016-02-26T00:00:00.000
[ "Medicine", "Biology" ]
Cryogenic integrated spontaneous parametric down-conversion
Scalable quantum photonics relies on interfacing many optical components under mutually compatible operating conditions. To that end, we demonstrate that spontaneous parametric down-conversion (SPDC) in nonlinear waveguides, a standard technology for generating entangled photon pairs, squeezed states, and heralded single photons, is fully compatible with the cryogenic operating conditions required for superconducting detectors. This is necessary for the proliferation of integrated quantum photonics in integration platforms exploiting quasi-phase-matched second-order nonlinear interactions. We investigate how cryogenic operation at 4 K affects the SPDC process by comparing the heralding efficiency, second-order correlation function and spectral properties with operation at room temperature.
Generation, manipulation, and detection of quantum light are based on a wide range of photonic quantum technologies [1][2][3]. In a fully-integrated platform, each technology must be mutually compatible [2], not only with regard to the optical degrees of freedom of interest, but also their operating conditions. While nonlinear optics, in particular frequency conversion and electro-optic manipulation, is optimized for operation under ambient conditions, many quantum photonic technologies, in particular superconducting single-photon detectors and low-noise single emitters, require cryogenics [4,5]. To enable the mutual integration of these components, it is therefore necessary to adapt existing techniques and technologies to be functional in the same environment. SPDC is a nonlinear interaction in a material exhibiting a second-order optical nonlinearity, in which a pump photon spontaneously decays into two daughter photons. Energy and momentum conservation (phase-matching) dictate the wavelengths at which these daughter photons are emitted. In various configurations, SPDC can be used to generate heralded single photons [9], entangled states [10], and squeezed states [11]. This process is well-established as a means to generate nonclassical light under ambient conditions. SPDC has been used extensively as a source of nonclassical light in materials such as lithium niobate, in which phase-matching is achieved through periodic inversion of the spontaneous polarization of the crystal [12][13][14][15]. Furthermore, waveguides can be fabricated in this material, further enhancing the nonlinear interaction and allowing additional control of the phase-matching properties. Titanium in-diffused waveguides in particular have shown a wide range of functionality combining quantum light sources and electro-optic processing on low-loss chips [16][17][18][19][20]. Demonstrating this functionality under cryogenic operating conditions is therefore vital to augment the range of integrated circuits that can be implemented. In this paper, we show that cryogenic SPDC is indeed possible, and discuss variations to the phase-matching caused by the large temperature change. We show photon pair emission and joint spectral measurements when the sample is cooled to 4.7 K. This is directly compared to the same sample at room temperature. The SPDC process we consider is quasi-phase-matched Type-II SPDC in titanium in-diffused waveguides in periodically poled lithium niobate [18,21,22]. This interaction results in photons generated in orthogonal polarization modes, traditionally called signal and idler.
In this configuration, the signal and idler modes can be deterministically separated, and the measurement of one photon can be used to herald the presence of the other. In order to understand SPDC under cryogenic conditions, we study the temperature-dependent variations in the spectral properties of signal and idler. These are dictated by energy and momentum conservation during the nonlinear interaction. Energy conservation is specified by the pump beam; it is independent of the sample temperature T. By contrast, momentum conservation is determined by the crystal length, its dispersion, and the poling period, all of which exhibit temperature dependence. Momentum conservation is governed by the difference in propagation constants k of the interacting modes, given by

∆k = k_p(λ_p) − k_s(λ_s) − k_i(λ_i), (1)

where λ is the wavelength, and the subscripts p, s, i denote the pump, signal and idler modes, respectively. In general, due to the dispersion properties of the material, the interacting modes are not phase-matched, i.e. ∆k ≠ 0. However, in many second-order materials, momentum conservation can be achieved by introducing an additional contribution to the momentum, arising from a periodic inversion of the crystal symmetry, known as periodic poling [13]. Eq. 1 is thus modified to ∆k′ = ∆k ± 2π/Λ, where Λ is the poling period of the crystal. Thus, Λ can be chosen such that ∆k′ = 0 for a given combination of signal and idler wavelengths λ_s,i. By writing k = 2πn/λ, where n is the refractive index, we can account for the temperature dependence of the interaction, as well as the thermal contraction of the poling period Λ(T). Without loss of generality, the phase-matching condition may be written as

∆k′(λ_s, λ_i, T) = 2π [ n_TE(λ_p, T)/λ_p − n_TE(λ_s, T)/λ_s − n_TM(λ_i, T)/λ_i ] − 2π/Λ(T) = 0, (2)

where n_TE(λ, T) and n_TM(λ, T) are the temperature-dependent effective refractive indices of the transverse-electric (TE) and transverse-magnetic (TM) polarized modes. For the following calculations, we use temperature dependent effective refractive index data based on Sellmeier equations for bulk lithium niobate [23,24], which are extrapolated for low temperatures and modified for the waveguide geometry. An additional correction is applied to the extrapolated data, which is empirically obtained by measuring the second harmonic generation (SHG) spectra during the cool-down of the waveguide (for details on the extrapolation and measurement procedure, see [25]). Empirical data for the thermal expansion coefficient of lithium niobate in the y-direction is available for T ≥ 60 K [26]. The expansion coefficient tends to zero at 0 K; therefore, when extrapolating for temperatures below 60 K, the sample length and the poling period can be assumed constant. We calculate the changes in the phase-matching properties for temperatures in a range from 300 K down to 0 K by computing the signal and idler wavelength pair (λ_s, λ_i) which fulfills energy conservation and temperature dependent momentum conservation (Eq. 2). The calculation is performed for a pump wavelength of λ_p = 778 nm and for four poling periods, which are chosen to enable spectrally degenerate SPDC around room temperature (see Fig. 1). From room temperature to 4 K, the wavelengths of signal and idler are expected to shift by about 90 nm. In fact, the temperature dependence of phase-matching is often exploited to achieve a desired nonlinear interaction, by placing the sample on a temperature controlled mount [18]. Under cryogenic operation, thermal tuning is not possible, therefore Fig.
1 shows the importance of achieving the correct poling period for operation at a fixed temperature. These calculations account for macroscopic, welldefined changes to the refractive indices. In many second-order materials, other temperature-dependent effects may change the resulting spectral behavior. In the case of lithium niobate, the material is pyroelectric, piezoelectric, and photorefractive [27,28], meaning temperature changes (rather than absolute temperature) and optical power can locally alter the refractive index and therefore the phase-matching. Previous work has shown this may be a transient effect during temperature cycling, which nevertheless exhibits stable operation under constant temperature [25]. Building on these calculations, we experimentally investigated the phase-matching and other source properties of a lithium niobate waveguide chip operated under cryogenic conditions. The waveguide sample is fabricated by titanium in-diffusion into a z-cut congruently-grown uncoated lithium niobate chip. These waveguides support low-loss propagation of TE and TM polarization modes, with losses down to 0.03 dB/cm at 1550 nm under room temperature operation [16,29]. A single chip of length 24.4 mm contains 16 groups of three straight waveguides, and each waveguide group is periodically poled with poling periods from 8.98 µm to 9.12 µm in increments of 0.02 µm. At room temperature, these poling periods allow for degenerate Type-II SPDC with the signal and idler wavelength in the telecom C-band. The experimental setup is shown in Fig. 2. We place our waveguide sample inside a free-space coupled cryostat, with a base temperature of 4.7 K. The waveguide end facets are optically accessible through transparent windows which allows us to couple the laser beam to the waveguide with aspheric lenses positioned outside the cryostat. The in-and out-coupling lenses are antireflection coated for the pump and down-converted light, respectively. We can move the sample inside the cryostat with a motorized mount to change between waveguides and therefore different poling periods at any time. In order to prepare the pump beam for the SPDC process, we use the SHG signal from a bulk periodically poled MgO-doped lithium niobate (MgO:PPLN) crystal (heated to about 70 • C). We pump the SHG with an ultrashort pulsed infrared laser with a repetition rate of 80 MHz. The pump spectrum exhibits a Gaussian shape with a central wavelength of (778.0 ± 0.1) nm and a bandwidth of (3.2 ± 0.1) nm. We set the polarization of the pump light to excite the TE waveguide mode, in order to pump Type-II SPDC. Following the chip, long-pass filters (LPF) remove the high-energy pump beam, and the signal and idler photons are separated via a broadband polarizing beam splitter (PBS) before being coupled into single-mode fibers. We use superconducting nanowire single-photon detectors (SNSPDs) located in a separate cryostat to measure the photons. Single counts and coincidences of signal and idler are recorded with a time-tagging module. We characterize our SPDC source at room temperature (295 K) and under cryogenic conditions (4.7 K). The free-space coupled cryostat shown in Fig. 2 allows us to use exactly the same setup for both measurements. Moreover, it enables us to optimize the beam coupling to the waveguide end facets at any time. We characterize our source with regard to the spectral properties and the source performance metrics. 
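To make the temperature-dependent phase-matching calculation of Eq. 2 and Fig. 1 concrete, the sketch below finds the operating point by scanning signal wavelengths and minimizing |∆k′| under energy conservation. The function `n_eff` is only a crude toy placeholder for the corrected, waveguide-adapted Sellmeier model described above (which is not reproduced here), the example poling period of 9.02 µm is simply one value within the chip's stated range, and the TE/TM assignment of signal and idler follows the convention adopted in Eq. 2.

```python
# Sketch only: solve the temperature-dependent phase-matching condition (Eq. 2).
# n_eff is a TOY placeholder; replace it with the corrected effective-index model.
import numpy as np

def n_eff(lam_um, T, pol):
    # crude illustrative dispersion, NOT real lithium niobate data
    base = 2.20 if pol == "TM" else 2.25
    return base - 0.05 * (lam_um - 1.55) + 1e-5 * (T - 295.0)

def delta_k_prime(lam_s, lam_p, T, poling_period):
    lam_i = 1.0 / (1.0 / lam_p - 1.0 / lam_s)            # energy conservation
    dk = 2 * np.pi * (n_eff(lam_p, T, "TE") / lam_p
                      - n_eff(lam_s, T, "TE") / lam_s
                      - n_eff(lam_i, T, "TM") / lam_i)
    return dk - 2 * np.pi / poling_period                # quasi-phase-matching term

def phase_matched_pair(T, lam_p=0.778, poling_period=9.02):
    """Scan signal wavelengths (um) and return the pair minimizing |dk'|."""
    lam_s_grid = np.linspace(1.40, 1.75, 20001)
    dk = np.array([delta_k_prime(l, lam_p, T, poling_period) for l in lam_s_grid])
    lam_s = lam_s_grid[np.argmin(np.abs(dk))]
    lam_i = 1.0 / (1.0 / lam_p - 1.0 / lam_s)
    return lam_s, lam_i
```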
We investigate the spectral properties by employing a home-built scanning-grating spectrometer setup (for details, see Appendix A). We measure the marginal spectra by inserting one spectrometer into the signal or idler path in front of the polarization controller. Afterwards, we perform a measurement of the joint spectral intensity (JSI) by applying one spectrometer to the signal path, and another one to the idler path. These results are shown in Fig. 3. Our results show that cryogenic operation of SPDC is possible and that the spectral properties behave as expected. For both measurements, the marginal spectra exhibit a Gaussian shape and the JSI is represented by an elongated ellipse. The obtained central wavelengths of signal and idler are summarized in Table I. The measured wavelengths are in very good agreement with our simulation of a wavelength shift of about 90 nm (compare Fig. 1). The Gaussian fits provide the spectral bandwidths ∆λ_s and ∆λ_i of signal and idler. The signal bandwidth decreases during the cooldown from ∆λ_s = (32.08 ± 0.28) nm to ∆λ_s = (27.4 ± 0.5) nm, while the cryogenic idler bandwidth ∆λ_i = (17.94 ± 0.21) nm is unchanged compared to the room temperature result of ∆λ_i = (17.27 ± 0.10) nm. The decrease in the signal bandwidth matches the slight change in the angle of the measured JSI, which agrees with our theoretical predictions. In order to simulate the JSI, we take into account the pump wavelength and spectral width, the poling period, and the effective refractive indices. We keep the effective length of the waveguide as a variable parameter and perform an optimization until the simulation fits the measured JSI best (for details on the simulation, see Appendix B). According to the optimization, the effective length decreases from (7.3 ± 0.3) mm to (3.65 ± 0.05) mm when cooling down the sample. We expect that the decrease in effective length is due to photorefractive, pyroelectric, and piezoelectric effects, which can impair the mode guiding properties of the waveguide. In addition to the spectral features of the cryogenic source, we compare the source performance metrics at both room temperature and 4.7 K. We study the brightness B, the Klyshko (heralding) efficiency η_Klyshko, the coincidences-to-accidentals ratio CAR, and the heralded autocorrelation function g^(2)_h(0). These results are summarized in Table I. The brightness of our source is given by B = C_si/P_trans, where C_si is the coincidence rate of signal and idler, and P_trans is the transmitted pump power. The brightness of a waveguide source scales with the effective length of the sample [30]. Our spectral measurements indicate a reduction of the effective length by approximately half, which indeed results in a commensurate reduction in the brightness by the same factor. The reduced brightness also affects the signal-to-noise of the source, which is evident in the Klyshko efficiency η_Klyshko = C_si²/(C_s C_i), where C_s and C_i are the single count rates. Our results show the Klyshko efficiency at cryogenic temperatures is roughly halved compared to room temperature, consistent with the reduced brightness at constant noise. A decrease in the signal-to-noise ratio is further verified by investigating the coincidences-to-accidentals ratio, which, in the low generation probability regime, is given by CAR = (C_si R_rep)/(C_s C_i), where R_rep is the laser repetition rate.
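The figures of merit quoted here follow directly from measured count rates; a small helper like the following sketch could compute them, including the heralded autocorrelation g^(2)_h(0) defined later in the text. The variable names and the example rates are hypothetical, not the measured values behind Table I.

```python
# Sketch only: source figures of merit from measured count rates (hypothetical inputs).
def source_metrics(C_s, C_i, C_si, P_trans, R_rep):
    B = C_si / P_trans                      # brightness
    klyshko = C_si**2 / (C_s * C_i)         # symmetrized Klyshko (heralding) efficiency
    car = C_si * R_rep / (C_s * C_i)        # coincidences-to-accidentals ratio (low-gain regime)
    return B, klyshko, car

def heralded_g2(C_s1s2i, C_s1i, C_s2i, C_i):
    # heralded autocorrelation with a 50:50 splitter in the signal arm
    return (C_s1s2i * C_i) / (C_s1i * C_s2i)

# Example with made-up rates (counts per second, repetition rate in Hz):
print(source_metrics(C_s=2.0e5, C_i=1.8e5, C_si=3.0e4, P_trans=1.0, R_rep=80e6))
print(heralded_g2(C_s1s2i=2.0, C_s1i=1.5e4, C_s2i=1.4e4, C_i=1.8e5))
```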
Compared to room temperature, we observe a lower CAR value at 4.7 K by a factor of approximately √2, which is consistent with the reduced effective length. Finally, we measured the heralded autocorrelation function to investigate the photon number purity of our source. We add a 50:50 fiber beam splitter to the signal path in front of the polarization controller. For this configuration, the heralded autocorrelation function is calculated by g^(2)_h(0) = (C_s1s2i C_i)/(C_s1i C_s2i), where C_s1i and C_s2i are the coincidence rates of the two signal photons with the idler photons, and C_s1s2i are the threefold coincidences. The heralded g^(2)_h(0) remains well below the classical threshold of 1, but increases by a factor of two with respect to the room temperature value. At cryogenic temperatures, all figures of merit are consistent with a reduction in the signal-to-noise ratio by a factor of two, compared with room temperature operation. We expect this decrease to be due to photorefractive effects which distort the guided mode [31], depending on laser intensity and exposure time. This distortion reduces conversion efficiency. Nevertheless, the figures of merit demonstrate a high-quality SPDC source at cryogenic temperature. Demonstrating mutual compatibility of operating conditions is crucial for the proliferation of quantum technologies. As part of this process, we demonstrated that spontaneous parametric down-conversion in quasi-phase-matched waveguides is compatible with the operating temperatures required for superconducting detectors. Despite changing the operating temperature by nearly two orders of magnitude, the source remained fully operational. This makes our source competitive for a wide variety of fully-integrated quantum circuits.
ACKNOWLEDGMENTS
This project is supported by the Bundesministerium für Bildung und Forschung (BMBF), Grant Number 13N14911.
Appendix A: Scanning-grating spectrometer design
We investigate the spectral properties of our spontaneous parametric down-conversion (SPDC) source by employing a home-built scanning-grating spectrometer with single-photon resolution. The setup of our spectrometer is shown in Fig. 4. The design comprises a diffraction grating, placed in a motorized rotation mount, which allows us to scan the incident angle. We filter the back-reflected wavelength components by coupling to a single-mode fiber. To enable wavelength-insensitive fiber coupling, we use reflective collimators at the input and output, which are based on an off-axis parabolic mirror. This collimator design enables wavelength-independent collimation, which is important to achieve comparable performance given the expected wavelength shifts of up to 100 nm across the temperature range. The fiber-to-fiber throughput of this device is (66 ± 2) %, and it exhibits a transmission bandwidth of (0.909 ± 0.007) nm, which was verified independently using a tunable diode laser at the wavelength range of interest (1440 nm-1640 nm). We achieve broadband spectral resolution at the single-photon level with extremely low noise by counting the transmitted photons with a superconducting nanowire single-photon detector (SNSPD). We measure the marginal spectra by applying one spectrometer to the signal or idler path, respectively. The other path remains unfiltered. In order to keep uncorrelated noise counts as low as possible, we detect a heralded spectrum by measuring the transmitted photons in coincidence with the herald photons.
Thus, the coincidence counts drop to approximately zero at the edge of the marginal spectra (see Fig. 3). To measure the joint spectral intensity, we insert one spectrometer into the signal path and another one into the idler path. We scan the signal wavelength over a range of 30 nm with a resolution of 1 nm, while performing a full measurement of the idler spectrum for every signal wavelength step. This way, by measuring the coincidence counts of signal and idler, we capture the JSI of our SPDC source.
Appendix B: Theoretical simulations
In order to evaluate our spectral measurement results and to estimate the effective length of our waveguide, we perform a simulation of the joint spectral intensity (JSI) at room temperature and at 4.7 K. We start with the description of the photon-pair state generated by an SPDC source, which can be written as [32]

|ψ⟩ = ∫∫ dω_s dω_i f(ω_s, ω_i) â†_s(ω_s) â†_i(ω_i) |0⟩, (B1)

where â†_s(ω_s) and â†_i(ω_i) are the photon creation operators of the frequency modes ω_s and ω_i. The term f(ω_s, ω_i) is the joint spectral amplitude (JSA), which provides a full description of the spectral properties of signal and idler. It combines the pump distribution function α(ω_s + ω_i) and the phase-matching function Φ(ω_s, ω_i), associated with the energy and momentum conservation of the nonlinear interaction. The JSA is defined by

f(ω_s, ω_i) = N α(ω_s + ω_i) Φ(ω_s, ω_i), (B2)

where N is a normalization constant. Since our pump spectrum exhibits a Gaussian shape, we write the pump distribution function, for a pair of signal and idler frequencies ω_s and ω_i, as

α(ω_s + ω_i) = exp[ −(ω_s + ω_i − ω_p)² / (2σ²) ], (B3)

where ω_p is the central pump frequency, and σ is the standard deviation, directly related to the spectral bandwidth ∆ω_p according to ∆ω_p = 2√(2 ln 2) σ. We identify ω_p and ∆ω_p by measuring the intensity profile of our pump beam with a commercially available optical spectrum analyzer. The phase-matching function is given by [33]

Φ(ω_s, ω_i) = sinc( ∆k′(ω_s, ω_i) L / 2 ), (B4)

where L is the effective length of the waveguide, and ∆k′ is the phase-mismatch of the propagation constants. By applying Eq. 1 together with the modification ∆k′ = ∆k − 2π/Λ and writing the wavelength as λ = 2πc/ω, we can express the phase-mismatch by

∆k′(ω_s, ω_i) = [ n_p(ω_s + ω_i, T)(ω_s + ω_i) − n_s(ω_s, T) ω_s − n_i(ω_i, T) ω_i ] / c − 2π/Λ(T). (B5)

While the pump distribution function is independent of temperature, we need to take the temperature dependence of the phase-matching function into account. Therefore, we include the thermal contraction of the interaction length L(T) and the poling period Λ(T), as well as the temperature dependence of the effective refractive indices n(ω, T), into our calculation of Eq. B4. We perform the simulation of the JSI for a two-dimensional array of frequencies (ω_s, ω_i), centered around ω_p/2. For every frequency pair (ω_s, ω_i), we calculate the pump distribution function for our pump laser source from Eq. B3, and the phase-matching function for our waveguide sample from Eq. B4. Next, we multiply the amplitudes of α(ω_s + ω_i) and Φ(ω_s, ω_i), according to Eq. B2. The JSI is then obtained from the JSA by taking the absolute square: JSI = |f(ω_s, ω_i)|². In order to determine the effective length of the waveguide, we keep L(T) in Eq. B4 as a variable parameter, with an upper limit set to the length of the complete waveguide. While keeping all other parameters fixed, we simulate the JSI with the same resolution as our measurement data for different effective lengths. We perform an optimization by comparing the simulated JSIs to the normalized measured JSI and determining the simulation with the smallest standard deviation.
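The following sketch outlines this JSI simulation and effective-length fit. It is illustrative only: `n_eff` is again a placeholder for the temperature-corrected effective-index model, the measured JSI array is assumed to be normalized to unit maximum, and a simple grid search stands in for whatever optimizer was actually used.

```python
# Sketch only: simulate the JSI (Eqs. B2-B5) and fit the effective length L.
import numpy as np

C = 299792458.0  # speed of light, m/s

def jsi(omega_s, omega_i, omega_p, sigma, L, poling, T, n_eff):
    """omega_s, omega_i: 2D arrays (rad/s); L and poling in metres."""
    pump = np.exp(-((omega_s + omega_i - omega_p) ** 2) / (2 * sigma ** 2))
    dk = (n_eff(omega_s + omega_i, T, "p") * (omega_s + omega_i)
          - n_eff(omega_s, T, "s") * omega_s
          - n_eff(omega_i, T, "i") * omega_i) / C - 2 * np.pi / poling
    phi = np.sinc(dk * L / (2 * np.pi))   # np.sinc(x) = sin(pi x)/(pi x)
    f = pump * phi
    return np.abs(f) ** 2 / np.max(np.abs(f) ** 2)

def fit_effective_length(measured_jsi, grids, pars, n_eff, L_max=24.4e-3, n_steps=200):
    """Return (L, rms) minimizing the RMS deviation from the normalized measured JSI."""
    omega_s, omega_i = grids
    best = (None, np.inf)
    for L in np.linspace(L_max / n_steps, L_max, n_steps):
        model = jsi(omega_s, omega_i, pars["omega_p"], pars["sigma"], L,
                    pars["poling"], pars["T"], n_eff)
        rms = np.sqrt(np.mean((model - measured_jsi) ** 2))
        if rms < best[1]:
            best = (L, rms)
    return best
```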
The resulting joint spectral intensities, for a sample temperature of 295 K and 4.7 K, are shown in Fig. 3. In Fig. 5 we display the same simulated JSIs, together with the corresponding pump distribution and phase-matching functions, in a higher resolution. Based on the simulations that best matched our measured data, we assume a decrease in the effective length from (7.3 ± 0.3) mm to (3.65 ± 0.05) mm. It can be seen from Fig. 5 that the decreased effective length at cryogenic temperatures corresponds to a broadening of the phase-matching function, and therefore the JSI. This broadening is also clearly visible in our measured joint spectral intensities shown in Fig. 3. Moreover, the simulations verify the slight change in the angle of the measured JSI, since there is also a change in the slope of the pump distribution function, when displayed in dependence of signal and idler wavelengths. Compared to the pump distribution function, there is only a very small change in the angle of the phase-matching function, which results from a change in the group velocities of the interacting modes.
4,855.2
2021-10-14T00:00:00.000
[ "Physics" ]
Effect of Sintering Temperature on Adhesion Property and Electrochemical Activity of Pt/YSZ Electrode The (Pt/YSZ)/YSZ sensor unit is the basic component of the NOx sensor, which can detect the emission of nitrogen oxides in exhaust fumes and optimize the fuel combustion process. In this work, the effect of sintering temperature on adhesion property and electrochemical activity of Pt/YSZ electrode was investigated. Pt/YSZ electrodes were prepared at different sintering temperatures. The microstructure of the Pt/YSZ electrodes, as well as the interface between Pt/YSZ electrode and YSZ electrolyte, were observed by SEM. Chronoamperometry, linear scan voltammetry, and AC impedance were tested by the electrochemical workstation. The results show that increasing the sintering temperature (≤1500 °C) helped to improve adhesion property and electrochemical activity of the Pt/YSZ electrode, which benefited from the formation of the porous structure of the Pt/YSZ electrode. For the (Pt/YSZ) electrode/YSZ electrolyte system, O2− in YSZ is converted into chemisorbed O2 on Pt/YSZ, which is desorbed into the gas phase in the form of molecular oxygen; this process could be the rate-controlling step of the anodic reaction. Increasing the sintering temperature (≤1500 °C) could reduce the reaction activation energy of the Pt/YSZ electrode. The activation energy reaches the minimum value (1.02 eV) when the sintering temperature is 1500 °C. Introduction Emission gases, including oxides of carbon, oxides of nitrogen and oxides of sulfur, have been a highly regarded research area because of the growing awareness of environmental protection. Nitrogen oxides (mainly NO and NO 2 ) bring acid rain and photochemical smog, which pose a great threat to human health and environmental safety [1]. NO x sensor is a key device to control this problem by monitoring the NO x content in exhaust gas and optimizing the fueling combustion process [2][3][4]. At present, NO x sensors are mainly divided into the following four types: potential type, mixed potential type, complex impedance type and current type [5][6][7][8], of which the current type sensor is the only one commercially used until now. The structure of this type of NO x sensor is shown in Figure 1a which contains a small hole in the left side for the exhaust gas to enter. This sensor mainly consists of two cavities and three oxygen-pumping cells. The pump oxygen battery has a (Pt/YSZ) electrode/YSZ electrolyte sensor unit structure, which consists mainly of an oxygen-vacancy-rich YSZ electrolyte and two highly electrochemically active Pt/YSZ electrode. The detection process is shown in Figure 1b,c. The main pump and auxiliary pump fully pump the O 2 in the exhaust gas in the two cavities, inducing the conversion of NO 2 into NO, and NO is finally decomposed into N 2 and O 2 . When the decomposed O 2 is pumped away by the measuring pump oxygen cell, the concentration of decomposed O 2 can be obtained by measuring the corresponding pump current, and the NO x concentration can be obtained after conversion. In this NO x sensor unit structure, the interface matching of YSZ solid electrolyte and Pt/YSZ electrode, as well as the electrochemical activity of Pt/YSZ electrode, seriously affect the performance of NO x sensors. Moreover, the electrode composition, electrode thickness, sintering process, microstructure, and morphology are all key factors for the Pt/YSZ electrode [9][10][11][12]. Jaccoud et al. 
[10] found that the Pt electrode prepared by Pt slurry had better electrochemical performance than that prepared by sputtering. Nurhamizah [13] found that the Pt electrode with porous structure has better electrochemical performance. Boer [14] and Xia [15] found that a certain proportion of YSZ powder in the Pt slurry could improve the porous structure of the Pt electrode and improve the adhesion of the Pt electrode to the YSZ electrolyte. Li et al. [16] found that properly increasing the sintering temperature could promote the electrode to obtain greater electrochemical activity and more three-dimensional network electrode structure. In most of the previous research, YSZ green tapes were firstly sintered to high density ceramic as electrolyte, on which the Pt/YSZ electrode slurry was printed and finally sintered together. However, this method is not suitable for NOx sensors with multilayer ceramic structures and different functional electrodes. For the (Pt/YSZ)/YSZ sensor unit, Pt/YSZ electrode slurry is firstly printed on the surface of YSZ green tapes, and then followed by high temperature co-sintering. In general, defects such as warpage, cracking and delamination would be the main challenge in the sintering process. The problem of inconsistent sintering shrinkage between electrode and electrolyte is a major difficulty in the research of Pt/YSZ electrode. Another difficulty is ensuring the excellent electrochemical activity of the Pt/YSZ electrode. It is well known that increasing sintering temperature, as a key part of the electrode preparation, has huge impact on electrode microstructure and electrochemical performance [17]. The properties of Pt/YSZ electrodes printed on sintered YSZ ceramics are widely studied, but there are few systematic reports on the effect of sintering temperatures above 1400 °C on adhesion property and electrochemical activity of Pt/YSZ electrodes [12,18,19]. In this experiment, the effect of sintering temperature up to 1550 °C on the performance of Pt/YSZ electrode was studied. The chrono-current, linear scanning voltammetry and AC impedance of the Pt/YSZ electrodes were tested by an electrochemical workstation. The microstructure of the Pt/YSZ electrodes, as well as the interface between Pt/YSZ electrode and YSZ electrolyte, were also observed by SEM.
Experimental Procedure
The YSZ green tapes were prepared by tape casting process. Triethanolamine (AR), polyvinyl butyral (PVB, AR), polyethylene glycol (PEG, AR) and dibutyl phthalate (DBP, AR) were used as dispersant, binder and plasticizer, respectively. Ethanol (AR) and butanone (AR) were used as solvent. The above reagents were purchased from Sinopharm Chemical Reagent Co., Ltd. The physical properties of YSZ and Pt materials were summarized in Table 1. The Pt/YSZ electrode slurry (Pt + 15 wt.% YSZ powder) was provided by GRINMAT Engineering Institute Co., Ltd. The Pt/YSZ electrode slurry was printed on the YSZ green tapes, and the (Pt/YSZ)/YSZ sensor units were obtained by sintering at different sintering temperatures under air condition. The schematic diagram of the sample preparation process of the (Pt/YSZ)/YSZ sensor unit is shown in Figure 2. The microscopic morphology of the samples was observed by SEM (JSM-7610F, Japan). The electrical properties of the Pt/YSZ electrodes were tested by a CHI660D electrochemical workstation, the mixture of gases with 10 vol.% O2 and 90 vol.% N2 was added during the test. Specific test parameters were as follows: (1) Chrono-amperometric experiment: set a fixed voltage of 600 mV between the two poles, the scanning time was 0-180 s, the test temperature was 750 °C, (2) Linear scan voltammetry test: the scanning voltage was −0.6 V to 0.6 V, the test temperature was 750 °C, (3) AC Impedance (EIS) Test: Set the frequency range to 0.001 Hz-10 MHz, the signal voltage was 500 mV, the test temperatures were 600 °C, 650 °C, 700 °C, 750 °C and 800 °C, respectively.
Micromorphologies of Pt/YSZ Electrodes
The micromorphology of the Pt/YSZ electrodes under different sintering temperatures is shown in Figure 3. The YSZ grains are dispersed among the Pt grains, which reduces the aggregation of Pt grains and promotes the formation of a porous structure in Pt/YSZ electrode [20]. When the sintering temperature reaches 1550 °C, the Pt grains gradually grow, aggregate and even melt together, which destroys the porous structure of the Pt/YSZ electrode. It was reported that extremely high temperature could cause over-sintering of the electrolyte material, which results in the pore size and porosity decrease, and the active area of electrode reaction presents a first increasing and then decreasing trend [12,16]. When the sintering temperature decreases to 1500 °C, the large particles of Pt and the small particles of YSZ are uniformly mixed together to form an optimal porous electrode structure. With the comparative analysis, it can be concluded that the Pt/YSZ electrode with a sintering temperature of 1500 °C has a better porous structure, which would benefit gas transmission and NOx evaluation process.
Figure 4 shows the EDS analysis of Pt/YSZ electrodes prepared at different sintering temperatures. As the characteristic X-ray peaks of Zr and Pt are similar (Zr (Lα 2.0424) and Pt (Mα 2.0485)), it is difficult to distinguish these two elements by EDS. Zr and Y are solid-dissolved together. In this paper, Y element is used instead of YSZ to interpret the composition of the YSZ and Pt/YSZ.
Adhesion Analysis of Pt/YSZ Electrodes
It is necessary to evaluate the electrode adhesion behavior because the interface mismatch between the Pt/YSZ electrode and the YSZ electrolyte sintered at a high temperature seriously affects the performance of the (Pt/YSZ)/YSZ sensor unit. The Pt/YSZ electrode adhesion was tested using an ultrasonic cleaner (QR-020S, 40 kHz and 120 W). The Pt/YSZ electrode was then placed in a cleaner filled with distilled water and the weight of each sample was measured before and after the test over a test period of 10 min to 90 min. The samples were dried at 120 °C for 30 min before the sample weight was measured. The weight loss of the electrode is calculated as follows [21]: Among them, W0, W1 and W2 represent the weight of the YSZ electrolyte and the weight of the Pt/YSZ electrode before and after ultrasonic vibration, respectively. Figure 5 shows the weight loss of Pt/YSZ electrodes prepared at different sintering temperatures. With the increase of sintering temperature, the weight loss of Pt/YSZ electrodes becomes smaller, which means that increasing the sintering temperature significantly improves the adhesion of Pt/YSZ electrodes.
The matching results under different sintering temperatures can also be reflected in the micro-morphology of the interface between the Pt/YSZ electrode and the YSZ electrolyte, as shown in Figure 6. Combining the secondary electron image and the backscattered electron image, the interpenetrating structure between YSZ particles and Pt particles with a relatively complete interface are clearly observed. At lower sintering temperatures, the flat interface between the electrode and the electrolyte can be clearly observed. It can be explained that the lower sintering temperature cannot provide sufficient diffusion driving force to promote the interdiffusion of YSZ particles and Pt particles. With the increase of the sintering temperature, the grains of YSZ and Pt gradually grow up, the grain boundaries migrate, and the degree of bonding between the grains is significantly enhanced. Combined with the morphologies of Pt/YSZ electrodes in Figure 3, it can be found that the increase of the sintering temperature is beneficial to the sintering matching of the electrode and the solid electrolyte. However, when the temperature reaches 1550 °C, the over-sintering phenomenon of the Pt particles and YSZ particles is very obvious.
Chronoamperometry
A constant potential is applied to the Pt/YSZ electrodes to obtain a current-time curve, which is called chrono-amperometry.
The stability of the electrode can be investigated by observing the change of the current value, and the electrochemical activity can be analyzed by comparing the stable current value [22,23]. The chrono-amperometric curves at different sintering temperatures are shown in Figure 7. Each curve shows the same trend of change, and the current reaches a stable value in a relatively short period of time. In the NOx testing process, the anodic reaction model of the Pt/YSZ electrode system can be expressed as [24]: When the electrode was anodically polarized, O2− in the YSZ electrolyte diffused to the surface of the Pt/YSZ electrode, releasing two electrons to generate oxygen atoms. A portion of the oxygen reached the Pt/YSZ electrode and reacted with the Pt atoms to form PtO. The electrical conductivity of PtO is very poor, which hinders the charge transfer process, thus the current density decreases rapidly in a short time [25]. The number of platinum atoms on the electrochemical reaction site is limited, and the reaction reaches saturation in a short time [24]. The other portion of O2− is carried out by reaction (3), that is, the reaction moves to the interface between the Pt/YSZ electrode and the YSZ electrolyte, and O2− in YSZ is converted into chemisorbed O2 on Pt/YSZ. It is desorbed into the gas phase in the form of molecular oxygen, which does not accumulate at the Pt/YSZ electrode, and the reaction can proceed infinitely, as well as obtain a stable current density. The tendency of the current to decrease first and then stabilize may be the result of the combined effect of the two reaction models.
The possible anodic reaction model of the Pt/YSZ electrode is shown in Figure S1. As the sintering temperature gradually increases, the current of the Pt/YSZ electrode increases first and reaches the maximum value at 1500 • C, followed by sharp current decrease at 1550 • C. Linear Scan Voltammetry Analysis In a certain potential range, apply a continuous triangular wave signal to the Pt/YSZ electrodes, scan from the cathode direction to the anode direction with a constant scanning rate, and record the curve of current versus voltage, which is called linear scan voltammetry (LSV) [26]. By analyzing the cathodic and anodic peaks of the linear voltammetry curve, the possible electrode reactions, the reversibility of the electrode reactions and the source of the reaction products can be studied. The rate of the electrode reactions can be evaluated by comparing the slopes of the curves. The LSV curves of the Pt/YSZ electrodes are shown in Figure 7. No obvious cathodic peaks could be observed, but anodic peaks could be observed at about 0.5 V, which is consistent with previous research [27]. Figure S2 shows the LSV curve of the Pt/YSZ electrode when the scan voltage is expanded to −2 V, but there was still no current saturation plateau. In the Pt/YSZ electrode reaction system, the anodic peak appears because the anodic reaction (1) occurs, where the poor conductive Pt-O produced by the reaction accumulates on the Pt/YSZ electrode, hindering the charge transfer process, which also explains the phenomenon of the current drop shown in Figure 7. No cathodic peak was observed, one possible explanation is that this process of releasing oxygen does not seem to be related to any electrochemical process, but a chemical reaction [28]: Figure S3 shows the photoelectron spectrum obtained from the surface of the Pt/YSZ electrode, only Pt peaks were observed, and no PtO peaks were observed. Although no obvious cathodic peak is observed, the current changed drastically with increasing potential, approximately fitting the curve of the cathodic reaction to a straight line, and the fitting results are shown in Figure 8. It can be seen that these slopes gradually increase as the sintering temperature increases. The slope relationship of these curves is: 1500 • C > 1450 • C > 1550 • C > 1400 • C > 1350 • C. It can be inferred that the cathodic reaction rate of the Pt/YSZ electrode is the fastest when the sintering temperature is 1500 • C, indicating the highest electrochemical catalytic activity. AC Impedance Spectrum Analysis The AC impedance spectrums of the Pt/YSZ electrodes sintered at different temperatures tested at 750 °C is shown in Figure 9. The equivalent circuit is shown in Figure 10, which includes two RCPE elements (Rse and Rct) connected in series with R0. R0 represents the wire resistance, which is not displayed in the AC impedance spectrum. The Rse element corresponds to the first small semicircle in the AC impedance spectrum and is located in the high frequency region, usually representing the resistance of the YSZ electrolyte. Rct corresponds to the second semicircle in the AC impedance spectrum, which is located in the low frequency region and represents the Pt/YSZ electrode resistance. The spectrum can be used to express the difficulty of charge transfer across the interface between the electrode and the electrolyte solution during the electrode reaction process [29]. 
Through the AC impedance, the resistances of the electrolyte and electrode and the activation energy of the reaction can be calculated respectively. The smaller the resistance is, the easier the charge transfer process will become. And the smaller the reaction activation energy is, the higher the electrochemical activity of the electrode will be [30,31]. The diameter of the semicircle on the AC impedance spectrum represents the size of the resistance, and a small diameter is recommended. Obviously, increasing sintering temperature (≤1500 °C) reduces the resistance of the electrodes. However, the electron migration becomes difficult while the sintering temperature reaches 1550 °C. A possible explanation for this phenomenon is the destroyed porous structure of the Pt/YSZ electrode, resulting in an abnormal increase in resistance.
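The equivalent circuit described above (R0 in series with two R‖CPE elements) can be written down directly; the sketch below evaluates its impedance over the tested frequency range so that simulated spectra can be compared with measured Nyquist plots. The parameter values are placeholders, not fitted results from this work.

```python
# Sketch only: impedance of R0 + (Rse || CPE1) + (Rct || CPE2), placeholder parameters.
import numpy as np

def z_cpe(omega, Q, n):
    return 1.0 / (Q * (1j * omega) ** n)

def z_parallel(z1, z2):
    return z1 * z2 / (z1 + z2)

def circuit_impedance(freq_hz, R0, Rse, Q1, n1, Rct, Q2, n2):
    omega = 2 * np.pi * np.asarray(freq_hz)
    return (R0
            + z_parallel(Rse, z_cpe(omega, Q1, n1))    # electrolyte arc (high frequency)
            + z_parallel(Rct, z_cpe(omega, Q2, n2)))   # electrode arc (low frequency)

freqs = np.logspace(-3, 7, 400)                        # 0.001 Hz - 10 MHz, as in the test
Z = circuit_impedance(freqs, R0=5.0, Rse=50.0, Q1=1e-6, n1=0.9,
                      Rct=500.0, Q2=1e-4, n2=0.8)
# Nyquist plot would be: plt.plot(Z.real, -Z.imag)
```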
According to the Arrhenius equation, the variation of the conductivity of the sample with the test temperature (600 °C, 650 °C, 700 °C, 750 °C and 800 °C, respectively) can be expressed as [32]:

σT = A exp(−E/(kT)) (5)

transform it into another form:

ln(σT) = ln A − E/(kT) (6)

Among them: σ is the conductivity (mS·cm−1), T is the test temperature (K), k is the Boltzmann constant (1.38 × 10−23 J/K), E is the activation energy (eV), and A is a constant. Therefore, taking 1000/T as the abscissa and ln σT as the ordinate to draw a curve, according to the fitting slope of the obtained curve, the specific value of the diffusion activation energy can be calculated. The Arrhenius curves of Pt/YSZ electrodes at different sintering temperatures are shown in Figure 11. And the activation energy of the electrode reaction can be obtained by linear fitting of each point, as shown in Table 2. With the increase of sintering temperature, the electrode activation energy first decreases and then increases, reaching the smallest value (1.02 eV) at 1500 °C.
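A sketch of this activation-energy extraction is given below; the conductivity values are hypothetical examples, not the measured data behind Table 2.

```python
# Sketch only: activation energy from a linear fit of ln(sigma*T) vs 1000/T.
import numpy as np

K_B_EV = 8.617333e-5          # Boltzmann constant in eV/K

def activation_energy(T_celsius, sigma):
    """T_celsius, sigma: arrays of test temperatures (degC) and conductivities."""
    T = np.asarray(T_celsius) + 273.15
    x = 1000.0 / T
    y = np.log(np.asarray(sigma) * T)
    slope, intercept = np.polyfit(x, y, 1)
    return -slope * 1000.0 * K_B_EV            # slope = -E/(1000*k_B)

# Hypothetical example at the five test temperatures:
print(activation_energy([600, 650, 700, 750, 800], [0.8, 1.9, 4.1, 8.0, 14.6]))
```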
Increasing the sintering temperature (≤1500 °C) helps to improve the adhesion properties and electrochemical activity of Pt/YSZ electrodes and promotes the formation of a porous electrode structure. For the (Pt/YSZ) electrode/YSZ electrolyte system, O2− in YSZ is converted into O2 chemisorbed on the Pt/YSZ electrode and is then desorbed into the gas phase as molecular oxygen; this process may be the rate-controlling step of the anodic reaction. Increasing the sintering temperature (≤1500 °C) reduces the reaction activation energy of the Pt/YSZ electrode, which reaches a minimum value (1.02 eV) at a sintering temperature of 1500 °C. Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/ma15103471/s1, Figure S1: The possible anodic reaction model of the Pt/YSZ electrode; Figure S2: The LSV curve of the Pt/YSZ electrode when the scan voltage is expanded to −2 V; Figure S3: The photoelectron spectrum obtained from the surface of the Pt/YSZ electrode.
7,770
2022-05-01T00:00:00.000
[ "Materials Science" ]
A Study on the Bifurcation of the Proceeds from Convertible Bonds Issuance and Its Policy Significance for Chinese Accounting Standards It is difficult to separate the proceeds of a convertible bond issue into debt and equity components. Building on several earlier approaches, this paper focuses on an improved approach in which convertible bond issue proceeds are separated into an accrual debt value, an accrual equity value, and an accrual option value, reflecting the debt, equity, and embedded-option characteristics of convertible bonds as hybrid securities. It is concluded that the improved approach is more consistent with modern financial theory and provides a more accurate evaluation of capital structure, and that it has policy implications for Chinese Accounting Standards. Introduction Around the world, the issue size of convertible bonds has grown to more than US$600 billion since the New York and Erie Railroad in the USA issued the first convertible bonds in 1843. The convertibles market collapsed last September after Lehman Brothers filed for bankruptcy protection. Recently, with a rising market in America, new issues have begun to make a comeback, with 9 listed companies and a combined capitalisation of more than 3.2 billion dollars after an extended lull of more than one year. In China, since China Baoan Group Co., Ltd issued the first convertible bonds in 1992, some 40 convertible bonds are (so far) being traded on the Shanghai and Shenzhen stock exchanges, with a combined capitalisation of more than 50 billion Yuan. Chinese firms, spurred by a domestic share market rebounding during the financial crisis, are poised to launch a boom in convertible bond issuance. In fact, convertible market trading volume has hit a near 6-year high: turnover in the fledgling market reached 5.37 billion Yuan in February 2009, the highest since December 2003, and remained relatively high at 3.33 billion Yuan in March 2009. Chinese convertible bond issuance is set to surge again. Convertible bonds can be swapped for equity when the stock rises to a preset level, and if the shares do not perform, holders are protected by the security's value as a bond, which makes them very attractive. However, a convertible bond is a special financial product with an embedded option and the characteristics of both debt and equity, so it is difficult to recognise and measure its potential value. The primary purpose of this paper is therefore to study the bifurcation of the proceeds from convertible bond issuance and its policy significance for current Chinese Accounting Standards. Former Studies on the Division of Convertible Bond Issue Proceeds According to the current accounting standard APB Opinion No. 14, which is still effective in the US today, the issuing company maintains a rigid and simplified classification throughout the life of the convertible debt, even as the value of the option to convert changes after issuance. The International Accounting Standards Board issued International Accounting Standard No.
32 in 1998, requiring the division of convertible bonds into debt and equity but without specific rules on how to calculate the debt and equity values. Instead, the standard offers only suggestions. One is to value the debt component as the present value of the interest payments and the principal, and then subtract this amount from the total value of the issue to arrive at the value of the equity portion. The other suggestion starts from the Black-Scholes model, which directly gives the value of the conversion feature, after which the debt value is obtained by subtracting that value from the total issue proceeds. Before 2006, Chinese accounting standards also viewed convertible bonds as debt. The Chinese Ministry of Finance issued Accounting Standards for Business Enterprises No. 22, No. 34 and No. 37 in 2006, which took effect on January 1, 2007. These new accounting standards require the initial issue price to be split between debt and equity. In addition, in China, according to Zhu (2006, p.69), convertible bond value equals straight bond value plus conversion option value, so that a convertible bond is in fact a hybrid security. Si (2004, p.15) viewed convertible bonds as portfolios of a straight bond and embedded options. Internationally, before Marcelle and Ann (2005, p.44) put forward the expected value method, most studies focused on applying modern option pricing theory to convertible bonds. Vigeland (1982, p.348) was an early researcher who noted that option theory could be applied to the probability and timing of conversions. King (1984, p.419) calculated the option values for many convertible bond issues and treated them as equity. Current accounting practice and theory thus offer a useful framework for applying the accounting standards to the initial value recognition of convertible bonds. However, as a derivative financial instrument, the option of the convertible bond being converted or not converted was not embodied in earlier accounting practice and research. The expected value method put forward by Marcelle and Ann (2005) made a significant contribution in this respect. The expected value method holds that the embedded option of a convertible bond is neither debt nor equity, but, as the underlying price changes, it can give rise to both debt and equity. If the bond is converted, the bond principal does not need to be repaid; the only payments will be the coupons paid prior to conversion. If the bond is not converted, all principal and coupons will be paid. The expected value of the convertible bond is therefore built on the probability of conversion. For a given conversion probability, the expected value of debt is the weighted average of the straight bond value and the present value of the coupons, with the probability of conversion as the weight. Accordingly, the expected value of equity can be obtained by subtracting the expected debt value from the total issue price, or alternatively from the Black-Scholes model. That is to say, if n is the potential number of shares and p is the conversion probability, the expected number of shares is n·p. The probability of conversion p is the key factor, and it can be calculated with the Black-Scholes model.
So the formulas for the expected value of the debt and the expected equity value are as follows:
The expected value of the debt = (1 − p) × (value of the straight bond) + p × (value of the coupons)
The expected equity value = total value − the expected value of the debt
The methodology above, also called the separating method, dynamically separates and recognises the debt and equity values of the total convertible bond issue on the basis of the conversion probability, unlike IAS 32, which treats the straight bond value as the debt value, whether on the date of issuance or thereafter. Improved Approach The potential value of a convertible bond varies with the market price of the underlying stock and with the probability of conversion, which is a dynamic notion; the expected value method uses this notion. A convertible bond is sometimes equity, sometimes debt, and sometimes both; whatever the situation, it always has the character of an option before it is converted, and these probabilities should be embodied in the recognition and measurement of the potential share value of conversion. Firstly, if there is no conversion at maturity, the convertible bond is characterised as debt, but it still contains an option compared with a vanilla bond, so the bond issue proceeds should be divided into the straight bond value and the option value:
Convertible bond issue proceeds = accrual debt value + accrual option value
Accrual debt value = straight bond value = present value of principal and coupons
Accrual option value = convertible bond issue proceeds − accrual debt value
Secondly, if the bond is converted completely at maturity, it is characterised as equity: the bond holder ends up holding the equivalent shares at maturity. The convertible bond issue proceeds are then the sum of the conversion share value and the conversion option value:
Convertible bond issue proceeds = accrual equity value + accrual option value
Accrual equity value = straight equity value = present value of the conversion shares
Accrual option value = convertible bond issue proceeds − accrual equity value
Eventually, if the conversion probability is p, the convertible has probability p of ending up as stock and probability 1 − p of ending up as debt, so the convertible bond issue proceeds can be recorded as the sum of a debt value, an equity value, and a conversion option value. The method analysed above is the improved approach. It implies that if the convertible bonds are entirely converted, the issue proceeds are the sum of the option value inherent in the convertible bond and the straight equity value, which here is the counterpart of the present value of the coupons in the expected value method. Different understandings of the economic reality of convertible bonds therefore lead to different ways of bifurcating the issue proceeds, and the different approaches in turn affect the estimates of debt, equity, diluted EPS, and capital structure. Test of Different Methods Next, we test and analyse the differences among the methods above, taking China Merchants Bank's convertible bonds as an example.
China Merchants Bank, founded in 1987 and reorganised as a joint-stock company in May 1994, was the first joint-stock commercial bank in China and is the largest listed bank by total assets. On November 10th, 2004, it issued a 5-year convertible bond with a face value of 6.5 billion Yuan and a coupon rate of 1.75%, which is 3.75% below the 5.5% rate of a comparable straight bond because of the embedded option. Each 1000 Yuan convertible bond can be converted into 107.066381 shares of common stock at the bond's maturity, i.e. the conversion price is 9.34 Yuan. The issuer cannot call the bonds back. The current price of the stock is 9.1 Yuan, and the stock pays no dividend. The expected price volatility is 17.52%, based on the previous year's (250 working days) closing prices of China Merchants Bank's convertible bonds. The risk-free interest rate of 2.25% is taken from the bank's 5-year loan interest rate for the same term. The income tax rate is 33%, and net income is 3.144 billion Yuan. We have purposely simplified the example by assuming that: 1) the issuer cannot call the bonds back; 2) there are no embedded options except for the call option represented by the conversion feature; 3) the call is European style; and 4) the exercise date is the same as the bond's maturity date. On these assumptions the Black-Scholes model can be used. Tab.1, Tab.2 and Tab.3 detail the company's convertible bond issue, its embedded option, and its initial capital structure. The 5-year convertible bond with a face value of 6.5 billion Yuan and a coupon rate of 1.75% contains 695.93 million options, each valued at 0.55 Yuan using the Black-Scholes model with the parameters shown in the second column, for a total option value of 0.38 billion Yuan. The present value of the 6.5 billion Yuan straight bond without the embedded option is 5.46 billion Yuan, based on its principal and coupons, as displayed in the first column. For the calculation of the expected number of shares and the expected values of debt and equity, the conversion probability is the key ingredient. If the bond has no other embedded options (such as issuer calls), can be converted only at maturity, the stock pays no dividends, and investors are assumed to be rational, then N(d2) in the Black-Scholes model, the probability of the option being in the money at the exercise date, is the conversion probability of the convertible bond. Chinese practice now requires bifurcation of convertible bonds into debt and equity and the reporting of diluted earnings per share (the separating method following the international accounting standard). Four methods are therefore analysed in this paper (the straight debt method, the separating method, the expected value method, and the improved approach), and Tab.4 shows how the calculation of diluted shares and EPS is affected. Because the straight debt method and the separating debt method do not consider the conversion probability, while the expected value method and the improved approach do, the four methods are divided into two kinds in Tab.4: No.1 comprises the straight debt method and the separating debt method, without consideration of conversion probability, and No.2 comprises the expected value method and the improved approach, with consideration of conversion probability.
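As a hedged illustration of the role of N(d2), the following Python sketch computes d2 and the corresponding probability of a European call finishing in the money. The input values are arbitrary placeholders, not the paper's inputs; the conversion probability of 0.864 used in Tab.4 comes from the paper's own parameters and conventions, which this sketch does not attempt to reproduce.

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def conversion_probability(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """N(d2): probability that a European call finishes in the money,
    used in the text as the probability of conversion."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return norm_cdf(d2)

# Placeholder inputs: stock price, conversion price, risk-free rate, volatility, years to maturity
p = conversion_probability(S=10.0, K=9.5, r=0.03, sigma=0.25, T=5.0)
print(f"Illustrative conversion probability N(d2) = {p:.3f}")
```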
Tab.4 shows that the conversion probability p = 0.864 results in 601,284,797 expected conversion shares under the No.2 methods, rather than the 695,931,477.52 conversion shares under the No.1 methods. Because the No.1 methods do not consider the conversion probability and assume complete conversion, their percentage dilution is 10.16%, higher than the 8.78% dilution obtained when the conversion probability of 0.864 is taken into account. To illustrate the impact on diluted EPS, net income is first recalculated by adding back the after-tax interest associated with the convertible issue. In the No.2 methods, only the portion of the after-tax interest associated with the probability of conversion is added back, while in the No.1 methods all of the after-tax interest is added back. The result is a diluted EPS of 0.427 (3.22/7.54) for the No.1 methods; in contrast, the diluted EPS under the No.2 methods is 0.431 (3.21/7.45). The diluted EPS of the No.1 methods is therefore relatively lower and is understated. Moreover, compared with the basic EPS of 0.46, both diluted EPS figures are lower, so the dilution effect is very obvious. From the above analysis we can see that the No.1 methods rest on the assumption of complete conversion; in the straight debt method in particular, convertible bonds are treated entirely as debt, which obviously cannot even portray the economic reality of complete conversion, whereas the No.2 methods do this well. Tab.5 illustrates how the leverage calculation is affected by the four methods: the straight debt method (No.1), the separating debt method (No.2), the expected value approach (No.3), and the improved approach (No.4). Most of the figures are derived in Tab.1, 2 and 3, but two of them require explanation. Firstly, in the expected value method, the expected debt value = (1 − 0.864) × 5.46 + 0.864 × 0.49 = 1.16 billion Yuan. Secondly, in the improved approach, the convertible bond issue proceeds of 6.5 billion Yuan are divided into three parts, debt, equity and option, of 0.74, 4.30 and 1.46 billion Yuan respectively. Both the equity and the option components increase owner's equity, must be credited to owner's equity accounts, and may affect leverage and profitability. In Tab.5, the debt-to-equity ratio after the issue becomes 9.44, 9.27, 8.62 and 8.56 respectively under the different methods, while the debt-to-equity ratio before the issue is the same for all of them. This is because, with the straight debt method, the convertible bonds are viewed entirely as debt, without reflecting the share and embedded option values; the separating debt method regards the straight bond value as the debt value, while the expected value method regards the expected debt value as the debt value. Because the straight bond value is always higher than the expected debt value, the debt value under the separating debt method is always higher than that under the expected value method. With the improved approach, the separated debt value of 0.74 billion Yuan is the smallest among the four methods, which makes the equity value of 5.76 billion Yuan the highest, so the debt-to-equity ratio is also the smallest. The straight debt method overstates the debt-to-equity ratio, whereas the separating debt method, the expected value method, and the improved approach portray the economic reality progressively better through the conversion probability.
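The diluted-EPS adjustment described above can be sketched in a few lines of Python. The coupon interest is the 1.75% coupon on the 6.5 billion Yuan face value, and the basic share count is backed out from the quoted basic EPS, so it is an approximation rather than a figure stated in the paper; the sketch only shows the mechanics of weighting the conversion shares and the after-tax interest add-back by the conversion probability.

```python
def diluted_eps(net_income, basic_shares, conversion_shares, coupon_interest,
                tax_rate, p=1.0):
    """Diluted EPS with a conversion-probability weight p.

    p = 1.0 corresponds to the No.1 treatment (full conversion assumed);
    p < 1.0 corresponds to the No.2 treatment, where only the fraction p of the
    conversion shares and of the after-tax interest add-back is included."""
    add_back = coupon_interest * (1.0 - tax_rate) * p
    adjusted_income = net_income + add_back
    adjusted_shares = basic_shares + conversion_shares * p
    return adjusted_income / adjusted_shares

# Figures in billions of Yuan / billions of shares; basic_shares is inferred from basic EPS.
ni, basic_shares = 3.144, 6.84
conv_shares = 0.696            # potential conversion shares
interest = 6.5 * 0.0175        # annual coupon on the face value
tax = 0.33

print("No.1 (full conversion):", round(diluted_eps(ni, basic_shares, conv_shares, interest, tax, p=1.0), 3))
print("No.2 (p = 0.864):      ", round(diluted_eps(ni, basic_shares, conv_shares, interest, tax, p=0.864), 3))
```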
Conclusion and Its Policy Significance This paper focuses on the improved approach, in which convertible bond issue proceeds are separated into an accrual debt value, an accrual equity value, and an accrual option value, according to the debt, equity, and embedded-option characteristics of convertible bonds as hybrid securities. It is concluded that the improved approach provides a more accurate evaluation of capital structure and therefore reflects the economic reality of convertible bonds more fully. This has several policy implications for Chinese Accounting Standards. Current accounting rules must reflect more fully the economic reality of convertible bonds. Like the International Accounting Standards, Chinese Accounting Standard for Business Enterprises No.37 requires the division of convertible bonds into debt and equity, but it does not value the option, so it does not fully reflect the economic reality of convertible bonds. The definition of liability must accommodate convertible bonds. The Chinese Enterprise Accounting Basic Standards define a liability as probable future sacrifices of economic benefits arising from present obligations of a particular entity to transfer assets or provide services to other entities in the future as a result of past transactions or events. But convertible bonds, viewed as an equity transaction and unissued stock, are not regarded as an asset or as economic benefits; they therefore cannot constitute "future sacrifices of economic benefits" used to redeem debt. There is thus a conflict between the definition of a liability and the characteristics of convertible bonds. Diluted EPS must be computed based on the probability of conversion. Potential shares are included in the calculation of diluted EPS under Chinese Accounting Standard for Business Enterprises No.34, which is a big step forward, but it would be better still if the probability of conversion were built into that calculation. Reference: Si, Zhenqiang, & Zhao, Xia. (2004). Research on accounting for convertible bonds in China. Modern Accounting, 1, 15-17.
Convertible bond issue proceeds = accrual debt value + accrual equity value + accrual option value
Accrual debt value = (1 − p) × straight bond value
Accrual equity value = p × present value of straight equity
Accrual option value = convertible bond issue proceeds − (accrual debt value + accrual equity value)
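A minimal Python sketch of the improved-approach bifurcation formulas above. The conversion probability and straight bond value are the figures quoted in the text, while the present value of the straight equity is backed out from the Tab.5 split and is therefore an approximation; the sketch shows only the arithmetic of the formulas.

```python
def improved_approach(proceeds, straight_bond_value, straight_equity_pv, p):
    """Split convertible bond issue proceeds into accrual debt, equity and option
    values following the improved approach: debt and equity are weighted by the
    conversion probability p, and the option value is the residual."""
    debt = (1.0 - p) * straight_bond_value
    equity = p * straight_equity_pv
    option = proceeds - (debt + equity)
    return debt, equity, option

# Inputs in billions of Yuan; straight_equity_pv (4.98) is inferred, not quoted.
debt, equity, option = improved_approach(proceeds=6.5,
                                         straight_bond_value=5.46,
                                         straight_equity_pv=4.98,
                                         p=0.864)
print(f"debt = {debt:.2f}, equity = {equity:.2f}, option = {option:.2f}")
```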
3,966.2
2009-10-19T00:00:00.000
[ "Business", "Economics" ]
Homogenization of the boundary value for the Dirichlet Problem In this paper, we give a mathematically rigorous proof of the averaging behavior of oscillatory surface integrals. Based on ergodic theory, we find a sharp geometric condition, which we call the irrational direction dense condition, abbreviated as IDDC, under which the averaging takes place. It should be stressed that IDDC does not imply any control on the curvature of the given surface. As an application, we prove homogenization for elliptic systems with Dirichlet boundary data, in $C^1$-domains. H. Shahgholian has been supported in part by the Swedish Research Council. K. Lee has been supported by the Korea-Sweden Research Cooperation Program. This project is part of an STINT (Sweden)-NRF (Korea) research cooperation program. 1. Introduction 1.1. Background. In this paper we consider Dirichlet and related problems with rapidly oscillating data. Although we treat the case of the Laplace equation, our analysis extends straightforwardly to general equations that admit a Poisson/Green representation. Such an integral representation, in turn, reduces the study of the problem to the corresponding integral equation, and hence the analysis of integrals of rapidly oscillating functions becomes central. To fix ideas, let Γ be a C 1 surface in R n (n ≥ 2), not necessarily bounded. We assume g(x, y) is integrable in both variables over Γ, and 1-periodic in the y-variable, i.e., g(x, y + k) = g(x, y) for k ∈ Z n . In this paper, we shall study possible limit behaviors of the integral (1) $\lim_{\varepsilon \to 0} \int_{\Gamma} g\big(y, \tfrac{y}{\varepsilon}\big)\, d\sigma_y$, and we shall prove that under mild conditions on the surface Γ there is an effective limit as ε tends to zero. In general we show that the above integrals stay within the bounds of the interval $\big[\int_{\Gamma} g_*(y, \nu_y)\, d\sigma_y,\ \int_{\Gamma} g^*(y, \nu_y)\, d\sigma_y\big]$, where g * , g * are defined below as the infimum, respectively the supremum, of the average-integrals of g(y, ·) over closed loops of the plane {x : x · ν y = 0} on the torus; here ν y denotes the normal vector of Γ at y. To illuminate the application of this to the Dirichlet problem, let us consider a bounded C 1 -domain Ω ⊂ R n (n ≥ 2). Let further g(x, y) and f (x, y) be continuous in both variables, and periodic in y. Let u ε be a solution to the Dirichlet problem $(P_\varepsilon)$: $\Delta u_\varepsilon = -\mu_\varepsilon$ in Ω, $u_\varepsilon = g(x, x/\varepsilon)$ on ∂Ω, where $\mu_\varepsilon = f(x, x/\varepsilon)\, \chi_{\Gamma_0}\, d\sigma_x$, and Γ 0 is a C 1 surface compactly contained in Ω. Since solutions to this problem can be represented by surface integrals of g and f (through Poisson and Green functions), we may directly apply our results. Hence, a by-product of our surface integral homogenization is a family of nontrivial limit scenarios, as ε tends to zero, for equation (P ε ) and its solution. It is noteworthy that, through integral equations of Fredholm type, or just standard functional minimization, our technique applies to homogenization with oscillating Neumann data (see Section 6.3). For fully nonlinear operators the Neumann problem has been treated in [CKL]; see also [BDLS]. Another issue that arises in such an analysis is the study of the speed of convergence. In problems where the governing partial differential equations have oscillating coefficients, one uses the standard method of expansion in an appropriate space; here u bl,ε is the so-called boundary layer term, and finding it is part of the problem. The function u bl,ε then solves a Dirichlet problem with oscillating boundary data. We refer to [AA] for some background and to [GM] for recent developments on the topic. Remark 1.1.
It is noteworthy that our approach applies to general equations of the type div(A(x/ε)∇u) = f (x/ε) with oscillating Dirichlet/Neumann data. This, even though straightforward, becomes quite technical and is therefore outside the scope of this paper. Thus for clarity of the exposition we shall treat the Laplacian case, only. Heuristics. Let us set , and y − x ∈ Z n , and adopt the standard notation from homogenization, as well as that of ergodic theory. For clarity of the ideas, let us also deal with problem (P ε ) in the case f ≡ 0, and see how it reduces to the integral averaging, and then how the limit integral is obtained. Since harmonic functions can be represented using Poisson kernel P(x, y), u (x) = ∂Ω P(x, y)g(y, y/ ) dσ y , we must then analyze the behavior of this integrals as ε tends to zero. For x ∂Ω, P(x, y) is continuous and we can rewrite the integral where r j is small enough, independent of ε, and Q r j (y j ) is a cube with size r j and center y j . This obviously brings us to integral (1). Observe that at this stage we cannot replace the second variable of g with y j , due to rapid oscillation for ≈ 0. Further assume that ∂Ω ∩ Q r j (y j ) is so flat that at -scale it is approximately 2 -away from its tangent plane; this requires a C 2 -graph locally, but our techniques/proofs work for C 1 -surfaces. The idea now is to cover the boundary ∂Ω ∩ Q r j (y j ) with finite many, and small enough cubes, so that each part of the surfaces is as flat as we want to. In particular we have where ε i is to be chosen small enough. This part is slightly more delicate, and needs extra care. We would also prefer to take ε i = √ ε. The trouble is that this might not work, as the points are changing, and we may loose lots of information about the behavior of the surface ∂Ω ∩ Q ε i r j (y i,j ). Indeed, an scaling of the integral gives ∂Ω∩Q (ε i /ε)r j (y i,j ) g(y j , y) dσ y , and the question is whether this integral will converge to the mean Q + 1 g(y j , y)dy. This would happen exactly when the normal vector of the surface ∂Ω at y i,j is irrational. In other words the surface ∂Ω ∩ Q (ε i /ε)r j (y i, j ) (which is almost a plane) will foliate the n-dimensional torus, as ε tends to zero, provided we have chosen ε i /ε → ∞, and the normal vector of the surface ∂Ω at y i, j is irrational. In particular g(y j , y) dσ y = g(y j ). If the set of points on ∂Ω with rational direction has zero surface measure (this may fail if there are so-called flat spots with rational directions) then we could cover the boundary with small cubes centered at points with irrational normals. From here combining (2)- (3) we obtain It is apparent that if there is a flat portion of the boundary with rational normal, then a full foliation cannot take place at such portions. Consequently, the resulting (mod 1) surface will be a close simple curve over the n-dimensional torus. Hence different sequence of may give different shifts of this close loop, and hence the possibility of a parameter family of values, in between [g * (y, ν y ), g * (y, ν y )]. Remark 1.2. A word of caution: As ε tends to zero, one may obviously rescale the integral, by a change of variables, as we did in our heuristic explanations above. This scaling makes the surface to be scaled (we assume the origin is NOT on the surface) so as to disappear in the limit. This naturally would make it impossible to compute the limit integral. 
However, the periodicity of the function g in its second variable, implies that we can bring back the surface so as it passes through the fundamental cube Q + 1 using (mod1) argument. In particular this means that despite the integration is on a fixed surface, the surface itself will start jumping forth and back due to the variable ε. It would be a good idea for the reader to consider simple examples such as integration over a line segment in the plane, by varying the normal direction of the plan from rational to irrational. 1.3. Plan of the paper. In the next section we shall introduce all definitions and notations. We take care of some technicalities in Section 3. We shall formulate our main results concerning surface integration in Section 4, and its generalization to other type of functions, such as layered-densities, almost periodic functions will appear in Section 5. Several interesting applications to PDE are mentioned in Section 6, and several Examples are also given in Section 7. Averaging and Ergodic theory. Definition 2.1. Let ν be a vector in R n , z a fixed given point, and Q r (z) = {x : |x i − z i | < r}. Let g(x, y), be integrable in both variables over the plain {(y − z) · ν = 0}, and periodic in y-variable. We define Later we shall consider cases where ν = ν z , is the normal vector at z on a given surface Γ. We also define the average of g as Henceforth we shall assume all vectors have length one, unless otherwise stated. We shall also without loss of generality assume that the surface is orientable, and fix a consistent choice of normal in clockwise direction (this choice is obvious if Γ is the boundary of a domain). It should be noted that if the set {(y − z) · ν = 0}(mod 1) foliates the cube Q + 1 then the integral converges to the average g(z), and this happens exactly when ν is irrational direction (see Lemma 3.2). When the set {(y − z) · ν = 0}(mod 1) does not foliate the cube Q + 1 (or the n-dimensional torus) then we shall get the limit as an integral over a closed loop flow on the torus. In particular the value of the integral exists, and depends on ν and . For different values of this loop translates over the torus and will give rise to supremum respectively infimum values of the integrals, as defined above by g * , g * , respectively. Remark 2.1. It should be remarked that in the definitions of the above averages g * , g * we could have replaced for any domain D containing the point z. The reason for this is that the set T = {(y − z) · ν = 0}(mod 1) either foliates the whole n-dimensional torus, or it is a closed simple curve on the torus. In either case, when tends to zero, the piece of plane {(y − z) · ν = 0} ∩ D will have the same effect as T. Since the direction of the normal of the plane {(y − z) · ν = 0} will play a crucial role in our analysis, we introduce proper definitions well-known in Ergodic theory. For readers' convenience we also prove some of these well-known results here. for the surface measure σ Γ . Here ν x denotes the normal to Γ at x. Technical preliminaries In this section we shall recall some standard facts from Ergodic theory, and also state and prove some averaging results that will be needed for the proof of the main theorems. The first lemma is a version of Weyl's Lemma and is well-known fact about uniform distribution. That (1, ν 1 , · · · , ν d ) is an irrational direction, means that at least one of the numbers ν j is irrational. In this paper d = n − 1 is the only case that is used. to be the sum of Dirac delta functions. 
First, let us consider the case h = h m = e 2iπmt and m 0. By the assumption there is ν l , which is an irrational number, and hence since mν l cannot be an integer i.e. 1 − e 2iπmν l 0. Now by Fourier expansion (extending h as 1-periodic function), for some b m ∈ R, which in combination with (4) results in Lemma 3.2. Let g, g * , g * , and g be as in Definition 2.1. Then the following hold: (ii) The following inequalities always hold Proof. First note that we can replace the integral in the definition of g * , and g * by with N positive integers, and N → ∞. This simplifies the matter slightly. We may without loss of generality assume ν = (1, ν ). Since ν is irrational direction, Π(mod 1) will foliate the unit cell, according to standard foliation theory and results in integral average over Q + 1 . To see this we assume (by using periodicity) that x ∈ Q + 1 and that the plane Π cuts the x 1 -axis (or some other axis). let a 0 be the point of intersection between the x 1 -axis and this plane, so that Π(ν, x) = {x : x 1 = a 0 − ν · x }, and (1, ν ) is irrational (by the assumption), and 0 ≤ a 0 < 1 due to periodicity of g(x, y) in y. Having fixed ν, we shall now use the notation Π t = Π(ν, te 1 ), the plane with normal ν through the point (t, 0 ). Let N > 0 be a large number, and I N = {k ∈ Z n−1 : |k i | ≤ N}. Then by periodicity of g(x, ·) Set now w(a k ) = w(x, a k ) := Π a k g x, y dσ y . Then Since ν is irrational, from Lemma 3.1 we conclude whereΠ t = Π t ∩S 0 (mod 1). Therefore lim r→∞ {(y−x)·ν=0}∩Q r g x, y ε dσ y = g(x) which is independent of ε. Now we have conclusion (i) from the definition of g * (x, ν) and g * (x, ν). To prove (ii) let us suppose that ν is a rational direction, otherwise the conclusion follows by the equality in (i). Since ν is rational, it is not hard to see that the restriction of g(x, y) on this hyperplane will be periodic with period T, say; for g(x, y/ε) the period will be εT. In particular the integral can be seen as integration over a spiral-like plane on the torus (a closed loop). The two dimensional case can be illustrated by a curve on the torus that loops over itself. In particular the limit integral (w.r.t. r) in the definition of g * , g * can be replaced by An observation here is that the supremum value of the mean integral w.r.t. actually is taken for some value 0 , and naturally for many other values, due to periodicity in . One can think of the situation as parallel planes in R n with fixed distance 1 from each other are moving, simultaneously by keeping a fixed distance between them, in the orthogonal direction of the plane (mod 1), when changes. Obviously, these planes foliate Q + 1 once ranges [0, 1). From here one deduces In our analysis of the limit behavior of the integral (1) we will use Definition 2.1 in a slightly different way. Next lemma will give us a hint in that direction. Lemma 3.3. Let g be as before, Π(ν, z) = {(y − z) · ν = 0}, and z ∈ Γ. Then for any R ε ∞ (as ε 0) we have and lim inf ε→0 Π(ν,z)∩Q εRε (z) g(z, y/ε)dσ y ≥ g * (z, ν). We leave the the reader to verify this simple fact. We shall next make the previous lemma even more general by letting the plane be replaced by very smooth surfaces. Let Γ be a smooth surface, with module of continuity τ = τ Γ for its C 1 norm. Define Lemma 3.4. Let g be as before, and Γ a smooth C 1 surface, with module of continuity τ = τ Γ for its C 1 norm. Let further ρ ε = M where M is as in (6). Then, for z ∈ Γ, and η > 0 there exists ε ν z ,η such that for all ε ≤ ε ν z ,η we have Proof. 
Set z ε = z ε (mod 1), and Γ z := {x : (x + z) ∈ Γ}. Then we have If we set Π ν z = {x : x · ν z = 0}, then by continuity of g(z, .), and that for g z, z ε +ỹ − g z, z ε + y dσ y . Hereỹ = y + ν z |y|s ε with s ε = τ(εM ε ). Estimating I we obtain |I| ≤ Cτ g (|y|s ε ) ≤ Cτ g (M ε s ε ) < η/2, for ε small enough. Here τ g is the module of continuity for g From here, and Lemma 3.3 the statements in the lemma follows, provided we have taken ε small enough depending on ν z . Surface integrals of oscillating functions Theorem 4.1. Let Γ be a C 1 surface in R n , and g(x, y) be integrable in x-variable over Γ, and continuos and 1-periodic in y in R n . Then Moreover if Γ satisfies IDDC then an effective limit exists and we have lim ε→0 Γ g y, y ε dσ y = Γ g(y)dσ y . Proof. We shall prove the limit superior estimate only. The limit inferior estimates follows in a similar way. The last statement follows in an obvious manner. Let us fix a small positive constant η > 0, to be decided later. Without loss of generality, we may assume Γ is bounded, and g(x, y) is uniformly continuous function on Γ × R n . As η 0 was arbitrary we have the two main estimates in the statement of the theorem. Putting these together along with the IDDC we shall have the third statement. Indeed, due to IDDC we have that the set {x ∈ Γ : ν x rational } has zero surface measure, and the integral over this set is zero. For the rest of Γ we have irrational normals only, and hence the full averaging (Lemma 3.2) takes place and we obtain g * = g * . The case of Layered densities, almost periodicity, and ergodicity In this section we shall deduce, similar results of that in Theorem 4.1 while replacing the periodicity assumption with layered materials/densities, almost-periodic and ergodic case. Nevertheless we shall only mention the results without deepening much into the analysis. The reader may easily verify the statements. 5.1. Layered Materials. If we assume the function g(x, y) is independent of (y k+1 , · · · , y n ) and is 1-periodic in (y 1 , · · · , y k ) then one may naturally obtain results reminiscent of that of layered materials in homogenizations for PDE. Indeed one can obtain the following obvious result: If the surface Γ does not have any flat parts in directions e i (i = 1, · · · , k), then the averaging takes place. Almost Periodic Case. In the case of almost periodic functions (see [Bohr]) one obtains similar results by replacing the average integral g(x) withĝ (y) = lim r→∞ Q + r g(x, y)dy. The obvious details are left to the reader. Ergodic Case. The results of our main theorem can be generalized further to the case of functions with ergodic properties (see [JKO]). Indeed, if we assume g(x, y, ω) is defined on Γ × R n × D, for some D ⊂ R n and that g(x, y, ω) is statistically homogeneous field in (y, ω)-variable, i.e., g(x, y, ω) = h(x, T y ω) for some random variable h(x, z) (random w.r.t. second variable) with underlying probability space and an ergodic n-dimensional dynamical system T y . Hence h(x, T y ω) admits an averaging One may now deduce similar results as that in Theorem 4.1 with g(z) = Exp(g(z, ·, ·)). 6. Applications to Partial Differential Equations 6.1. Dirichlet problem: Elliptic Case. Let Ω be a bounded domain in R n , with piecewise smooth boundary. Let g(x, y) be as before, and f (x, y) have the same property as g. For simplicity, we shall assume f, g are continuous in both variables (actually L 2 (∂Ω) would suffice). 
Let further Γ 0 ⊂ Ω be a piecewise C 1 -curve, and define Then the following result hold. Theorem 6.1. For a solution u ε of (P ε ), we define (in Ω) Then the following hold (in the weak sense): Moreover for any sequence of u ε , there is a subsequence converging to a function u in Ω, satisfying −µ * ≤ ∆u ≤ −µ * in the weak sense in Ω, Proof. By Green's and Poisson's representations we have where P and G are Poisson respectively the Green functions of the domain. By Theorem 4.1 which implies the first two statements in the theorem. Now the last statement follows from the above inequalities, in an obvious way. To obtain an effective limit, in the above theorem, one needs to consider domains satisfying IDDC. In particular, using the above theorem and Lemma 3.2 (ii) we have the following result. Corollary 6.2. Let ∂Ω and Γ 0 satisfy IDDC, and u be a solution to problem (P ε ). Then, u ε converges to a function u in Ω, satisfying The general nature of the method employed here suggests that we can apply this to situations where integral representations are possible. This can naturally go beyond the Dirichlet problem, or the Laplace operator, and as general as to systems, and equations of higher orders. We state explicitly that once one has an integral representation of any function, through kernel functions, then one may conclude similar statement as that of Theorem 6.1 and Corollary 6.2. We leave it to the reader to apply this to their favorite scenarios. Here one may consider different cases, such as Ω = D × (0, T), with D a domain in R n , or Ω ⊂ R n × R and time varying. Also Γ 0 can be take to be either time independent or varying in time. Now a similar argument, using Poisson and Green representations, as in Section 6.1 can give us various type of results. In the case Ω = D × (0, T) one obtains same type of results as that of Theorem 6.1. To obtain results of the nature of Corollary 6.2 one needs • either to assume Ω = D × (0, T), and D has IDDC condition along with f , and g being independent of their fourth variable, i.e. f ε (x, t) = f (x, x/ε, t), g ε (x, t) = g(x, x/ε, t) • or Ω and Γ 0 , both have IDDC condition in R n+1 . 6.3. Neumann problem. As mentioned at the end of the previous section, one can apply the technique of this paper to far reaching scenarios and problems, involving integral representations. The Neumann problem naturally fits into this category through Fredholm's alternative, and an integral representation. Indeed, let u ε be a solution to the problem with g satisfying compatibility condition ∂Ω g(y, y/ε)dσ y = 0 for all ε > 0. Then it is well-known that where F is the Fundamental solution for Laplace operator (or the corresponding operator), and φ solves the Voltera integral equation of second kind, i.e., As ε tends to zero, φ ε tends (weakly in L 1 (∂Ω)) to a limit φ 0 solving in a weak sense over the boundary of Ω. This happens exactly when the boundary of Ω has IDDC. More accurately, the kernel of the bounded operator acting on L 1 (∂Ω) space, is upper semi-continuous and has a unique element. In particular lim ε ker(T ε ) ⊂ ker(T 0 ), where By uniqueness of the solutions to the Fredholm operator this kernel must have only one element, and hence φ ε → φ 0 , with φ 0 ∈ Ker(T 0 ). In other words φ 0 solves the Voltera equation above for the function g(x). From here it follows that u ε converges to u 0 = ∂Ω F(x, y)φ 0 (y)dσ y , with φ 0 solving φ 0 (x) = ∂Ω ∂ ν F(x, y)φ 0 (y)dσ y + g(x). 
Hence u 0 solves the averaged/effective Neumann problem A different way of analyzing this is to consider the solution of the Neumann problem in the weak form Ω ∇u ε (y) · ∇φ(y)dy = ∂Ω g(y, y/ε)φ(y)dσ y , where φ is a test function in a reasonable class. Letting ε → 0 we see that the integrals converge to Ω ∇u 0 (y) · ∇φ(y)dy = ∂Ω g(y)φ(y)dσ y . The latter in turn solves the Neuman problem with g(y) as the amount of flux at each boundary point. Examples and illustrations The behavior of the limit integrals in 2.1 are directly related to foliation of the fundamental cell Q + 1 . To illustrate this (in R 2 ) consider a sequence p j = (− √ 2/j, 1). For a ∈ [0, 1), let l a j be the line through the point (0, a) and orthogonal to p j . Then, due to the fact that p j is rationally independent Lemma 3.2 implies lim ε→0 l j ∩Q ρε g(x, y/ε)dσ y = g(x). On the other hand l j → l 0 which is a line through (0, a) and parallel to the x 1 -axis. For the limit of the average for this line we then have lim ε→0 l 0 ∩Q ρε g(x, y/ε)dσ y = g(x, ·, a). Let us set L a j = l a j ∩ Q ρ ε 0 /ε 0 (mod 1). Then one readily verifies that for ε 0 fixed, and very small where M ε 0 = ρ ε 0 /ε 0 . In particular, for j large L j will never foliate the unit cell, and hence it is impossible to approximate the integral over ∂Ω by any covering, however small. 7.1. Example 1. We consider the case when g is periodic only in x 1direction and when the domain is a slab with a unit normal direction ν. For a ν ∈ S n−1 , set Ω = {x : −R 1 < x · ν < R 2 }. Let g(x) = g(x 1 ) be independent of (x 2 , · · · , x n ) and 1-periodic, i.e. g(x + k) = g(x 1 + k 1 ) = g(x 1 ) = g(x) for k ∈ Z n . Now let u ε be a solution of the following equation We discuss three possible limits of u ε whose homogenized equation can be found. Namely, Figure 2). It turns out that u * , u * , and u don't follow simple homogenization whose boundary data is a simple average g. It means there are nontrivial homogenization processes for each different limits. For R 0, set g * (R, e 1 ) = max g, g * (R, e 1 ) = min g, If R = 0, let g * (0, e 1 ) = g * (0, e 1 ) = g(0, e 1 ) = g(0). Proposition 7.1. For the particular choice ν = e 1 , in equation (10), the limit functions u * , u * , and u will satisfy Proof. Select ε i such that g x 1 ε i = max g and then u ε i = u * since all u ε i have the same boundary values. Hence it is clear that lim sup ε→0 u ε = u * . Similar argument can be applied to u * and u to have the conclusion. Letting tend to zero and using first Lemma 3.3, and then Lemma 3.2 we shall have lim →0 x·ν=R 2 P(x − y)g y 1 ε dσ y = x·ν=R 2 P(x − y)gdσ y , which implies the conclusion. The next interesting question is to find the limit equation for the general converging sequence. In the following lemma, we will show there is a converging subsequence whose limit takes any value between the supremum of g and its infimum on x · e 1 = R 2 . Proposition 7.3. For any A such that min g ≤ A ≤ max g, there is a sequence {u ε i } converging uniformly to u such that its limit u satisfies in Ω, u(x) = M on x · e 1 = −R 1 , u(x) = A on x · e 1 = R 2 . Proof. There are ε i → 0 such that g x 1 ε = A since g is 1-dimensional. Therefore all u ε i have the same boundary values, implying u ε i = u(x). 7.2. Example 2. In this example we confine ourselves to R 2 . We consider the case when g is periodic only in x 1 -direction and the domain Ω is convex with two parallel flat parts of boundaries, orthogonal to ν = e 1 . 
For exactness we consider the following stadium like domain Ω = x : |x 1 | < R, |x 2 | ≤ 1 + R 2 − x 2 1 . Let g(x) = g(x 1 ) be 1-periodic, and independent of x 2 -direction. Now let u ε be a solution of the following equation then u * and u * (see (11)) are sub-and super-solutions of (18) respectively. In general, they are not solutions. In the next result we state that the homogenized boundary data may not be continuous even though g(x) is smooth. Proposition 7.4. There is a a smooth 1-periodic function g(x 1 ) of one variable x 1 and a subsequence of solutions {u ε i } to equation (18), converging to u such that u is not continuous on ∂Ω. Proof. If x 1 ±R, then x = (x 1 , x 2 ) ∈ ∂Ω satisfies IDD condition and then any limit will satisfy the boundary condition u(x) = g. Now we select ε i so that g x 1 ε i = max g g. Then u ε i has a converging subsequence to u which will be discontinuous at x = (R, 1).
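As a numerical complement to the examples above, and following the suggestion in Remark 1.2 to consider line segments in the plane, the following Python sketch (not part of the paper) illustrates the key dichotomy: averages of a 1-periodic function g over a long segment approach the mean of g over the unit cell when the segment's normal direction is irrational, but converge to a direction-dependent value when the normal is rational. The particular test function and the directions chosen here are arbitrary illustrations.

```python
import numpy as np

def line_average(normal, g, length=2000.0, n_samples=400000):
    """Average of g over a long segment through the origin orthogonal to `normal`."""
    nx, ny = normal
    t = np.array([-ny, nx]) / np.hypot(nx, ny)   # unit tangent of the line {x . normal = 0}
    s = np.linspace(-length / 2, length / 2, n_samples)
    pts = np.outer(s, t)                          # points on the segment
    return g(pts[:, 0], pts[:, 1]).mean()

# A 1-periodic test function and its exact mean over the unit cell.
g = lambda x, y: np.sin(2 * np.pi * x) ** 2 + np.cos(2 * np.pi * y) ** 2
cell_mean = 1.0                                   # 1/2 + 1/2

print("irrational normal (1, sqrt(2)):", line_average((1.0, np.sqrt(2.0)), g))  # close to 1.0
print("rational normal   (0, 1):      ", line_average((0.0, 1.0), g))           # close to 1.5
print("mean over unit cell:           ", cell_mean)
```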
7,245
2012-01-31T00:00:00.000
[ "Mathematics" ]
Cation Involvement in Telomestatin Binding to G-Quadruplex DNA The binding mode of telomestatin to G-quadruplex DNA has been investigated using electrospray mass spectrometry, by detecting the intact complexes formed in ammonium acetate. The mass measurements show the incorporation of one extra ammonium ion in the telomestatin complexes. Experiments on telomestatin alone also show that the telomestatin alone is able to coordinate cations in a similar way as a crown ether. Finally, density functional theory calculations suggest that in the G-quadruplex-telomestatin complex, potassium or ammonium cations are located between the telomestatin and a G-quartet. This study underlines that monovalent cation coordination capabilities should be integrated in the rational design of G-quadruplex binding ligands. Introduction The formation of G-quadruplex folds by telomeric DNA is thought to play a role in telomere regulation. It has been shown that G-quadruplex ligands binding specifically to the telomeric G-quadruplex structure effectively alter telomere capping and cause the senescence or apoptosis of cancer cells [1][2][3][4][5]. A variety of ligands have now been described as Gquadruplex binders, but a key issue in ligand design is often the specificity for G-quadruplexes over duplex sequences [4,[6][7][8]. Identifying binding modes that make a ligand a specific and highly active G-quadruplex binder is crucial for the rational design of novel molecules. Telomestatin ( Figure 1) is one of the most emblematic G-quadruplex ligands. The molecule was first extracted from Streptomyces anulatus 3533-SV4 [9]. It is highly specific for G-quadruplexes, with no significant binding to duplexes [10][11][12]. Telomestatin was found to effectively inhibit the DNA binding of telomere-associated proteins such as telomerase [13], POT1 and TRF2 [14], and even Topo III in ALT cell lines [15]. It therefore induces telomere shortening and apoptosis [16][17][18][19] not only via telomerase inhibition but also via telomere uncapping, and therefore has potential activity against many cancer cell types. Telomestatin binds to G-quadruplexes, among which is the human telomeric G-quadruplex, by external stacking [12]. One G-quadruplex unit can therefore accommodate two telomestatin ligands, one on each end. A recent modeling study showed that telomestatin has a tendency to capture a potassium ion, either from the G-quadruplex itself or from the solution [20]. Here we show that mass spectrometry provides experimental evidence for the accommodation of one extra cation when a telomestatin molecule is bound to a G-quadruplex. This will be illustrated for three typical Gquadruplexes: the tetramolecular [TG 4 T] 4 quadruplex, the 4-repeat telomeric sequence (T 2 AG 3 ) 4 , and the Pu22myc promoter sequence GAG 3 TG 4 AG 3 TG 4 A 2 G. The typical folds adopted by each of these three G-quadruplexes in ammonium acetate were studied previously [21][22][23] and are summarized in Figure 1. Figure 1: Chemical structure of telomestatin and folding pattern of the three G-quadruplexes studied here. [TG 4 T] 4 adopts a parallel fold and can incorporate three ammonium ions between its four G-quartets [21], the 4-repeat telomeric sequence (T 2 AG 3 ) 4 adopts an intramolecular antiparallel fold in ammonium acetate and incorporates up to two ammonium ions [22], and the Pu22myc promoter sequence GAG 3 TG 4 AG 3 TG 4 A 2 G adopts a predominantly parallel fold in ammonium acetate and incorporates two ammonium ions [23]. Experimental Section 2.1. 
Materials. All oligonucleotides were purchased from Eurogentec (Seraing, Belgium), solubilized in water doubly distilled in house, and the 400 μM stock solutions were stored at −20 • C. The solvents used include methanol (absolute, HPLC grade, Biosolve, Valkenswaard, The Netherlands), bidistilled water, and aqueous ammonium acetate (5 M stock solution from Fluka, diluted with bi-distilled water). KCl for the evaluation of the complexation of telomestatin alone was puriss, p.a., ≥99.5% (T) (Fluka). G-quadruplexes were formed by annealing (heating the oligonucleotides for 5 minutes at 85 • C, followed by slow cooling to room temperature) in 150 mM ammonium acetate. The G-quadruplexforming oligonucleotides were dTG 4 T (annealed at 200 μM single strand to form 50 μM tetramolecular G-quadruplex); the telomeric sequence (T 2 AG 3 ) 4 and the Pu22myc promoter sequence GAG 3 TG 4 AG 3 TG 4 A 2 G (annealed at 50 μM single strand to form intramolecular G-quadruplexes). Telomestatin was isolated and purified as described elsewhere [9,24] to obtain a 1 mM stock solution in DMSO. For the electrospray mass spectrometry analysis of the complexes, the final injected solutions were 5 μM in G-quadruplex and 5 to 10 μM in telomestatin (only 10 μM results are shown), in 80/20 (v/v) aqueous ammonium acetate (150 mM)/methanol. Mass Spectrometry. Electrospray mass spectrometry experiments were performed on a Q-TOF Ultima Global (Waters, Manchester, UK). The spectra of the intact G-quadruplexes and their noncovalent complexes with telomestatin were recorded in the negative ion mode (capillary voltage = −2.2 V, source and desolvation temperatures = 70 • C, cone = 100 V, RF Lens1 Energy = 45 V, source pirani pressure = 3.94 mbar, collision energy = 10 V), smoothed (mean function, 3 × 20 channels) and subtracted (polynomial order 99, 0.1% below curve). The spectra of telomestatin in the absence of G-quadruplex were recorded in the positive ion mode (capillary voltage = +2.8 V, source and desolvation temperatures = 80 and 100 • C, resp., cone = 100 V, RF Lens1 Energy = 50 V, source pirani pressure = 3.33 mbar and collision energy = 7 V) and were not subjected to smoothing or background subtraction. Calculations. For the [telomestatin + cation] binary complexes, the ammonium, potassium, and sodium cations were manually docked within the telomestatin ring, and the resulting structures were optimized using density functional theory (DFT) with the hybrid functional B3LYP and the 6-31G(d,p) basis set. For the ternary complexes between [telomestatin + cation + one G-quartet], the telomestatin was manually docked on top of an optimized structure of a G-quartet coordinated with the cation (ammonium, potassium, sodium). The ternary complexes were then optimized using DFT B3LYP at the 6-31G(d,p) level. All electronic structure calculations were performed using the Gaussian 03 rev. D.02 software suite [25]. Comparison with a larger basis set (6-311 + G(d,p)) was performed for one of the complexes (G-tetrad + K + telomestatin), and the results were similar in terms of both energies (0.8 kcal/mol) and geometries (RMSD 0.16 Å ). 6-31G(d,p) basis set was therefore used for all calculations. Other hybrid functional BHandHLYP and new meta-GGA hybrid MPWB1K [26] have also been tested for comparison with B3LYP. Results and Discussion When operated in soft source conditions, electrospray mass spectrometry allows detecting intact noncovalent complexes [27][28][29][30]. 
In the analysis of quadruplex-ligand complexes, it therefore allows determining the number of strands, the number of ligands, and the number of cations in each complex. Electrospray mass spectrometry of nucleic acid noncovalent complexes is typically performed in ammonium acetate solution in order to obtain clean spectra [31]. Ammonium ions present in the counter-ion shell around the phosphates are lost during the final stages of desolvation in the electrospray source, even in soft conditions (low acceleration voltages). In contrast, ammonium ions bound sufficiently tightly to the G-quadruplex, such as the ammonium ions trapped between the G-quartets, persist at higher acceleration voltages in the source than the nonspecifically bound ones [21]. Figure 2 shows the electrospray mass spectra of three typical G-quadruplexes, (a) [TG4T]4, (b) the telomeric sequence (T2AG3)4, and (c) the Pu22myc promoter sequence GAG3TG4AG3TG4A2G, and of their 1:1 complexes with telomestatin (d-f, respectively). The injected mixtures are 5 μM in each G-quadruplex and 10 μM in telomestatin, in 80/20 aqueous ammonium acetate (150 mM)/methanol, and the spectra were recorded using soft source conditions to preserve the specifically bound ammonium ions. The free [TG4T]4 G-quadruplex (Figure 2(a)) contains three ammonium ions: the 5− charge state is found at m/z = 1500.22, corresponding to [Q·(NH4+)3]5−. The major peaks of the telomeric (Figure 2(b)) and Pu22myc (Figure 2(c)) G-quadruplexes at charge state 5− correspond to the intramolecular G-quadruplex with two ammonium ions, at m/z = 1521.22 for the telomeric sequence. The Δ(m/z) between a G-quadruplex and its complex with one telomestatin, with no change in the number of ammonium ions incorporated, is expected to be 582.5/5 = 116.5. In contrast, the observed Δ(m/z) between the major peak of each quadruplex and the major peak of its complex with one ligand is 119.9 (compare the free and bound spectra in Figure 2). This corresponds to the addition of one telomestatin molecule and one extra ammonium ion, and the subtraction of one proton for the charge balance (119.9 = (582.5 + 18 − 1)/5). The complex with one telomestatin ligand therefore systematically contains one more ammonium ion than the corresponding unbound G-quadruplex. This extra ammonium ion is lost when the acceleration voltage is increased. In our previous report on the MS detection of telomestatin binding to telomeric DNA [11], we missed this "extra ammonium" for two reasons. Firstly, for the 3.5-repeat telomeric sequence studied previously it is difficult to preserve two inner ammonium ions even in soft conditions. Secondly, soft conditions could not be used because a long duplex had to be detected simultaneously with the G-strand, and the electrospray source conditions were chosen as a compromise. Electrospray mass spectrometry is also a powerful tool for analyzing caged supramolecular complexes such as crown ethers bound to cations [32][33][34][35]. To probe whether telomestatin is able to coordinate a cation already in the absence of G-quadruplex, we used ESI-MS in the positive ion mode on telomestatin solutions. Figure 3(a) shows the spectrum obtained with telomestatin diluted in bi-distilled water. The signal-to-noise ratio of all peaks is weak, indicating low protonation and cationization efficiencies.
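The Δ(m/z) bookkeeping used above to identify the extra ammonium ion can be reproduced with a few lines of Python. The masses below are the nominal values quoted in the text (telomestatin 582.5 Da, NH4+ 18 Da, proton 1 Da); they are used for illustration of the arithmetic, not as exact monoisotopic masses.

```python
# Nominal masses (Da) as quoted in the text
TELOMESTATIN = 582.5
NH4 = 18.0
PROTON = 1.0
CHARGE = 5          # the 5- charge state discussed in the text

# Ligand binding with no change in ammonium content
delta_no_extra_nh4 = TELOMESTATIN / CHARGE

# Ligand binding plus one extra NH4+, minus one proton for charge balance
delta_with_extra_nh4 = (TELOMESTATIN + NH4 - PROTON) / CHARGE

print(f"expected delta m/z (no extra NH4+):  {delta_no_extra_nh4:.1f}")    # 116.5
print(f"expected delta m/z (one extra NH4+): {delta_with_extra_nh4:.1f}")  # 119.9
```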
Surprisingly, we found that the major peak was a doubly charged ion at m/z = 686.1 (base peak), whose isotopic distribution matches with that of a complex between two telomestatin ligands and one lead ion adduct. The fragment ion spectrum shows the loss of one telomestatin, and the isotopic distribution of the resulting [Telomestatin + Pb] complex confirms the presence of lead. Traces of lead come from the purification of telomestatin from Streptomyces anulatus 3533-SV4 [24]. Another weak doubly charged peak is tentatively assigned to a complex between two telomestatin and two H 3 O + ions, and the fragment ion spectrum shows only the singly protonated telomestatin. We then doped the solution with either ammonium acetate or with potassium chloride, in substoichiometric amounts, to probe if telomestatin was able to coordinate monovalent cations such as those typically coordinated to G-quartets. In the presence of ammonium ions, the lead complex totally disappears and the complex [telomestatin + NH + 4 ] is detected at m/z = 600. In the presence of potassium ions, the complex [telomestatin + K + ] is detected at m/z = 621, but the lead complex has not disappeared completely. All experimental results suggest that telomestatin has significant affinity for monovalent cations like the ammonium ion, and this could influence its binding mode to the G-quadruplex DNA. We performed DFT calculations in order to ascertain the possible coordination geometries of the monovalent cations to telomestatin. The geometries of the [telomestatin + cation] binary complexes are shown in Figures 4(a)-4(c), and the geometries of the ternary complexes between [telomestatin + cation + one G-quartet] are shown in Figures 4(d)-4(f). In the isolated [telomestatin + cation] complexes, all cations are coordinated in the plane of the telomestatin. In the ternary complexes with the Gquartet, the cation clearly moves towards the G-quartet. The structures of the complexes with K + and NH + 4 are similar, with the cation coordinated midway between the telomestatin and the G-quartet. Sodium, on the other hand, sits closer to the G-quartet than to the telomestatin. The mode of cation coordination to the telomestatin-G-quartet system is similar to the coordination mode already described for successive G-quartets [36]. In terms of coordination geometries, potassium tends to sit between G-quartets while sodium tends to fit in the middle of one G-quartet because it is smaller. K + and NH + 4 have similar ionic radii [37,38] and therefore behave similarly, and the same trend is observed for our telomestatin-Gquartet complex. The optimized geometries are similar for all functionals tested (B3LYP, BHandHLYP, MPBW1K) (see Figure S1 in Supplementary Material available online at doi: 10.4061/2010/121259), with the cation in the plane of telomestatin in the absence of G-quartet, and between telomestatin and the G-quartet in the ternary complex. However, the functionals have a greater influence on the computed interaction energies. The root mean square distances (RMSDs) for two-by-two comparisons of hybrid functional, and the interaction energies of NH + 4 , K + and Na + with telomestatin alone, telomestatin + one G-quartet are given in supplementary Tables S1 and S2, respectively. Conclusion We have therefore shown that telomestatin readily coordinates monovalent cations such as K + and NH + 4 , and that telomestatin retains this cation while binding to Gquadruplexes. 
Conclusion
We have therefore shown that telomestatin readily coordinates monovalent cations such as K+ and NH4+, and that telomestatin retains this cation while binding to G-quadruplexes. The observed stoichiometry and the calculations are consistent with a cation trapped midway between the telomestatin and the G-quartet. Telomestatin therefore acts as an analog of a G-quartet. This study underlines that monovalent cation coordination capabilities should be integrated into the rational design of G-quadruplex binding ligands.
3,091.2
2010-06-16T00:00:00.000
[ "Chemistry" ]
Coherent photonic Terahertz transmitters compatible with direct comb modulation
We present a novel approach to coherent photonic THz systems supporting complex modulation. The proposed scheme uses a single optical path, avoiding the problems of current implementations, which include phase decorrelation, a 3-dB power loss, and the need for polarization- and power-matching circuits. More importantly, we show that our novel approach is compatible with direct modulation of the output of an optical frequency comb (i.e., it does not require demultiplexing two tones from the comb), further simplifying the system and enabling an increase in the transmitted RF power for a fixed average optical power injected into the photodiode.

S1 Two-line-modulation single-path transmitter
S1.1 RF generation
As shown in Fig. S1, in the single-path transmitter, two optical carriers, E_1 and E_2, are modulated with the same SSB signal:

E_2(t) = A_SSB(t) exp(jω_2 t), (S1)

with

A_SSB(t) = A_C + A_mod(t) exp(j(Ω_IF t + θ_mod(t))), (S2)

where A_C is a constant, A_mod(t) and θ_mod(t) are the amplitude and phase of the baseband complex modulation, and Ω_IF is an intermediate frequency (IF). When E_1 and E_2 are combined and injected into a PD, the generated photocurrent is proportional to the square modulus of the electric field:

I_PD(t) ∝ |E_1(t) + E_2(t)|². (S3)

The I_PD(t) term at the RF (which is assumed to be in the THz range in the main manuscript) is thus

I_RF(t) ∝ 2|A_SSB(t)|² cos((ω_2 - ω_1)t). (S4)

For the Matlab simulations we use the term 2|A_SSB(t)|², which is the complex baseband representation of I_RF(t). The expansion of this term (which we shall call h(t) hereafter for the sake of brevity) gives

h(t) = 2A_C² + 2A_mod²(t) + 4A_C A_mod(t) cos(Ω_IF t + θ_mod(t)). (S5)

Substituting equation (S5) into equation (S4), one can see that the I_RF(t) signal contains three terms: (a) the data-carrying signal, with an amplitude of h_mod(t) = 4A_C A_mod(t); (b) the beating of the two carriers, with an amplitude of 2A_C²; and (c) the beating of the two sidebands, with an amplitude of 2A_mod²(t). Term (c) is the signal-signal beat interference (SSBI), which can distort the useful signal if appropriate mitigation techniques are not employed.

S1.2 RF-energy normalization
To compute the BER curves, the RF signal is normalized in terms of energy.
Figure S1. Spectrum before and after photodetection of 2-line SSB-C modulation.

S1.3 Average-photocurrent normalization
For average-photocurrent normalization, the RF signal is divided by the average of I_PD(t).

S2 Heterodyne transmitter
S2.1 RF generation
In the heterodyne transmitter, only one optical carrier, E_1, is modulated, whereas the other, E_2, is kept unmodulated to act as a local oscillator. The generated photocurrent in this case is again proportional to |E_1(t) + E_2(t)|², and the data-carrying RF term has an amplitude A(t) = 2A_C A_mod(t). For the Matlab simulations we use the complex baseband representation of I_RF(t), which is given by A(t) exp(jθ_mod(t)).

S2.3 Average-photocurrent normalization
For average-photocurrent normalization, the heterodyne signal is likewise divided by the average of I_PD(t), where it is assumed that A_C² = ⟨A_mod²(t)⟩.
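To make the decomposition in (S5) concrete, here is a minimal numerical sketch of ours, in Python rather than the authors' Matlab; all waveform parameters below are illustrative assumptions.

import numpy as np

# Sketch: the complex-baseband term h(t) = 2|A_SSB(t)|^2 and its three
# beating components. Parameter values below are illustrative only.
fs = 1e9                                   # sample rate (Hz), assumed
t = np.arange(4096) / fs
A_C = 1.0                                  # carrier amplitude, assumed
Omega_IF = 2 * np.pi * 50e6                # IF (rad/s), assumed
A_mod = 0.3 * (1 + 0.5 * np.cos(2 * np.pi * 1e6 * t))  # toy envelope
theta_mod = np.zeros_like(t)               # toy phase modulation

A_SSB = A_C + A_mod * np.exp(1j * (Omega_IF * t + theta_mod))
h = 2 * np.abs(A_SSB) ** 2

carrier_beat = 2 * A_C ** 2                                     # term (b)
data_term = 4 * A_C * A_mod * np.cos(Omega_IF * t + theta_mod)  # term (a)
ssbi = 2 * A_mod ** 2                                           # term (c), SSBI

assert np.allclose(h, carrier_beat + data_term + ssbi)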
S3.1 Derivation of the gain expression at different harmonics in comb systems with SSB-C modulation
From equation (S2), we define the CSPR as

CSPR = A_C² / ⟨A_mod²(t)⟩. (S13)

For N equi-amplitude optical comb lines and an average optical power of one (i.e., |Σ_{n=1}^{N} E_n(t)|² = 1), the power of a single optical carrier follows directly (see Fig. S2 for a schematic depiction of the spectrum under consideration). Following the procedure in reference 1, one can derive the power of the data-carrying term at the first harmonic of the repetition frequency (ω_rep in Fig. S2), and from it the gain over 2-line modulation at this harmonic, which is the same gain expression obtained in reference 1. For generation at higher harmonics, the gain expression (i.e., G_{X,N}) can be calculated analogously from the power of the Xth harmonic.
Figure S2. Spectrum before and after photodetection of SSB-C modulation on a comb with N optical lines.

S3.2 Dispersion in comb-based systems
The two sidebands of a THz DSB-C signal (i.e., the lower and upper sidebands) generated with an N-line comb can be expressed as a summation of beatings (as shown in Fig. 5(a) of the main text). If the frequency of the THz signal is equal to the Xth harmonic of the comb repetition frequency, ω_rep, the electric field of the upper sideband, E_usb, can be written as a sum of beating terms of amplitudes A_n^usb, in which the time-dependent component of the electric field, exp(j(Xω_rep + Ω_IF)t), has been factored out; β_2 is the group-velocity dispersion of SMF. The remaining parameters are defined depending on whether the comb has an even or odd number of lines (i.e., whether N is even or odd). For even combs, n = ±1, ±3, ±5, ...; m = n + 2X; N_0 = -(N - 1); N_f = (N - 1) - 2X; and F = ω_rep/2. For odd combs, n = 0, ±1, ±2, ±3, ...; m = n + X; N_0 = -(N - 1)/2; N_f = (N - 1)/2 - X; and F = ω_rep. For the lower sideband, the electric field, E_lsb, is

E_lsb ∝ Σ_{n=N_0}^{N_f} A_n^lsb exp( j (β_2/2) [ (m² - n²)F² - Ω_IF² - 2nFΩ_IF ] l ) = A_lsb(l) exp(jθ_lsb(l)). (S19)

For SSB demodulation, the downconverted field, E_SSB, is proportional to a single sideband term, whereas, for DSB demodulation, the downconverted field, E_DSB, is

E_DSB exp(jΩ_IF t) ∝ A_usb exp(jθ_usb) + A_lsb exp(-jθ_lsb).
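Under the parameter conventions just listed, the per-term dispersion phase of the lower sideband in (S19) can be evaluated numerically. The sketch below is ours; the comb and fiber values are illustrative assumptions, and the grouping of terms follows our reconstruction of (S19).

import numpy as np

# Sketch: accumulated dispersion phase of each lower-sideband beating term
# for an even comb (equation S19). All parameter values are assumed.
beta2 = -21.7e-27        # s^2/m, typical SMF dispersion (assumed)
l = 10e3                 # fiber length (m), assumed
N, X = 8, 3              # even comb, THz tone at the X-th harmonic
omega_rep = 2 * np.pi * 25e9
Omega_IF = 2 * np.pi * 5e9
F = omega_rep / 2        # even-comb convention

n = np.arange(-(N - 1), (N - 1) - 2 * X + 1, 2)   # n = N0, N0+2, ..., Nf
m = n + 2 * X
theta_lsb = 0.5 * beta2 * ((m**2 - n**2) * F**2
                           - Omega_IF**2 - 2 * n * F * Omega_IF) * l
print(np.round(theta_lsb, 2))  # unequal phases: the beating terms dephase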
1,376.2
2021-12-02T00:00:00.000
[ "Physics" ]
Planning-related land value changes for explaining instruments of compensation and value capture in Switzerland
As a public policy, planning seeks to achieve politically defined policy objectives such as sustainable spatial development. To attain these objectives effectively, it is essential to consider the impact of planning decisions on land values. A comprehensive understanding of the connection between planning and land values is imperative for making well-informed choices regarding the management of land use and spatial development sustainably and responsibly. While instruments of planning law are intensively debated within the planning community, their implicit effects on land values are rarely considered. This study contributes to the field by demonstrating the crucial connection between planning-induced land value changes and value capture instruments in Switzerland. Our analysis shows significant value changes in the planning process. It connects these to redistributive instruments of the Swiss planning regime, which come into play to compensate for disproportionate planning-induced advantages or disadvantages of landowners. Because of the exceptionally significant change in value upon zoning that is present in Switzerland, there are remarkable redistributive instruments, both in terms of value increase (added value capture) and value decrease (compensation). Our study shows that knowledge of planning-related land value changes can help understand redistributive mechanisms, thereby contributing to best-practice debates.

Introduction
Land is a commodity that can be traded between private parties at market conditions (Gerber and Gerber, 2017). Accordingly, land is attributed a price. This value is derived from a combination of factors, ranging from local conditions (e.g., soil quality) to macroeconomic developments (e.g., financial policy, economic development) (Hong and Brubaker, 2006). One essential factor that determines land values is planning (Buitelaar and Sorel, 2010). Concretely, every planning phase, from agricultural land to a plot ready for construction, increases the land value. National planning regimes, by defining planning phases, thus affect when land values rise, by how much, and who profits from these value increases. Understanding the interdependence between planning interventions and land values is a precondition for reaching ecological and social policy goals (Dransfeld and Voß, 1993). Planning law contains two levers that regulate value development: the defined planning phases and the redistribution of planning-related value gains.
1. As a piece of agricultural land runs through the various planning phases, from zoning, via land readjustment and servicing, to the issuing of a building permit, each phase marks a significant land value change.
2. Planning law regulates who reaps these profits by introducing public value-capture instruments. Given that these instruments are adjusted to planning-induced value developments, they contribute to sustainable spatial development.
Cross-country comparisons can show how various planning regimes align these two levers. However, few studies combine an analysis of planning phases with an analysis of instruments of public value capture. Earlier research has compared forms of public value capture and identified essential preconditions for implementing it successfully (Alterman, 2011). Still, such studies neglect the role of planning-induced value increases in explaining which instruments of value capture a government employs.
The impact of planning on land values has been studied extensively from an urban economic perspective. Individual factors and their influence have been examined (Büchler and Ehrlich, 2023), and various models have been applied to land markets (Rodas et al., 2018). The effects of specific regulations on certain subsectors of planning have also been studied, such as housing prices (Huang and Tang, 2012; Ihlanfeldt, 2007; Jalali et al., 2022; Lin and Wachter, 2019). Ahlfeldt and Pietrostefani (2019) synthesise the economic effects of density, including the impact of planning regulations on land values. While that paper does not focus solely on Europe, it offers valuable insights into the relationship between planning and land values in European urban contexts. The impacts have rarely been explicitly examined from a planning law perspective. A notable exception is Jaeger (2006), who applies economic models to Oregon planning law. However, these studies do not include a political science interpretation of planning law regarding the different planning phases and redistributive instruments. Against the backdrop of this gap, we aim to shed light on the interdependence between the planning phases defined in planning law, the resulting land value development, and the instruments applied to deal with such value changes. To this end, we apply a model by Bonczek and Halstenberg (1963), initially developed to describe planning phases and their effects on land values in Germany, to the context of Switzerland. Switzerland is one of the few countries worldwide that apply a direct form of public value capture (Muñoz Gielen and van der Krabben, 2019; OECD, 2022; Scheiwiller and Hengstermann, 2022). Notably, we ask: (1) What value increases are caused by the planning phases defined in Swiss planning law, and (2) how are these value increases treated in the planning law? Applying the model to Switzerland, we find that direct value capture is employed in a planning phase whose resulting value increase is much higher than was foreseen in the German model. This suggests that, amongst the factors observed in earlier studies, planning-induced value increases help explain which forms of value capture are chosen.

Planning-induced land value changes
Planning is one of several factors affecting land values (Büchler and Ehrlich, 2023). Hong and Brubaker (2006) earlier divided these factors into four categories: intrinsic factors (e.g., soil quality), external factors, public investment, and user investment. They pointed out that the central political question is whether a given change in value was caused by the actions or investments of the landowner or is due to developments independent of the landowner. However, Hong and Brubaker do not distinguish between public investments directly linked to the land (e.g., servicing) and public investments near the affected land (e.g., school infrastructure). Moreover, they see regulation merely as a general external factor. Here, however, a more precise distinction is necessary between general abstract regulations (e.g., national public policy) and the concrete regulations related to a specific property, whereby the latter can be differentiated further with respect to the various planning phases. Planning phases and their impact on land values were described extensively by Bonczek and Halstenberg (1963). For the first time, they examined the effects of planning phases on land values (see Fig. 1).
With their model (the 'staircase model'), they illustrated, on the one hand, that the public sector already captured specific value increases during land readjustment (through the reallocation advantage and the transfer of land). On the other hand, they showed that a large part of the value increases remained untouched. They therefore explicitly understood their model in the context of a debate on a fair and feasible regulation for the general capture of planning-related added value, as was the case in England at the time. The law was intended to ensure that landowners are neither disadvantaged nor advantaged by public planning measures. Several authors have revisited this model in recent years and applied it to analyse land value development in several countries and contexts. Davy (2018) applies the model in a 4-step version to German legislation to explain the difference between planning and land policy. Christensen (2014) uses the model with a specific focus on municipal planning in Denmark. Kalbro and Mattsson (2018) use the model both to analyse the institutional regime at the national level in Sweden and as an analytical framework for selected case studies. Finally, the model has been used as a framework for comparative research, such as the five-country comparison by Dransfeld and Voß (1993) and the most recent comparison by Halleux et al. (2022), covering 29 European countries.
Fig. 1. Model of planning phases after Bonczek and Halstenberg (1963).
Dransfeld and Voß (1993) compared five European countries and their land markets. They examined the extent to which the various state regulations influence the respective land market systems so that spatial development takes place in the desired locations, with regard to the ecological and social goals of spatial planning. They therefore considered the planning influence on land values as an implementation mechanism for indirectly achieving public objectives by influencing the behaviour of landowners. Halleux et al. (2022) used the model to specifically analyse the regulations dealing with value capture and compare them between 21 European countries. The work follows a series of studies that represent a renaissance of scholarly interest in public value capture, starting with Alterman (2011), who discusses preconditions for successful value capture. Her analysis of 14 countries unveils two approaches that she calls direct and indirect instruments. This division was taken up and developed further by Muñoz Gielen and van der Krabben (2019), who, in their cross-country comparison, focus on the application of (non-)negotiable developer obligations. The instrument of value-added capture is seen as a redistributive counterpart to the compensation that occurs when development rights are withdrawn (see, with particular reference to the case of Belgium: Lacoere et al., 2023). Alterman (2010) has conducted a comparative study of compensation mechanisms in various countries. As much as they differ in detail, the study reveals that compensation mechanisms are much more common than value-capture mechanisms. However, the findings are not linked to planning phases. Overall, the literature review shows that planning phases, their effects on land values, and the redistribution of value changes have been discussed in their parts but not considered in their entirety. This study addresses these interdependencies within the Swiss planning regime, which represents an interesting case due to its very high land prices and rigorous planning system.
Planning phases in Switzerland
In the subsequent section, we apply Bonczek's model of planning phases to the Swiss context. The planning phases in Switzerland are derived from the Swiss Spatial Planning Act (SPA). Unfortunately, we cannot utilise nationwide land value data due to its restricted accessibility (see Section 5.3). As a result, the price jumps depicted in the graph are indicative and rely on case studies, Swiss planning practitioners' journals, and newspaper articles for reference. The Swiss planning system distinguishes buildable and non-buildable zones (art. 1 SPA). While construction is generally permitted in the buildable zone unless there is an explicit rule to the contrary (negative planning), development is generally not allowed in the non-buildable zone unless there is an explicit exception (positive planning) (Griffel, 2017). This stringent restriction limits growth but does not prohibit further development, as agricultural land can also fall within the buildable zone. In fact, between 2009 and 2018, the settlement area in Switzerland expanded by 6% (FSO, 2022). We distinguish between seven planning phases (see Fig. 2). The phases may contain further sub-steps, which cannot always be precisely demarcated from each other and are therefore not shown by us as separate phases. The model is based on the classical linear land development sequence, from agricultural area to the issuing of a building permit. Possible variants arising from less linear processes in practice or deviating situations (e.g., brownfield development) are not considered.

Agricultural use and expected development area
The first tier comprises land in the non-buildable zone, mostly land used for agricultural production, which is why this land is also referred to as the agricultural zone (art. 14 para. 2 SPA). In addition, this category includes land that is important for the landscape or is ecologically valuable (art. 16 para. 1 SPA) (Ruegg and Letissier, 2015). Land value within the agricultural zone is measured based on agricultural profitability, depending on factors such as soil quality, shape, and location. A unique characteristic is that the agricultural land value in some Alpine regions can even be negative, as cultivation produces more costs than direct profits. In these cases, cultivation only makes sense because of its external effects, e.g., reducing natural hazards caused by landslides, and the public sector then finances the management or ownership of such areas. In addition to the Spatial Planning Act, agricultural land is subject to further legal provisions that influence land value, particularly by eliminating speculation on future developments. The most important legal source is the Peasant's Land Act (BGBB), which contains two relevant regulations (Braun, 1983). First, this act limits speculation on land value by prohibiting land transfers with more than a 5-15% value increase, calculated in relation to the adequate price for agricultural land (art. 66 BGBB). Second, it outlaws the purchase of agricultural land by non-agricultural persons (art. 61 para. 1 & 2 BGBB). These regulations result in a severely restricted market for agricultural land. On the one hand, the circle of potential purchasers is heavily limited. On the other hand, price fixing is subject to state control and relies on agricultural land use. All in all, this means that the land value of agricultural land is entirely determined by the agricultural sector, not by potential future development.
Land speculation of the kind known from other countries regarding future building developments is largely absent. Even if cantonal structure plans designate corresponding land as future development areas (art. 8 para. 1 lit. a SPA), the Peasant's Land Act prevents speculation.

Designated building land
The next formal planning phase is initiated by zoning (art. 15 SPA). According to Swiss law, the basic building right is granted at this stage, even if further steps are necessary until the site is ready for a building permit (art. 22 SPA) (Aemisegger et al., 2016; Griffel, 2017). From then on, the owner has the right to use their land for construction if no public interests are opposed (Aemisegger et al., 2016). Transferring a specific plot of land from the non-buildable to the buildable zone is referred to as zoning and requires a change in the plot's allocation in the zoning plan (art. 15 para. 4 SPA). Since this regulation is binding for everyone and equals a law, this decision must be presented to the electorate for approval. In the Swiss planning system, zoning marks the highest increase in value. The scope of this increase is difficult to estimate because land value data is not publicly available in Switzerland. It can be assumed that including land in the buildable zone increases its value from 5-10 CHF/m² (for agricultural land) to between 300 CHF/m² in less attractive regions and more than 5000 CHF/m² in the most attractive regions (for comparable values, see Müller-Jentsch, 2013, p. 7).

Suitable designated building land
Land readjustment marks the following planning phase (art. 20 SPA). This step is intended to ensure that plots of land are arranged according to their future land use. A building permit can only be issued if each plot of land is serviced (art. 22 para. 2 lit. b SPA). Due to the change from agricultural to residential building land, the requirements on plot layout change, and readjustment takes these new requirements into account. In addition, the plot layouts are optimised with regard to construction or marketing. Cantonal laws regulate the exact procedure for building land readjustment, which differs accordingly. Planning law includes the possibility of land reallocation being ordered ex officio (art. 20 SPA), i.e., against the will of the landowners. This occurs very rarely in Switzerland. More often, developers buy several parcels and do the readjustment in an internal procedure (Shahab and Viallon, 2021).

Serviced building plot
The following planning phase begins with the servicing of a plot of land. Land is considered serviced if there is sufficient transport access for the use in question and the necessary water, energy, and sewage systems have been built (art. 19 para. 1 SPA). Swiss planning law defines servicing as technical infrastructure only (Ruegg, 2022). The municipality must provide the servicing no later than 15 years after zoning (art. 15 para. 4 lit. b SPA). Usually, municipalities issue a servicing programme that provides for staged servicing of all building plots within the zoning plan's 15-year planning horizon. The stages provided in this programme determine the land value within this planning phase. The closer the expected date of full servicing by the municipality, the sooner the land's valorisation and thus the higher the land value, which is represented in our model by the price range within a phase (ascending line).
Serviced building plot (fee settled)
The following planning phase is initiated by paying the servicing charge, called the 'landowner's contribution'. The charges for servicing vary depending on cantonal legislation; they usually amount to up to 50% of the actual costs for ordinary projects (see, e.g., BSG 732.123.44, 2017). In the case of large projects, an infrastructure contract is usually concluded, containing the exact technical details and the cost allocation (Lambelet and Viallon, 2019). The land value depends on whether this service fee has been paid or is still outstanding; paying the charge causes a further increase in land value.

Developable building plot
The building permit initiates the next and final planning phase. Having addressed zoning, land readjustment, and servicing, our analysis of planning phases thus ends with the issuing of a building permit. Swiss planning law specifies that a building permit must be granted if the land is serviced and the building project complies with the legal provisions of its zone (art. 22 para. 2 SPA). No other conditions can be imposed. This means the landowner is entitled to a building permit when these conditions are satisfied. Accordingly, the increase in land value at this stage is comparatively insignificant (Perren, 2004). Usually, developers have three years to complete construction before the permit expires (see, e.g., art. 42 para. 2 Bau/BE).

Redistributive mechanisms in the Swiss planning regime
Changes in land value occur in the transitions between planning phases. Bonczek's planning phase model illustrates these steps, making it possible to identify how value changes are dealt with politically and legally (see Fig. 3). One can consider both value increases (from left to right) and decreases (from right to left). Land value changes are a recurring subject of political and academic debates and of the planning literature. Within that literature, various viewpoints emerge, including advocacy for the complete capture of land value (Bernoulli, 1946), for the capture of planning-related added values, and for compensation for value losses, such as in cases of regulatory takings (Alterman, 2010). Applied to the Swiss planning regime, two aspects are of particular relevance: (a) the most significant value change, caused by changing the land's zoning, and its redistributive instruments, and (b) the differences in value determination for expropriation between agricultural land and zoned land.

Value changes due to zoning
As can be seen from the model, granting (or removing) development rights is associated with the most significant value change. It is initiated by zoning, hence the initial assignment of the land to the buildable zone (from left to right), as well as by de-zoning, hence the downgrading to the non-buildable zone (from right to left). Accordingly, this stage is the most interesting. Switzerland is one of the countries that have enacted planning law rules in both directions here. In Switzerland, this value change is particularly significant for two reasons:
1. The Peasant's Land Act restricts land speculation on agricultural land. This protection no longer applies as soon as the land is zoned. The value increase is particularly significant because the initial values are so low.
2. Due to the regulatory planning system in Switzerland, building rights are generally already granted at the time of zoning. With zoning, land values rise to a point close to the final values. The increase in value is significant because the values after zoning are exceptionally high.
Since zoning causes significant value changes, it is not surprising that the Swiss planning regime has special rules for dealing with these changes. Two cases can be distinguished: regulations on value increases in the case of zoning (from left to right) and regulations on value decreases in the case of de-zoning (from right to left).

Redistributive regulations in the case of zoning
An added-value capture instrument has been part of Swiss planning law since 1979 (Viallon, 2018) and was significantly enhanced in the 2012 Spatial Planning Act reform. Since then, at least 20% of the planning-related value increase must be captured (art. 5 para. 1 SPA) (Hengstermann and Viallon, 2023). Exceptions may only be granted for minimal amounts for which the administrative effort needed would be disproportionate (art. 5 para. 1quinquies SPA) or if public land is affected ('rob Peter to pay Paul') (Viallon, 2018). Planning law does not provide an upper limit, but 60% has become established as the maximum capture rate in Swiss planning practice since it was approved by the Federal Court (Hengstermann and Scheiwiller, 2021). Thus, part of the value increase that is induced by changes in the legal quality of the land (and not, for example, by services provided by the landowner) is returned to the general public. In contrast to other international examples of a betterment tax of this kind (Alterman, 2011; Halleux et al., 2022; Muñoz Gielen and van der Krabben, 2019), Swiss capturing does not serve to finance specific infrastructure projects (Scheiwiller and Hengstermann, 2022). "Such a compensation [= added value capture] corresponds to a postulate of justice and, in particular, equality under the law: the changes in land value caused by public land use planning occur without the owner's involvement in the sense of his contribution or misconduct; this effect, which cannot be attributed to the owner, is to be neutralised to a certain extent." (Riva, 2016, p. 72, authors' translation). Hence, the instrument's political narrative in Swiss politics aims to reduce injustice, namely the unearned increment of the landowner.

Redistributive regulations in the case of de-zoning
If land is deprived of its buildability, this is accompanied by considerable losses in value. This happens in the case of de-zoning or material expropriation ('regulatory takings'). Like most international planning laws (Alterman, 2010), Swiss law provides for compensation in this case. According to art. 26 of the Federal Constitution, property is guaranteed and cannot be restricted unless compensation is granted. Art. 5 para. 3 SPA specifies that this compensation must be full. Accordingly, the loss of value must also be determined in the case of de-zoning. The Swiss system provides for court-hearing-like negotiations led by a voluntary expert commission. However, the commission's task is not to determine a land value as objectively as possible (in the sense of finding the truth) but to negotiate a compromise between the parties' opposing interests (in the sense of an out-of-court agreement). The land value thus results from arbitration and is based exclusively on the compromise reached by the two parties concerned in the individual case.
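To illustrate the orders of magnitude involved in the zoning-stage capture described above, the following is a minimal sketch with hypothetical plot values drawn from the ranges cited earlier; the plot size and per-square-metre values are assumptions of ours.

# Sketch: added-value capture for a hypothetical plot upon zoning.
area_m2 = 1000         # plot size (assumed)
value_before = 8.0     # CHF/m2, within the 5-10 CHF/m2 agricultural range
value_after = 800.0    # CHF/m2, within the 300-5000 CHF/m2 post-zoning range
capture_rate = 0.20    # legal minimum (art. 5 para. 1 SPA); up to 60% in practice

planning_gain = (value_after - value_before) * area_m2
captured = capture_rate * planning_gain
print(planning_gain)   # 792000.0 CHF of planning-induced gain
print(captured)        # 158400.0 CHF returned to the public sector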
Different value determination for expropriation
Applying Bonczek's and Halstenberg's staircase to the Swiss planning regime reveals another peculiarity: the different handling of compensation for agricultural land versus zoned land in the case of expropriation. In principle, all three legal sources, the Federal Constitution (SC), the Spatial Planning Act (SPA), and the Expropriation Act (EA), specify that expropriations must be fully compensated (art. 26 para. 2 SC, art. 5 SPA, art. 19 lit. a EA). However, since 2021, agricultural land has been compensated at three times its market value (art. 19 lit. abis EA), whereas zoned land is to be compensated at its actual market value. This difference stems from a political demand by farmers' lobby organisations to adapt compensation mechanisms to more realistic market conditions. Initially, this entailed a demand for a six-times-greater value (Sibel et al., 2018). This difference in compensation for agricultural land versus zoned land in cases of expropriation may appear as mere favouritism. However, a deeper understanding can be achieved with Bonczek's adapted staircase model. Due to the Peasant's Land Act regulations mentioned earlier, the value increase for expected future development land is left out. Accordingly, the values in this phase are quite low compared to unregulated land markets, where development speculation already occurs on agricultural land. Since expropriation compensation takes the value before a planning measure as its reference point, Swiss farmers receive low absolute compensation values. The triple compensation is therefore comparatively low in absolute terms, as the base value in the staircase model is low.

Discussion
Our results highlight the interdependencies between planning phases, land value changes, and the instruments that redistribute such planning gains and losses. We have illustrated that the planning phases in Switzerland differ from the original model. Of the resulting value increases, the first is more significant than foreseen in the model, while the remaining steps are smaller. Swiss planning instruments deal with the value changes caused by land use decisions.

Significant increase and significant response
The abrupt transition from agricultural to designated building land causes a sudden value increase, which, in Switzerland, is met by far-reaching regulations on how this profit is captured by the public sector or, in the reverse case, how the owner is compensated in the event of a transition back to non-buildable land. A possible explanation lies in the Swiss direct democratic system. Based on a pronounced understanding of justice, this system counteracts excessive preferential treatment of individuals (Hengstermann, 2021). The instrument of value capture then also enjoys the necessary legitimacy (Alterman, 2011) because the voting population has accepted it. It is also possible that the generally high price difference between buildable and non-buildable land in Switzerland legitimises direct forms of value capture (Scheiwiller and Hengstermann, 2022). Similarly, other countries employ public value capture especially in regions with high land prices (Kaufmann and Arnold, 2018; Vejchodská and Hendricks, 2023). One must add that the far-reaching value capture mechanisms are matched by well-developed compensation schemes in reaction to planning losses. In this way, redistributive mechanisms are justified by the principle that property owners should neither benefit excessively nor be disadvantaged by official state decisions. The model shows very clearly that this is particularly relevant for zoning. While in the other stages value increases correspond to actual expenses (e.g., servicing), zoning-induced value changes are based purely on the legal quality of the land.
Therefore, the political desire for equitable compensation would entail a symmetrical redistribution of unearned advantages and undeserved disadvantages. However, the system is asymmetrical: while 100% of planning losses are compensated, only 20-50% of planning gains are captured (Hengstermann and Viallon, 2023).

Swiss planning phases in international comparison
Our findings are particularly interesting when compared to planning regimes in countries like Germany, where Bonczek's model was initially developed. As described above, the absence of a phase of expected buildable land is a notable feature of the Swiss planning regime, distinguishing it from other planning regimes such as the British, Dutch, or Belgian (Lacoere and Leinfelder, 2022). We see a possible explanation for this in the high esteem in which agricultural land is held in Swiss politics and society (Ruegg and Letissier, 2015). In the Swiss logic, the planning system and public intervention in property rights are legitimised by the goal of preventing urban sprawl, ultimately protecting agricultural production areas (Lendi, 2008). This attitude is rooted in the collective experience of the two world wars and is intended to ensure the food supply during war (art. 104a SC; art. 1 para. 2 lit. d & 16 SPA). In this sense, planning was initially subordinated to the Military Department (Lendi, 1996). Compared to other countries with regulatory planning regimes (e.g., Germany), the significant value increase caused by zoning occurs early. Compared to Switzerland, the German land value increases induced by the designation of land as development land in the municipal land use plan (Flächennutzungsplan) and by the issuing of the detailed land-use plan (Bebauungsplan) are less significant (Hendricks et al., 2017). On the other hand, land readjustment has a more significant effect in Germany than in Switzerland. In Germany, land readjustment functions as the primary public value capture mechanism in the form of land shares and readjustment benefits. In Switzerland, by contrast, the most significant gains are already captured during zoning. Compared to countries with a discretionary planning regime (e.g., the United Kingdom), the value increase provoked by issuing the building permit is minimal (Dembski et al., 2021; Muñoz Gielen and Tasan-Kok, 2010; Valtonen et al., 2017b). Despite special land use plans for projects or areas of exceptional importance, the Swiss regime leaves no room for negotiation at this point. Therefore, the land value is hardly affected. In the UK, on the other hand, the right to build is granted only in the context of building permit negotiation (Dembski and O'Brien, 2023). Therefore, the British system has a more extended phase of land speculation and a significant increase as part of the issuing of the building permit (Fowles et al., 2022). Similarly, the Dutch planning regime features extensive developer negotiations preceding the building permit (Hendricks et al., 2021). These negotiations can result in obligations and thus severely impact land values. In addition, land speculation occurs intensively before zoning, driven by private developers and the public sector (van der Krabben, 2021; van Oosten et al., 2018).

Importance of land value data
In general, planners should know land values and the impact of planning on land values, as these play an essential role in the logic of owners and their behaviour.
In concrete terms, however, the model also shows that at various points in the planning process it is necessary to establish land values in a just and court-proof manner. For instance, the administrative decree on the amount to be paid under added-value capture depends on the difference between the land value before and after zoning. Accordingly, it is important to determine both values. Likewise, it is essential to have accurate land values in the context of compensation for de-zoning and expropriation. However, a public land value reference system like the one in Germany (Voß and Bannert, 2018) does not exist in most regions of Switzerland; only 2 of the 26 cantons (Basel-Stadt and Zürich) have such an instrument. The remaining cantons rely on private-sector appraisals, which derive market comparison values through systematic purchase price collections. However, the exact data basis and calculation methods are not published and are subject to corporate secrecy, which is questionable from a rule-of-law perspective. One plausible explanation could be that Switzerland does not have a transparency culture as in Scandinavia (Valtonen et al., 2017a) but has traditionally cultivated a high degree of bank-client confidentiality. Given Switzerland's neutrality and stability, the Swiss land market is one of the premium investment markets in global real estate portfolios (Falkenbach, 2009; Oikarinen and Falkenbach, 2017).

Conclusions
In this study, we have applied Bonczek's and Halstenberg's (1963) planning phase model to Switzerland. In contrast to previous studies, we transferred the model and adapted its phases to Swiss planning law. This revealed differences concerning both the phases and the value increases. Differences in planning phases are reflected in the scope of instruments that capture planning gains and compensate for planning losses. Since agricultural land prices are strictly regulated in Switzerland, transferring a plot into the buildable zone causes a comparatively high value increase. This planning gain is met by remarkably far-reaching value capture mechanisms. Our findings shed light on the interdependence between the planning phases defined in planning law, the resulting land value development, and the instruments applied to deal with such value changes. The planning phase model of Bonczek and Halstenberg has proven viable for illustrating these interdependencies and shows potential for application in further cross-country comparisons. Further studies are needed that employ land value data to support our findings empirically. In addition, future studies could usefully explore mechanisms dealing with value changes induced by up- and re-zoning (such as brownfield development and densification), processes that are gaining relevance in the continued effort to achieve compact urban development. Our study has shown that knowledge of planning-related land value changes can help to understand redistributive mechanisms, thus providing an important contribution to best-practice debates. In general, planning practice and research must increasingly consider land values, because understanding the link between planning and land values is a prerequisite for making informed decisions about using and developing land responsibly and sustainably.
7,319.2
2023-09-01T00:00:00.000
[ "Economics", "Environmental Science", "Geography", "Political Science" ]
A locally deterministic, detector-based model of quantum measurement
This paper describes a simple, causally deterministic model of quantum measurement based on an amplitude threshold detection scheme. Surprisingly, it is found to reproduce many phenomena normally thought to be uniquely quantum in nature. To model an N-dimensional pure state, the model uses N complex random variables given by a scaled version of the wave vector with additive complex noise. Measurements are defined by threshold crossings of the individual components, conditioned on single-component threshold crossings. The resulting detection probabilities match or approximate those predicted by quantum mechanics according to the Born rule. Nevertheless, quantum phenomena such as entanglement, contextuality, and violations of Bell's inequality under local measurements are all shown to be exhibited by the model, thereby demonstrating that such phenomena are not without classical analogs.

Introduction
Quantum mechanics exhibits many peculiar and surprising phenomena. Indeterminism, wave/particle duality, entanglement, contextuality, and nonlocality are just a few of the more salient examples. Past attempts to provide a realistic local and deterministic interpretation of quantum phenomena have been frustrated by the many "no go" theorems ruling out various classes of hidden variable theories [1,2]. Most notable among these are the Kochen-Specker theorem [3], which rules out non-contextual hidden variable models, and Bell's theorem [4], which addresses local hidden variable models. This paper offers a simple mathematical model that exhibits many of these distinctly quantum phenomena, thereby demonstrating that they are not uniquely quantum in nature. Indeed, many apparently quantum phenomena have found analogs in classical mechanics and even social science [5,6]. Although rather simple in its present form, it is hoped that this model may form the basis of a more sophisticated physical theory. Such a model may also help to elucidate whether an epistemic or ontic interpretation of the quantum state is more appropriate [7,8]. The proposed model can be described succinctly as a complex random vector a for which measurements consist of threshold crossings of component magnitudes. The specific form is that of a fixed "classical" signal plus a random "quantum" noise term, defined as follows. Suppose we wish to model a given "design" state vector |ψ⟩ in an N-dimensional Hilbert space. The components in some standard basis are given by α_n := ⟨n|ψ⟩ for n = 1, ..., N. The aforementioned random vector a is then defined to be

a := sα + w,

where α = [α_1, ..., α_N]^T ∈ C^N is the normalized signal and s ≥ 0 is the signal amplitude. Normalization is in the usual sense that ‖α‖² = |α_1|² + ⋯ + |α_N|² = 1. The term w = [w_1, ..., w_N]^T is defined as a complex random noise vector whose joint distribution will be specified later. Either w or a may be construed as the "hidden variable" whose specific realization determines the outcome of a measurement.
The physical motivation for this model stems from early work in stochastic electrodynamics (SED) [9]. In SED, quantum phenomena are hypothesized to arise from classical interactions of matter with a real, albeit stochastic, background electromagnetic field corresponding to the (virtual) vacuum field of quantum electrodynamics. Stochastic optics (SO), a natural extension of SED, attempts to use this same hypothesis to explain phenomena in quantum optics [10]. In the view of SO, the underlying reality corresponding to a quantum state |ψ⟩ is a real (as opposed to virtual) electromagnetic field (e.g., a plane wave of a particular mode), what one might call the "signal" that the experimenter prepares, plus a stochastic background component corresponding to what one might call the "noise" of the vacuum field. From this viewpoint, one may interpret a as giving the amplitudes and phases of the N modes of a classical electromagnetic field. While providing an intuitively appealing and qualitatively accurate description of many quantum optical phenomena, SO has also made predictions at variance with experiment [11,12]. More relevant to this paper, however, is its lack of deterministic outcomes. In SO, detector probabilities are determined semiclassically as a function of mode intensity. Given a particular realization, only the probability of a detection is specified, not the outcome. In the model proposed here, we recover determinism by defining detections as amplitude threshold crossings. Thus, given a threshold γ ≥ 0, we shall say that a measurement in the standard basis results in outcome n if and only if |a_n| > γ and |a_{n′}| ≤ γ for all n′ ≠ n. Instances of a for which there are no threshold crossings are rejected as "non-detections." Likewise, instances of more than one threshold crossing are rejected through post-selection. The restriction to single-detection events may seem artificial, but it corresponds to what is commonly done in the laboratory. In quantum optics, for example, single photon production rates can be quite small [13]. Multiple detections are even rarer, and dark counts may occur even when there is no signal (corresponding to s = 0 in the present model). In such experiments, one often works in the so-called "coincidence basis," in which a detected and heralding photon are coincidently observed. This corresponds operationally to the aforementioned post-selection procedure. As will be shown later, this conditioning is key to reproducing many important quantum phenomena [14]. A final point to be defined in the model regards unitary transformations and measurements in other bases. In the proposed model, a unitary operator U which transforms |ψ⟩ to U|ψ⟩ similarly transforms a to Ua for a given realization of a. Thus, if we measure an observable for which U is a diagonalizing unitary matrix of eigenvectors, then U†a is used in place of a to determine the measurement outcome. Since one may, in this manner, unambiguously determine the outcome that would have been obtained had a different observable been chosen, it follows that the model is not only deterministic but counterfactually definite [15]. As we shall see, this property plays a key role in understanding contextuality and quantum nonlocality.
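As a concrete illustration of this measurement rule, here is a minimal Monte Carlo sketch of ours; the noise model w = σz/‖z‖ and the values γ = σ = 1, s = (√2 − 1)σ follow the examples used later in the paper, and the design state chosen here is an illustrative assumption.

import numpy as np

# Sketch: threshold detection with post-selection on single detections.
rng = np.random.default_rng(0)
alpha = np.array([1.0, 1.0]) / np.sqrt(2)    # design state (equal magnitudes)
sigma = gamma = 1.0
s = np.sqrt(2) - 1

M = 200_000
z = rng.normal(size=(M, 2)) + 1j * rng.normal(size=(M, 2))
w = sigma * z / np.linalg.norm(z, axis=1, keepdims=True)  # bounded noise
a = s * alpha + w

hits = np.abs(a) > gamma                     # threshold crossings
single = hits.sum(axis=1) == 1               # post-select single detections
outcome = hits[single].argmax(axis=1)
for n in range(2):
    print(n, np.mean(outcome == n), abs(alpha[n])**2)  # estimate vs Born rule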
Interestingly, a similar threshold-based quantum measurement scheme has recently been developed by Khrennikov [16], who uses a model of a complex, vector-valued stochastic process {φ(t) : t ∈ R}. This process is assumed to be zero-mean with a covariance matrix B. Detections are made when the amplitude of the process, suitably time averaged, rises above some threshold E_d. The average time between such threshold crossings (or "clicks") for component i is found to be τ̄_i = B_ii/E_d. Thus, the fraction of all clicks that are from component i is B_ii/Tr(B), which is interpreted as a probability. This choice of normalization is equivalent to conditioning on single-detection events. Incorporating a background field and properly calibrating the threshold even allows violations of Bell's inequality [17]. Since classical fields are used, the model is manifestly local. It is, however, nonobjective: the values of observables cannot be assigned in advance.

The outline of the paper is as follows. A mathematical description of the detection probabilities, with some simple examples, is given in Sect. 2. This description is extended in Sect. 3 to the case of probabilities conditioned on single-detection events, wherein the Born rule is recovered under certain limiting conditions. The measurement model description is completed in Sect. 4 with a discussion of unitary transformations. A key result there is that, for certain choices of noise models, the Born rule is preserved under unitary transformations. Using this result, it is shown that certain quantum states can be deduced empirically using quantum state tomography. Quantum contextuality is studied in Sect. 5 using the example of the Mermin-Peres magic square. There, it is shown that, by conditioning on single-detection events, one is able to reproduce all quantum phenomena yet remain deterministic. The question of entanglement is taken up in Sect. 6. It is shown that the proposed model is empirically equivalent to a Bell state, as may be inferred through quantum state tomography. Furthermore, it is shown that, by conditioning on single-detection events, this model is capable of producing violations of Bell's inequality in both simultaneous and space-like separated measurements.

Detection Probabilities
Given a vector α corresponding to a design state |ψ⟩, let P_n(α, γ) denote the probability (given by the distribution of w, the signal amplitude s, and the threshold γ) that a single threshold crossing of component n occurs. Similarly, let P_0(α, γ) denote the probability that no threshold crossing occurs. Specifically,

P_n(α, γ) = Pr[ |a_n| > γ and |a_{n′}| ≤ γ for all n′ ≠ n ]

and

P_0(α, γ) = Pr[ |a_n| ≤ γ for all n ].

The probability of obtaining more than one detection will be denoted P_∞(α, γ); thus,

P_∞(α, γ) = 1 − P_0(α, γ) − Σ_{n=1}^{N} P_n(α, γ).

The condition of having at most one detection, i.e., P_∞(α, γ) = 0, can be achieved asymptotically by choosing a sufficiently large threshold, since the probability of multiple detections will tend to zero in such a limit. It is possible, however, and may be of some utility, to achieve this condition explicitly. This may be done quite easily by normalizing the noise vector to some fixed value. Specifically, we have the following theorem.

Theorem 1 Suppose the noise is bounded (‖w‖ ≤ σ almost surely for some σ ≥ 0), the detection threshold is sufficiently high (γ ≥ σ), and the signal strength is sufficiently low (s ≤ (√2 − 1)σ). Then a detection occurs for at most one value of n.

Proof Suppose |a_n| > γ and |a_{n′}| > γ for some n ≠ n′. Then ‖a‖² ≥ |a_n|² + |a_{n′}|² > 2γ² ≥ 2σ², so ‖a‖ > √2 σ. But, by the triangle inequality, ‖a‖ ≤ s‖α‖ + ‖w‖ ≤ s + σ ≤ √2 σ. We thus arrive at a contradiction and conclude that two or more detections cannot occur.

Although this is a two-dimensional model, one may consider it to be four-dimensional by taking [a_1, a_2, a_3, a_4]^T = [a_1, s + σe^{iθ}, s − σe^{iθ}, a_4]^T/√2 with |a_1| = |a_4| ≤ σ. This may be construed as an entangled state of, say, a single photon with the vacuum state. Thus, even in this very simple example, we find evidence of quantum entanglement in what may be considered a classical model. Since cos θ and sin θ have the same distribution, P_1(α, γ) = P_2(α, γ) > 0.
Conditioned on there being a detection, then, it is equally likely to be n = 1 or n = 2. Furthermore, since P_∞(α, γ) = 0 by Theorem 1, simultaneous detections cannot occur, although the possibility remains of there being no detections at all. Although it is not obvious, the stochastic models of Eqs. (8) and (9) are related by a Hadamard transform as well. Furthermore, it can be shown that w has the same distribution as σz/‖z‖, where z is a standard complex Gaussian random vector (i.e., with mean vector E[z] = 0 and covariance matrix E[zz†] = I, where I is the identity and E[·] denotes an expectation). Indeed, from this very fact one can show that the two equations are so related. This example was first introduced by Marshall and Santos [19] in the context of SO to explain certain quantum optics effects, such as the wave/particle duality of light, photon antibunching, and experimental tests of Bell's inequality. In their interpretation, a is the transverse electric field of a classical plane wave and w represents the component of that field due to the zero-point vacuum. The present model differs from that of Ref. [19] in that they assumed a detection probability of the form P_n(α, γ) ∝ max(0, 2E[|a_n|²] − γ). Here, we assume only that the detection probabilities are determined by the frequency of threshold-crossing events.

Conditional Detections
Theorem 1 shows that, under suitable conditions, P_∞(α, γ) = 0 for all α. A similar result may effectively be obtained by simply increasing the threshold. This has the reciprocal effect of reducing the number of single detections, of course, but we may then condition (or post-select) on just these events. Let us, then, define the conditional probability p_n(α, γ) that a single detection of n occurs as follows:

p_n(α, γ) := P_n(α, γ) / [P_1(α, γ) + ⋯ + P_N(α, γ)].

A key result is the following theorem. The class of possible distributions for w satisfying Theorem 2 is quite general. In particular, it includes the case of independent and identically distributed Gaussian noise (i.e., w = σz). A proof is given in the appendix. The theorem may be modified to apply to cases in which the noise is bounded. In particular, we have the following.

Theorem 3 If α is such that all nonzero components have an equal magnitude of 1/√K for some K > 0, then, provided γ² < s²/K + σ², we have p_n(α, γ) = |α_n|² for all n = 1, ..., N (i.e., the Born rule holds exactly), provided that the statistical distribution of w has the following properties:

Proof The proof proceeds initially as for Theorem 2, with P_k(α, γ) > 0 and P_m(α, γ) = 0, provided σ ≤ γ and γ² < s²/K + σ². It is then clear that p_k(α, γ) = 1/K, while p_m(α, γ) = 0.

Property 3 of Theorem 3 is satisfied by any w such that ‖w‖ ≤ σ. Properties 1 and 2 would be satisfied, for example, by w = σz/‖z‖ with γ = σ and s = (√2 − 1)σ. Although limited in scope with respect to the applicable values of α, Theorems 2 and 3 cover a broad range of interesting quantum states, including the standard basis states and several maximally entangled states, such as the Bell states, to be discussed later, and Greenberger-Horne-Zeilinger (GHZ) states [20]. Furthermore, even under conditions that do not satisfy the theorem assumptions, approximate quantitative agreement with the Born rule is nevertheless achieved. In the following sections, it will be shown that this allows us to reproduce many interesting phenomena that are otherwise thought to have no classical interpretation.
Unitary Transformations
A quantum state |ψ⟩ is transformed to the state U|ψ⟩ via a unitary operator U representing the dynamics of the system, say, or an act of measurement in a particular basis. Representing the state by the complex amplitude vector a, we may perform a similar transform to the vector Ua. The question at hand now is whether Ua is a faithful statistical representation of U|ψ⟩. To begin to answer that question, we consider the following.

Lemma 1 If w = σz is a complex Gaussian random vector with mean 0 and covariance σ²I and U is a unitary matrix, then Uw has the same distribution as w.

Proof Since Uw is a linear transformation of w, it is also a complex Gaussian random vector, defined uniquely by its mean and covariance. By linearity, the mean is E[Uw] = U E[w] = 0, and the covariance is E[Uw(Uw)†] = U(σ²I)U† = σ²I.

Corollary 1 If w = σz/‖z‖ and U is a unitary matrix, then Uw has the same distribution as w.

Proof Note that Uw = σUz/‖z‖ = σUz/‖Uz‖. Since Uz has the same distribution as z, the same is also true of Uw and w.

We are now ready to introduce the main result of this section, regarding the relationship between Ua and U|ψ⟩.

Theorem 4 Let U be any unitary matrix. If a = sα + w and either w = σz or w = σz/‖z‖, then the detection probabilities for Ua are given by P_n(Uα, γ) for n ∈ {0, 1, ..., N, ∞}.

Proof The result follows directly from Lemma 1, Corollary 1, and the linearity of U.

An important consequence of Theorem 4 is that, if the Born rule holds for all α in the standard basis, then it holds for measurements in any basis, since they are related solely by a unitary transformation. In a more restrictive sense, if the Born rule holds for only a subset of all possible states and the unitary transform used to produce that particular measurement keeps the state within that subset, then the Born rule still applies for the new measurement basis. To perform a measurement of an observable represented by a matrix A, we identify an associated unitary matrix U such that U†AU = Λ = diag(λ_1, ..., λ_N) is diagonal. Let A : C^N → R be an associated random variable (i.e., measurable function) on the Borel subsets of C^N such that, given a complex amplitude vector a, the outcome of the measurement is A(a), which we define as follows [21]. Given a ∈ C^N, if |(U†a)_n| > γ and |(U†a)_{n′}| ≤ γ for all n′ ≠ n, then we say that A(a) = λ_n; otherwise, A(a) is left undefined.

Example of Measurements in Orthogonal Bases
Using the example of Eq. (8), let us consider measurements of the Pauli spin operators I, X, Y, and Z, where

X = [[0, 1], [1, 0]], Y = [[0, −i], [i, 0]], Z = [[1, 0], [0, −1]].

A corresponding set of unitary matrices for diagonalizing X, Y, and Z are H, V, and I, respectively, where

H = (1/√2)[[1, 1], [1, −1]]

is the Hadamard matrix, representing the action of a beamsplitter. The matrix V may be interpreted similarly, albeit with a different phase convention (e.g., V = (1/√2)[[1, 1], [i, −i]]). For definiteness, suppose γ = σ = 1, s = √2 − 1, and w is drawn from z/‖z‖. Specifically, suppose w = [0.2197 − 0.7169i, −0.5290 + 0.3974i]^T is a particular realization. This choice of values allows us to use Theorem 1, so we are guaranteed that at most one threshold-crossing event occurs. To measure Z, say, we (trivially) apply I to a and examine the component magnitudes. In this case, |a_1| = 0.9570 and |a_2| = 0.6616, so, as it turns out, there is no detection and, hence, no measurement outcome. In other words, Z(a) is, in this case, undefined. Now suppose instead that we have w = [0.5186 + 0.3818i, −0.6876 + 0.3354i]^T. In this case, |a_1| = 1.0079 and |a_2| = 0.7650, so there is a single detection indicating that Z(a) = +1. (Indeed, this is the only possible outcome, given that there is a detection.) If a measurement of X had been performed, we would instead have applied H† to a and examined the magnitudes of the transformed components. In each case the measurement outcome is uniquely and counterfactually determined by a.
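The worked example above is easy to reproduce numerically. In the sketch below (ours, not the authors'), the design state is taken to be α = [1, 0]^T, an assumption inferred from the reported magnitudes rather than stated explicitly in the text.

import numpy as np

# Sketch: counterfactual outcomes for the two noise realizations above.
gamma = 1.0
s = np.sqrt(2) - 1
alpha = np.array([1.0, 0.0])                  # assumed design state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # diagonalizes X

def measure(U, a, eigvals=(+1, -1)):
    # Outcome of the observable diagonalized by U, or None if no single hit.
    mags = np.abs(U.conj().T @ a)
    hits = mags > gamma
    return eigvals[hits.argmax()] if hits.sum() == 1 else None

for w in ([0.2197 - 0.7169j, -0.5290 + 0.3974j],
          [0.5186 + 0.3818j, -0.6876 + 0.3354j]):
    a = s * alpha + np.array(w)
    print(np.abs(a).round(4), "Z:", measure(np.eye(2), a), "X:", measure(H, a))
# First run: magnitudes [0.957, 0.6616] and no Z detection, as in the text;
# second run: [1.0079, 0.765], giving Z(a) = +1, while the same a fixes X(a).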
In this case, |a_1| = 1.0079 and |a_2| = 0.7650, so there is a single detection indicating that Z(a) = +1. (Indeed, this is the only possible outcome, given that there is a detection.) If a measurement of X had been performed, we would have applied the matrix H to a and examined the component magnitudes of H†a in the same manner. In each case the measurement outcome is uniquely and counterfactually determined by a.

Quantum State Inference

Now consider computing the expectation values of X, Y, and Z, conditioned on a single detection, when α is one of the eigenvectors of these three operators. Note that the application of the corresponding unitary transformations H, V†, and I will transform α into a vector such that, again, each component is either zero or of the same magnitude. Provided that w = σz/‖z‖ and σ ≤ γ < s²/2 + σ², the results of Theorem 3 will then hold and the conditional probabilities will match those of the Born rule. Consequently, the observed expectation values will agree with the corresponding quantum mechanical predictions.

Now, any two-dimensional operator can be written as a linear combination of I, X, Y, and Z. In particular, the quantum state operator ρ may be written as

ρ = ½ [Tr(ρ)I + Tr(ρX)X + Tr(ρY)Y + Tr(ρZ)Z].

Any quantum state, pure or mixed, may be written in this manner. For a pure state ρ = |ψ⟩⟨ψ|, so Tr(ρA) = ⟨ψ|A|ψ⟩. Statistically, this corresponds to the expectation value E[A]. Let us therefore define the inferred quantum state operator

ρ̂ := ½ (I + E[X] X + E[Y] Y + E[Z] Z).

By using sample means to estimate the expectation values, the above expression allows a method for empirically deducing the quantum state from measurement, a process known as quantum state tomography (QST) [22,23]. As might be inferred from QST, then, the classical random vector a is statistically equivalent to the quantum state |ψ⟩, since ρ̂ = ρ for these three choices of α.

QST is not the whole story, though. It has been noted that QST is inadequate to uniquely identify the underlying quantum state [24], and this example illustrates that fact. Consider a measurement of the operators B± = ∓(X ± Z)/√2, which are diagonalized by a corresponding pair of unitary matrices, W₊ and W₋, used again in Sect. 6.2. In this case, we find that for α = [1, 0]^T, say, the conditional detection statistics deviate from the Born-rule values. Thus, even though the inferred quantum state is correct, the model does not predict exactly the right statistics for all observables. This example underscores the difficulty in verifying, empirically, that a given quantum state has, indeed, been correctly prepared.

Projective Subspace Measurements

If the observable to be measured is not resolvable into a nondegenerate eigenvector basis, then we must define its measurement more generally as a set of projections onto two or more subspaces within the larger Hilbert space. Let Π_1, …, Π_M be such a set of projections, where M ≤ N and Π_1 + ⋯ + Π_M = I is the identity. To perform a measurement, we project the vector a, representing a particular realization, onto this set. A detection of projection m is said to occur if ‖Π_m a‖ > γ while ‖Π_n a‖ ≤ γ for all n ≠ m. If Π_m = |m⟩⟨m|, this reduces to the previous definition of measurement.

Projective subspace measurements may be used to describe spatially separated measurements. Given Π_1, Π_2, and a as defined above, let Π_1 a be the portion associated with particle 1 and Π_2 a that of particle 2. We may measure an observable X, say, on particle 1 by applying H to the projected state Π_1 a and observing if one of the two resulting component amplitudes exceeds the detection threshold. A similar procedure may be applied to particle 2, independent of particle 1. This approach will later be used in Sect. 6.2 to provide a classical analog to experimental tests of quantum nonlocality.
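The worked Z- and X-measurement example above can be reproduced directly. The sketch below is a Python transcription under the same parameter choices; the helper `measure` is our own construction, not taken from the text.

```python
import numpy as np

sigma, gamma = 1.0, 1.0
s = np.sqrt(2) - 1
alpha = np.array([1.0, 0.0])                 # design state |up>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2) # Hadamard; diagonalizes X

def measure(a, U, eigvals, gamma):
    """Rotate a into the measurement eigenbasis and return the eigenvalue
    of the single threshold-crossing component, or None if undefined."""
    ap = U.conj().T @ a
    hits = np.flatnonzero(np.abs(ap) > gamma)
    return eigvals[hits[0]] if hits.size == 1 else None

eig = np.array([1, -1])                      # +1 assigned to component 1

# First realization from the text: no crossing, so Z(a) is undefined.
w1 = np.array([0.2197 - 0.7169j, -0.5290 + 0.3974j])
print(measure(s * alpha + w1, np.eye(2), eig, gamma))   # None

# Second realization: |a_1| = 1.0079 > gamma, so Z(a) = +1.
w2 = np.array([0.5186 + 0.3818j, -0.6876 + 0.3354j])
print(measure(s * alpha + w2, np.eye(2), eig, gamma))   # 1
# The X measurement applies H instead; it yields -1 for this realization.
print(measure(s * alpha + w2, H, eig, gamma))           # -1
```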
Quantum Contextuality

Quantum contextuality refers to the apparent dependence of measurement outcomes on what other, compatible, measurements one happens to choose to perform. It is closely connected to quantum nonlocality [25] and is believed to be important in the efficacy of certain quantum computing algorithms [26]. Recently, it has also been the subject of several experimental tests [27-29]. The concept is perhaps best understood in terms of the following example. Suppose we have a set of nine operators, arranged in a square as follows:

X⊗I   I⊗X   X⊗X
I⊗Y   Y⊗I   Y⊗Y
X⊗Y   Y⊗X   Z⊗Z

These operators constitute the famous Mermin-Peres "magic square" [30]. Using the fact that the Pauli matrices are involutions and that XY = iZ, it is readily verified that the product of the three operators in each row, as well as that in each of the first two columns, is I⊗I. For the third column, however, we note that

(X⊗X)(Y⊗Y)(Z⊗Z) = (XYZ)⊗(XYZ) = (iI)⊗(iI) = −I⊗I.

According to the Kochen-Specker theorem, it is impossible to replace each of the nine observables with a definite value of either +1 or −1 (their two eigenvalues) in a consistent manner such that these same product relations hold. (This is readily proven or can be verified directly by simply trying all 2⁹ possible assignments, as sketched below.) From the perspective of quantum mechanics, this may seem odd: since the three operators in each row and column are mutually commuting, they may be measured simultaneously. From the aforementioned product relations, the product of outcomes for each row measurement, and for the first two column measurements, will always be +1, while that of the third column will always be −1. So, measurement reveals definite values that are consistent with the product relations, but no consistent assignment can be made across the square. Furthermore, this result holds independently of the prepared quantum state, or even of whether it is pure or mixed.

As pointed out in [31], the resolution of this paradox lies in the fact that, according to quantum mechanics, different probability measures apply to the six different choices of measurement bases. From a deterministic or hidden-variable perspective, one interpretation of this fact would be that the physical process of measurement induces a dynamical change in the hidden-variable state such that the resulting post-measurement distribution is changed. An alternate, and perhaps simpler, interpretation is possible if one considers measurement to be the threshold detection process considered here. In that case, the six post-measurement probability distributions are just the conditional probabilities, given that a single detection (for each observable) has occurred. It remains, then, to verify that this scheme does, in fact, work. To that end, it may be insightful to illustrate these properties explicitly via numerical simulations.
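Before setting up those simulations, the Kochen-Specker obstruction invoked above can be confirmed directly. The following Python sketch enumerates all 2⁹ = 512 candidate ±1 assignments and counts those satisfying the six product constraints; none do (a parity argument shows why: the product of all nine values equals +1 when computed by rows but −1 when computed by columns).

```python
import itertools

# Search all 2**9 = 512 sign assignments to the magic-square entries,
# requiring product +1 along every row and the first two columns,
# and product -1 along the third column.
solutions = 0
for signs in itertools.product([1, -1], repeat=9):
    g = [signs[0:3], signs[3:6], signs[6:9]]
    rows_ok = all(r[0] * r[1] * r[2] == 1 for r in g)
    cols = [g[0][j] * g[1][j] * g[2][j] for j in range(3)]
    if rows_ok and cols[0] == 1 and cols[1] == 1 and cols[2] == -1:
        solutions += 1
print(solutions)   # 0: no consistent value assignment exists
```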
Monte Carlo Verification

A Monte Carlo scheme was devised to verify numerically that the detector model satisfies the properties of the magic square. For each Monte Carlo run, a random quantum state of the form α = z/‖z‖, with N = 4, was drawn. Next, a random realization of M = 2²⁰ = 1,048,576 independent noise vectors of the form w = σz/‖z‖, with σ = 1, was drawn. From this, a set of M complex amplitude vectors of the form a = sα + w, with s = (√2 − 1)σ, was created. For each of the M realizations of a, six measurements were performed, corresponding to the three rows and three columns of the magic square. For each of the six measurements, a common unitary matrix U was constructed that diagonalizes all three observables.

The quantity a' = U†a was then computed. For example, in the case of Row 1, it suffices to use U_R1 = H⊗H, while for Row 2, U_R2 = V⊗V diagonalizes the observables. For Columns 1 and 2 we may use U_C1 = H⊗V and U_C2 = V⊗H, respectively. In the case of Column 3, where the three observables are X⊗X, Y⊗Y, and Z⊗Z, the construction of U_C3 is not as straightforward. Each is diagonalized by U = H⊗H, V⊗V, and I⊗I, respectively; however, none of these will diagonalize all three. Rearranging the columns of U = H⊗H to produce U' = [U(:,4) U(:,1) U(:,2) U(:,3)], where U(:,n) is the n-th column, produces an alternate unitary matrix which also diagonalizes X⊗X. Finally, applying a second transformation yields U_C3 = U'(I⊗H), which diagonalizes with respect to Y⊗Y and Z⊗Z as well. A similar procedure is required for Row 3. In summary, six unitary matrices, U_R1, U_R2, U_R3 and U_C1, U_C2, U_C3, constructed in this way were used for the rows and columns.

If exactly one component of a' fell above the threshold γ = σ, then a detection was deemed to have occurred and the observable was assigned the value of the corresponding eigenvalue (either +1 or −1). If there was no detection, then no measurement was reported. By Theorem 3, we do not expect more than one component of a' to fall above the threshold. Thus, for each of the six sets of observables, there were K ≤ M reported triple values [g_1(m_k), g_2(m_k), g_3(m_k)], for k = 1, …, K, and an index (of length K) of which of the M realizations resulted in a detection. These indices will be denoted R_1, R_2, R_3 for the three rows and C_1, C_2, C_3 for the three columns.

According to the Kochen-Specker theorem, we expect the reported triples to respect the operator product relations. Specifically, we expect that

g_1(m) g_2(m) g_3(m) = −1 for all m ∈ C_3

(i.e., for all detections when Column 3 is measured), while

g_1(m) g_2(m) g_3(m) = +1 for all m ∈ R_1.

We expect the latter result to hold for R_2, R_3, C_1, and C_2 as well. A total of 2¹⁶ = 65,536 Monte Carlo runs were performed, each with a different random quantum state, and the above properties were examined. In all cases, the required conditions were satisfied exactly.

Understanding the Magic Square

Let us consider these results more deeply. Each of the six unitary transformations diagonalizes the three observables in the corresponding row or column; for Row 3 and for Column 3, in particular, the three diagonalized observables share a common eigenbasis but carry different patterns of eigenvalues. Suppose we measure Row 3 and obtain an outcome of +1 for Z⊗Z. Since measurement outcomes are based on amplitude threshold crossings, and we have conditioned on single-detection events, we know that, for a' = U†_R3 a, either |a'_1| > γ or |a'_3| > γ. The outcomes for the other two observables, X⊗Y and Y⊗X, could then be either +1, +1 (if |a'_1| > γ) or −1, −1 (if |a'_3| > γ), respectively. The outcome of, say, X⊗Y therefore uniquely determines the outcome of the other, and the product of all three is thereby guaranteed to be +1. Had we instead measured −1 for Z⊗Z, a similar outcome would have been obtained.

Now suppose, using the same value of a, we had measured Column 3 instead. It is possible that there are no detections, but we may suppose that there is one. Let us suppose further that the outcome of measuring Z⊗Z is +1 as well. This means that, for a' = U†_C3 a, either |a'_1| > γ or |a'_3| > γ. For the other two observables, X⊗X and Y⊗Y, the possible outcomes are +1, −1 (if |a'_1| > γ) and −1, +1 (if |a'_3| > γ), respectively. Again, the outcome of one observable, either X⊗X or Y⊗Y, uniquely determines the outcome of the other, and, in either case, the product of all three is −1. Measuring −1 for Z⊗Z would, of course, have led to a similar outcome.
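The six context products can also be checked directly from the operators, without reproducing the explicit unitaries used in the simulation. The sketch below (our own construction, assuming NumPy) exploits the fact that the three observables in each row or column commute: a generic real combination of them is Hermitian with a nondegenerate spectrum, so its eigenbasis simultaneously diagonalizes all three.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kron = np.kron

# The Mermin-Peres magic square in the arrangement used above.
square = [
    [kron(X, I2), kron(I2, X), kron(X, X)],
    [kron(I2, Y), kron(Y, I2), kron(Y, Y)],
    [kron(X, Y),  kron(Y, X),  kron(Z, Z)],
]
contexts = square + [list(col) for col in zip(*square)]  # 3 rows + 3 cols

for k, obs in enumerate(contexts):
    # The three observables commute, so this combination has a
    # nondegenerate common eigenbasis for every row and column here.
    combo = 1.0 * obs[0] + 2.0 * obs[1] + 4.0 * obs[2]
    _, U = np.linalg.eigh(combo)
    eigs = [np.real(np.diag(U.conj().T @ A @ U)).round() for A in obs]
    prods = eigs[0] * eigs[1] * eigs[2]
    print("row" if k < 3 else "col", k % 3 + 1, prods)
# Expected: every simultaneous eigenvector gives product +1 for the three
# rows and the first two columns, and product -1 for the third column.
```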
Entanglement

A quantum state |ψ⟩ ∈ H = H_1 ⊗ H_2, where H_1 and H_2 are Hilbert spaces and H is their tensor product space, is said to be entangled (with respect to H_1 and H_2) if there do not exist substates |ψ_1⟩ ∈ H_1 and |ψ_2⟩ ∈ H_2 such that |ψ⟩ = |ψ_1⟩ ⊗ |ψ_2⟩. A state which is not entangled is said to be separable. This is a mathematical definition concerning properties of vector spaces which, taken at face value, suggests that there are plenty of classical systems that are entangled. For example, the acoustic field of a general, coupled-mode solution to sound propagation in the ocean is, in this sense, an entangled state (though perhaps nonseparable would be a more accurate description). The term classical entanglement has been suggested to describe certain (classical) coherent optical states that exhibit properties similar to those of entangled quantum systems [32-34], though the lack of actual statistical outcomes makes this association somewhat dubious. Classical entanglement, as it is construed in these works, refers only to nonseparability. In the context of quantum mechanics, and in keeping with its original use by E. Schrödinger, entanglement connotes statistical correlations [35].

In what follows, we will consider statistical manifestations of entanglement arising from the proposed detector-based model. Given an entangled (i.e., non-separable) design state |ψ⟩ and corresponding random vector a, we would like to know whether the latter is entangled, and in what sense. Rather than appeal to vector space properties, we look instead to an equivalence of statistical predictions. In particular, we will examine the statistics of "detector clicks" for various observables.

A bit of notation may help. Let |↑⟩ = [1, 0]^T and |↓⟩ = [0, 1]^T denote the eigenvectors of Z, with eigenvalues +1 and −1, respectively. These may be interpreted as orthogonal polarizations of a single photon. The states |↑⟩ and |↓⟩ may also be viewed as the qubit states |0⟩ and |1⟩, respectively, in the computational basis [36]. This four-dimensional (i.e., two-qubit) space is the one we will investigate.

Bell States

The Bell states are a set of four maximally entangled states that also form an orthonormal basis for our four-dimensional Hilbert space, H. They are the four combinations (|↑↑⟩ ± |↓↓⟩)/√2 and (|↑↓⟩ ± |↓↑⟩)/√2, denoted here |φ_1⟩, …, |φ_4⟩. Suppose |ψ⟩ = |φ_2⟩ = (|↑↓⟩ + |↓↑⟩)/√2, so the component vector in the standard basis is α = [0, 1, 1, 0]^T/√2. Now consider the 16 Hermitian matrices composed of pair-wise tensor products of the four Pauli matrices I, X, Y, and Z; i.e., I⊗I, I⊗X, …, Z⊗Z. It is readily verified that the Hilbert-Schmidt inner product between any two different pairs is zero, as this property holds for the Pauli matrices themselves, while the inner product of each pair with itself is 4. Hence these operators, when suitably normalized, form a tomographically complete quorum set, which may be used to deduce the quantum state from their expectation values using QST. Going further, one finds that the application of each member of the quorum on α results in a vector that, like α, has components that are either zero or of equal magnitude. With suitable choices of s, γ, and w, then, Theorem 1 or 3 holds, depending upon the choice of distribution for w, and the distribution of detected outcomes follows the Born rule. Consequently, the expectation values of the random variables corresponding to each observable match the quantum predictions exactly (in the case of Theorem 3) or asymptotically (in the case of Theorem 1). Tomographically, then, the random vector a = sα + w is equivalent to the entangled state |ψ⟩. It is straightforward to show that a similar result holds for the other three Bell states.
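The following Python sketch (assuming NumPy) illustrates this claim for three members of the quorum, estimating the conditional expectation values of Z⊗Z, X⊗X, and Y⊗Y for α = [0, 1, 1, 0]^T/√2. The matrix V below is a standard diagonalizer of Y; the text's V may differ by phase conventions without affecting the outcome statistics.

```python
import numpy as np

rng = np.random.default_rng(2)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)    # diagonalizes X
V = np.array([[1, 1], [1j, -1j]], dtype=complex) / np.sqrt(2)  # diagonalizes Y
I2 = np.eye(2, dtype=complex)

alpha = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)     # |phi_2>
sigma, gamma = 1.0, 1.0
s = (np.sqrt(2) - 1) * sigma
M = 200_000
z = rng.standard_normal((M, 4)) + 1j * rng.standard_normal((M, 4))
a = s * alpha + sigma * z / np.linalg.norm(z, axis=1, keepdims=True)

eig = np.array([1, -1, -1, 1.0])   # eigenvalues of P (x) P in each eigenbasis
for name, U in [("ZZ", np.kron(I2, I2)), ("XX", np.kron(H, H)),
                ("YY", np.kron(V, V))]:
    ap = a @ U.conj()               # each row holds (U^dagger a)^T
    hits = np.abs(ap) > gamma
    single = hits.sum(axis=1) == 1  # condition on single detections
    outcome = eig[np.argmax(hits[single], axis=1)]
    print(name, outcome.mean())     # expect -1, +1, +1 respectively
```

With this bounded noise, every single detection lands on an eigenvector compatible with |φ_2⟩, so the estimated expectations reproduce the quantum values ⟨Z⊗Z⟩ = −1, ⟨X⊗X⟩ = +1, and ⟨Y⊗Y⟩ = +1 exactly.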
For example, let s = (√2 − 1)σ, γ = σ > 0, and w = σz/‖z‖. Now suppose we measure Z on both the first (left) particle and the second (right) particle. For the first particle, we will obtain either a projection onto the subspace Span{|↑↑⟩, |↑↓⟩}, if the result is +1, or Span{|↓↑⟩, |↓↓⟩}, if the result is −1. On the other hand, for the second particle we will obtain a projection onto either the subspace Span{|↑↑⟩, |↓↑⟩}, if one measures +1, or Span{|↑↓⟩, |↓↓⟩}, if one measures −1. Of course, it is also possible that neither measurement yields a detection, but let us suppose that they both do. Since the random amplitudes of outcomes |↑↑⟩ and |↓↓⟩ are |a_1| ≤ σ and |a_4| ≤ σ, respectively, these outcomes will never occur. Furthermore, since E[|a_2|] = E[|a_3|], the outcomes |↑↓⟩ and |↓↑⟩ will be equally likely. Since, by Theorem 3, no more than one detection is possible, we conclude that, if we obtain |↑⟩ (i.e., +1) for particle 1, then we must obtain |↓⟩ (i.e., −1) for particle 2, and vice versa. The detected outcomes are thus perfectly anti-correlated, as one might expect for an entangled pair.

Violations of Bell's Inequality

Another hallmark of entanglement is the possibility of violating Bell's inequality. More precisely, the CHSH inequality is given by [37]

|E[AB] + E[AB'] + E[A'B] − E[A'B']| ≤ 2,

where A, A', B, B' are random variables bounded by unity and E[·] is the expectation with respect to some probability measure P. It is important to note that, in order for this inequality to hold, the same probability measure P is used for all four expectation values. The analogous expressions for quantum mechanics replace E[AB], say, with ⟨AB⟩ = ⟨ψ|AB|ψ⟩ for a fixed quantum state |ψ⟩. In particular, if |ψ⟩ is the Bell state |φ_2⟩ and the four observables are chosen appropriately, the quantum prediction attains the Tsirelson bound of 2√2, in violation of the inequality.

How can we reconcile this result with the CHSH inequality? As we have noted, the theorem applies to expectations that are taken with respect to the same probability measure. If the probability measures differ for each pair of observables, then the inequality need no longer hold. Now, in the model proposed here, expectations are with respect to conditional probability distributions, conditioned, that is, on single detections. Let these be denoted E_1[AB], E_2[AB'], E_3[A'B], and E_4[A'B']; each is taken with respect to a different conditional measure. This means that Bell's theorem does not apply and violations of the inequality are possible. It remains to ask whether the corresponding combination, S_D, can ever be greater than 2.

A numerical study was performed to investigate this possibility. As before, a random realization of M = 2²⁰ independent noise vectors of the form w = σz/‖z‖, with σ = 1, was drawn. Using α = [0, 1, 1, 0]^T/√2, a set of M complex-valued vectors of the form a = sα + w, with s = (√2 − 1)σ, was created. A detection threshold of γ = σ was used. To this set the Hermitian conjugates of the unitary matrices U_1 = I⊗W₊, U_2 = I⊗W₋, U_3 = H⊗W₊, and U_4 = H⊗W₋ were applied separately to a for the observables AB, AB', A'B, and A'B', respectively. For each of the four measurements, the corresponding diagonal matrix of eigenvalues was used to assign the outcomes. Of the M realizations of a, typically only about 5% resulted in a detection, though it was a different 5% for each of the four observables.
(This should not be confused with the detector efficiency, which is a measure of coincidence rates and has no meaning in this context.) Let I_1 denote the set of values of a, a subset of all M realizations, that resulted in a detection for the observable AB. Define I_2, I_3, and I_4 similarly for AB', A'B, and A'B', respectively. Furthermore, let I_ij denote the subset of I_i for which the j-th component exceeded the threshold. Note that I_i1, I_i2, I_i3, and I_i4 are mutually exclusive, and their union is I_i. Finally, let n_ij denote the cardinality of I_ij and n_i the cardinality of I_i.

The results of the numerical simulation are summarized in Table 1, from which S_D was computed; the uncertainties of the four correlations were added to arrive at the final uncertainty in S_D. The resulting value is clearly greater than 2 and, in fact, greater than the Tsirelson bound of 2√2 = 2.8284, which is an upper bound on quantum violations of the CHSH inequality. A similar numerical study was performed using w = σz, with σ = 1, s = σ, and γ = 3σ. The results of this study are summarized in Table 2. Although the correlations are not as strong, we do find again that Bell's inequality is violated.

These results demonstrate that a (classical) deterministic model can violate Bell's inequality. Such violations are made possible by the fact that the model is contextual, and this contextuality is, in turn, a consequence of our conditioning on single-detection events. In some cases, these violations can be larger than those predicted by quantum mechanics. This is so despite the fact that the Born rule is not perfectly reproduced for all observables concerned.

Local Realism

The notion that quantum mechanics is at odds with local realism first arose in the context of the Einstein-Podolsky-Rosen (EPR) paradox [38] and was later developed by Bohm in terms of discrete states [39]. This paradox was recast by Bell [4] into an inequality that, he concluded, no local realistic theory could violate. A variation of this inequality was first tested by Clauser [40] and, in a later landmark experiment, by Aspect [41], with results in agreement with quantum predictions. This is generally regarded as conclusive evidence that quantum mechanics, and hence nature, is fundamentally nonlocal, despite the fact that it has been known for some time that violations of Bell's inequality are possible under the so-called detection loophole [42-44]. The analysis of the previous section reconfirms this through a specific example, but it did not address local realism, as the two observables were effectively measured as one. Although recent experiments, with detector efficiencies over 70%, claim to have closed the detection loophole for photons, these results do not address violations of Bell's inequality but, rather, a little-known inequality due to Eberhard, used to test local realism while accounting for detector inefficiencies [45-47].

Let us, then, consider an alternate scheme whereby the two observables are measured separately and independently. This corresponds to the usual sense of local realism in the context of Bell's inequality, namely, that the choice of A or A' and its outcome are not influenced by (nor do they influence) the choice of B or B' and its outcome. As is common in such discussions, we will describe this situation in terms of two familiar actors, Alice and Bob, who independently choose their respective measurement settings and record the outcomes of their own threshold detections. Now suppose Alice and Bob play this game many, many times. For each instance of sα + w they record which measurement they performed and whether they obtained "+1," "−1," or "NaN" as an outcome.
When the game is over, they compare notes. All items on the list in which either Alice or Bob recorded "NaN" are struck out. Next, the results are grouped into four categories, corresponding to the four measurement combinations. Finally, the correlation is computed for each group.

A numerical study was performed along these lines, with M = 2²⁰ realizations of sα + w for each of the four measurement choices. In each case, about 30% of the M realizations resulted in a single detection for either Alice or Bob, and about 10% of the M realizations resulted in a single coincidence detection. This corresponded to a detector efficiency of about η = 0.33 (the ratio of coincident to single detections), which is comparable to that of a good quantum optics experiment. Table 3 summarizes the results of this study. For example, the number of times Alice obtained ↑ (+1) and Bob obtained ↓ (−1) when she measured A and he measured B was 12069. The total number of coincidences for this pair of observables was 118251, resulting in a mean correlation of E_1[AB] = 0.5833. Computing these four mean correlations allows us to compute S_D, the value of which is larger than 2 and, so, violates Bell's inequality. As before, this was made possible by the fact that not all measurements resulted in a single coincidence detection for both Alice and Bob. This example, however, shows that local measurements made within a fully deterministic (i.e., classical) model can still violate Bell's inequality.

Interestingly, the violation is not as large as was found in Sect. 6.2. It seems, therefore, that separated measurements have weaker correlations than joint measurements. Furthermore, if one uses w = σz instead of w = σz/‖z‖, as was done above, no violation is observed. Introducing correlations in the initial noise term therefore seems to have the effect of strengthening the correlations in the measured outcomes. This suggests that a different choice of noise distribution could lead to a higher efficiency and an even larger violation.
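A compact version of such a study can be written down as follows. This is a sketch, not the authors' code: the diagonalizers of B and B' are obtained numerically rather than through the explicit W± matrices, and the CHSH sign placement below is the one these particular operators maximize quantum mechanically; the text's exact convention for S_D may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)

def diagonalizer(obs):
    """Unitary whose columns are the eigenvectors of a 2x2 observable,
    ordered so that column 0 carries eigenvalue +1."""
    vals, vecs = np.linalg.eigh(obs)
    return vecs[:, np.argsort(-vals)]

def local_outcomes(c, pair_a, pair_b, gamma):
    """Independent subspace-threshold detections for Alice and Bob from
    the rotated amplitudes c (shape (M, 4)); returns +/-1 or NaN."""
    na1 = np.linalg.norm(c[:, pair_a[0]], axis=1)
    na2 = np.linalg.norm(c[:, pair_a[1]], axis=1)
    nb1 = np.linalg.norm(c[:, pair_b[0]], axis=1)
    nb2 = np.linalg.norm(c[:, pair_b[1]], axis=1)
    A = np.where((na1 > gamma) & (na2 <= gamma), 1.0,
        np.where((na2 > gamma) & (na1 <= gamma), -1.0, np.nan))
    B = np.where((nb1 > gamma) & (nb2 <= gamma), 1.0,
        np.where((nb2 > gamma) & (nb1 <= gamma), -1.0, np.nan))
    return A, B

alpha = np.array([0, 1, 1, 0], dtype=complex) / np.sqrt(2)
sigma, gamma = 1.0, 1.0
s = (np.sqrt(2) - 1) * sigma
M = 200_000

UA = {"A": I2, "A'": H}                          # Alice: Z or X
UB = {"B": diagonalizer(-(X + Z) / np.sqrt(2)),  # Bob: -(X+Z)/sqrt(2)
      "B'": diagonalizer((X - Z) / np.sqrt(2))}  # or  (X-Z)/sqrt(2)

corr = {}
for an, Ua in UA.items():
    for bn, Ub in UB.items():
        z = rng.standard_normal((M, 4)) + 1j * rng.standard_normal((M, 4))
        a = s * alpha + sigma * z / np.linalg.norm(z, axis=1, keepdims=True)
        c = a @ np.kron(Ua, Ub).conj()           # rows hold ((Ua x Ub)^dag a)^T
        A, B = local_outcomes(c, ([0, 1], [2, 3]), ([0, 2], [1, 3]), gamma)
        keep = ~np.isnan(A) & ~np.isnan(B)       # coincidences only
        corr[an + bn] = np.mean(A[keep] * B[keep])

S_D = corr["AB"] + corr["AB'"] - corr["A'B"] + corr["A'B'"]
print(corr, S_D)   # the text reports values above 2 for this configuration
```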
Conclusion

This paper introduces a simple deterministic model of quantum systems and quantum measurement that is capable of reproducing many phenomena typically regarded as having no classical analogue. The model associates a given pure quantum state |ψ⟩ with a complex random vector a that is composed of a scaled version of the state's complex components, sα, and an additive complex noise term w. Although not addressed here, mixed states may be modeled similarly using an ensemble of pure states. A measurement is taken to be a single-threshold-crossing event (|a_n| > γ, |a_{n'}| ≤ γ for all n' ≠ n); all other events are ignored. Taking the noise to be either a vector of independent complex Gaussians (w = σz) or its normalized counterpart (w = σz/‖z‖), it was shown that, for suitable choices of s and γ relative to σ, the Born rule is recovered for states whose components are either zero or equal in magnitude. Measurements in other bases are performed by applying the corresponding unitary transformation to the vector a. Using these properties, one can use quantum state tomography to deduce the equivalent quantum state from the statistics of single-detection events. Partial measurements over a complete set of projection operators are defined similarly, with the amplitude of the projection onto each subspace being used for threshold detection. In the case that the projections are formed from a complete orthonormal basis, this reduces to the above prescription for full measurements.

This model has been shown to be capable of reproducing several aspects of quantum contextuality and entanglement, including local violations of Bell's inequality. Common among these varied phenomena is the dependence of the underlying statistics on different, noncommuting sets of observables. This dependence has previously been known to give rise to contextuality. Here, this contextuality arises from the fact that we have conditioned on single-threshold-crossing events for our definition of measurement. Although this model does exhibit many of the more interesting features of quantum phenomena, it is by no means complete. The Born rule is reproduced exactly in only a limited set of quantum states and is elsewhere only an approximation. Bounded Gaussian noise models appear to work better than unbounded ones, although both are capable of reproducing quantum phenomena. While something of a mathematical contrivance, it is hoped that this model may form the basis of a more physical theory of quantum measurement. Further extensions of this work would include improving the noise model and associating the random complex vector with a particular physical model of some classical system, such as a stochastic electromagnetic field varying over space and time. Similarly, the artifice of a threshold detector is but a crude representation of light-matter interactions, which should be modeled in greater detail for a full physical theory. Proceeding in this manner, one might hope that better agreement over a broader set of quantum phenomena may be achieved.

Appendix

For sα_i ≠ 0 there is no closed-form solution for F_i. As we are interested in the large-γ limit, however, we may consider the behavior of Q_1(a, b) for large b. In this regime, and for a ≠ 0, the modified Bessel function in the integrand may be approximated as [49]

I_0(ax) ≈ e^{ax}/√(2πax).

In this approximation, the Marcum Q-function becomes

Q_1(a, b) ≈ ∫_b^∞ √(x/(2πa)) e^{−(x−a)²/2} dx.

This still does not admit a closed-form solution, but it may be bounded. Define the lower and upper bounding functions

L(b) = √(b/(2πa)) ∫_b^∞ e^{−(x−a)²/2} dx and U(b) = (1/√(2πab)) ∫_b^∞ x e^{−(x−a)²/2} dx, (44)

and note that, for b sufficiently large, L(b) ≤ Q_1(a, b) ≤ U(b) under the approximation above. These two integrals do admit closed-form solutions, which may be expressed in terms of Gaussians and the complementary error function, erfc. The function erfc itself has the following asymptotic form: erfc(x) ∼ e^{−x²}/(x√π) as x → ∞. Thus, both bounds decay as e^{−(b−a)²/2}, up to algebraic factors. From this result, we can see that Q_1(a, b) ∼ e^{−(b−a)²/2} for large b. Thus, for large γ,

1 − F_i(γ) ∼ e^{−(γ−|sα_i|)²/2}. (51)

Now consider P_n(α, γ), the probability that |a_n| > γ is the only threshold crossing. Since the random variables a_1, …, a_N are independent, it may be written in terms of F_i(γ) as simply

P_n(α, γ) = [1 − F_n(γ)] ∏_{i≠n} F_i(γ).

Now suppose |sα_m| < |sα_k| and consider the ratio P_m(α, γ)/P_k(α, γ) for γ > 0. This will be given by

P_m(α, γ)/P_k(α, γ) = {[1 − F_m(γ)] F_k(γ)} / {[1 − F_k(γ)] F_m(γ)}.

For large γ, the ratio F_k(γ)/F_m(γ) is approximately unity and may be ignored. The remaining factors may be written in terms of Eq. (51), so that

P_m(α, γ)/P_k(α, γ) ∼ e^{[(γ−|sα_k|)² − (γ−|sα_m|)²]/2},

which tends to zero as γ → ∞, since λ_k = |sα_k|² > |sα_m|² = λ_m.
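The dominant exponential behavior derived above is easy to probe numerically. The sketch below (assuming SciPy; the helper `marcum_q1` is our own) evaluates Q_1(a, b) by quadrature, using the exponentially scaled Bessel function to keep the integrand stable, and compares it against e^{−(b−a)²/2}.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e

def marcum_q1(a, b):
    """Q1(a, b) = int_b^inf x exp(-(x^2+a^2)/2) I0(ax) dx, computed with
    the scaled Bessel function i0e(ax) = exp(-ax) I0(ax) for stability."""
    integrand = lambda x: x * np.exp(-(x - a) ** 2 / 2) * i0e(a * x)
    val, _ = quad(integrand, b, np.inf)
    return val

a = 0.5
for b in [4.0, 6.0, 8.0, 10.0]:
    q = marcum_q1(a, b)
    gaussian = np.exp(-(b - a) ** 2 / 2)
    # The ratio varies only algebraically in b, confirming that the
    # exponential factor exp(-(b-a)^2/2) dominates the decay.
    print(b, q, q / gaussian)
```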
Current status and future opportunities for serial crystallography at MAX IV Laboratory

The possibilities to perform serial synchrotron crystallography at BioMAX, the first macromolecular crystallography beamline at MAX IV Laboratory in Lund, Sweden, are described, together with case studies from the synchrotron X-ray crystallography user program.

Serial synchrotron X-ray crystallography

Continuous developments in X-ray free-electron lasers (XFELs), more specifically serial femtosecond crystallography (SFX), have enabled data collection from micrometre-sized crystals of several membrane and globular proteins, with decreased risk of radiation damage (Kang et al., 2015; Liu et al., 2013; Zhang et al., 2015; Nass, 2019). These developments have helped to study molecular dynamics of proteins down to femtosecond time resolution (Tenboer et al., 2014; Barends et al., 2015; Nango et al., 2016; Chapman et al., 2011) and have succeeded in solving several GPCR structures (Stauch & Cherezov, 2018). SFX is based on the concept that a complete diffraction dataset can be collected at room temperature from thousands of randomly oriented microcrystals, each exposed to a single very short (5-50 fs) and intense X-ray pulse generated by an XFEL (Martin-Garcia et al., 2016). SFX beam time is very precious due to the limited number of XFELs and the small number of macromolecular crystallography (MX) stations at each facility. The experiments are often complex and require substantial preparation, including optimization of experimental parameters. In contrast to the small number of XFELs, numerous synchrotron facilities are available worldwide. In recent works, data collection using the serial crystallography approach has been performed at third-generation synchrotrons (de la Mora et al., 2020; Foos et al., 2018; Owen et al., 2017; Roedig et al., 2016; Nogly et al., 2015). These facilities are more easily accessible for users, available at many more locations around the world, and complement experiments at XFELs (Oghbaey et al., 2016).

In earlier work, one of the main concerns about performing serial crystallography at synchrotron sources was that radiation damage (Nave & Garman, 2005) would prevent the collection of diffraction data from microcrystals, since the exposure time would not be short enough to outrun radiation damage, as is the case with femtosecond pulses at XFELs. It was later shown that it is possible to reduce radiation damage at room temperature also at synchrotron sources by increasing the X-ray beam intensity and decreasing the sample exposure time (Owen et al., 2012). Another difference is the peak brilliance of synchrotrons, which is several orders of magnitude lower than that of any hard X-ray FEL facility. Nevertheless, serial crystallography at synchrotron beamlines with high flux densities, focused microbeams and fast detectors has been successfully implemented for the development of millisecond (ms) time-scale data collection. Novel developments include different sample delivery techniques (Beyerlein et al., 2017; Coquelle et al., 2015; Martin-Garcia et al., 2017; Nogly et al., 2015; Owen et al., 2017; Roedig et al., 2016; Tsujino & Tomizaki, 2016) and new data analysis software (White et al., 2012; Barty et al., 2014; Sauter et al., 2013; Kabsch, 2014). Furthermore, interest in serial synchrotron X-ray crystallography is continually growing within the community, leading to the construction of new micro-focus beamlines dedicated to serial crystallography (e.g.
ID29, ESRF, Grenoble, France; MicroMAX, MAX IV, Lund, Sweden; TREXX, PETRA III, Hamburg, Germany).

MAX IV Laboratory and BioMAX

MAX IV Laboratory is the first fourth-generation synchrotron storage ring facility in operation worldwide. It utilizes a seven-bend achromat magnet lattice for its 3 GeV storage ring, which dramatically decreases the emittance of the electron beam (Martensson & Eriksson, 2018) and hence increases the brilliance of the X-ray beam produced. The facility consists of a 3 GeV storage ring, a 1.5 GeV ring and a full-energy linear accelerator (LINAC), which simultaneously functions as the injector for the two rings and for a short-pulse hard X-ray facility (Tavares et al., 2014) producing 100 fs hard X-ray pulses (Fig. 1). To date, ten beamlines at MAX IV are open for users (including BioMAX), and six others are currently (July 2020) in commissioning or construction mode.

BioMAX is the first macromolecular X-ray crystallography beamline at MAX IV Laboratory and began user operations in 2017. BioMAX is a 40 m-long energy-tunable beamline, which is fed by an 18 mm period length in-vacuum undulator (Hitachi, Japan). The focused X-ray beam has a cross-section of 20 µm × 5 µm full width at half-maximum (FWHM) at the sample position with a photon flux of 2 × 10¹³ photons s⁻¹ at 500 mA ring current. Alternatively, using aperture overfilling, it is possible to obtain a stable 5 µm × 5 µm beam at the sample position, which is more suitable for smaller crystals. The energy range of the beamline is 5 keV to 25 keV. The beamline optics consist of an Si(111) double-crystal monochromator and a pair of focusing mirrors in Kirkpatrick-Baez geometry (Ursby et al., 2020). The BioMAX experimental setup is suitable for X-ray crystallography with microcrystals and can resolve ultra-large unit cells. The experimental station consists of an MD3 micro-diffractometer (ARINAX, France), an EIGER-16M hybrid pixel detector (DECTRIS, Switzerland) and an ISARA sample changer (IRELEC, France). Data can be collected at 100 K using a Cryojet XL (Oxford Instruments, UK) as well as at room temperature using an HC-Lab humidity controller (ARINAX, France). The EIGER-16M detector can collect full-frame data at a frequency of up to 133 Hz. In the 4M region-of-interest mode, the detector can be operated at up to 750 Hz using only the central eight modules.

Sample environments for SSX at BioMAX

At BioMAX, room-temperature SSX data have been collected using either the high-viscosity extrusion (HVE) injector or different fixed-target supports (which can be combined with the HC-Lab humidity controller). The MD3 goniometer is equipped with a high-resolution on-axis microscope and a sub-micrometre x, y, z sample-centering stage. The standard omega-goniometer head is used to perform fixed-target SSX experiments [Fig. 2(a)]. This device can be easily exchanged with an HVE injector head, which is fully supported by the MD3 environment [Fig. 3(a)]. Fixed-target supports are translated quickly and accurately through the X-ray beam using U-turn raster-scan movements performed by the motion system of the MD3. The fixed-target approach helps to reduce sample consumption, which is particularly useful for proteins that cannot be crystallized in large quantities. Currently, BioMAX uses two different types of fixed-target supports: silicon nitride membranes (Silson, UK), with an area of either 1.5 mm × 1.5 mm or 2.5 mm × 2.5 mm and a thickness of 500 nm, which are clipped onto the goniometer base (Molecular Dimensions, UK) [Figs.
2(a) and 2(d)], and a novel solid support, XtalTool (Jena Bioscience, Germany) [Figs. 2(b) and 2(c)], which was also implemented and tested at the beamline (Feiler et al., 2019). The 21 µm-thick polyimide foil features 5 µm pores, allows direct crystal growth using 24-well plates [Fig. 2(c)] and facilitates direct mounting onto the beamline with a unique goniometer base [Fig. 2(b)]. Both membrane and XtalTool supports are made from highly X-ray transparent materials (silicon nitride and bio-inert polyimide, respectively) which minimize background scattering. Data collection parameters of the scan are set using the MXCuBE3 web application. Data are collected with a small degree of rotation for each line (∼1-10°) specified by the user via MXCuBE3. Both supports can be used to collect up to 40,000 raster points within minutes. Recently, the MD3 received a performance upgrade to further decrease the data collection time by increasing the scan rate to 60 points per second.

Another way to perform SSX experiments at BioMAX is to use an HVE injector (Fig. 3).

[Fig. 3. (a) HVE injector mounted and prepared for the experiment. The water line is connected to the HPLC pump and is regulated to control the sample extrusion rate. The gas line is connected to a pressurized helium gas cylinder which is controlled for a stable extrusion path. The blue fitting can be optionally connected to the thermostat to maintain a stable sample temperature inside the reservoir. (b) Schematic representation of the HVE injector (adapted from the original design by Bruce Doak). The sample shown in green is pushed by a plunger with a PTFE ball at the end through the silica capillary. (c) Schematic representation of the end of the injector nozzle. The sample travels through the silica capillary (yellow) and is exposed to the X-rays at the exit. The gas stream travels through the injector down to the borosilicate capillary to stabilize the extrusion.]

The injector was developed at the Max Planck Institute for Medical Research (Heidelberg, Germany) and adapted to be fully compatible with the environment of the BioMAX experimental station and to enable a long extrusion time, which is permitted by the relatively large working sample volume. The injector body is stainless steel with a maximum sample reservoir volume of 130 µl available for extrusion. This sample volume is sufficiently large to run for several hours, depending on the flow rate, which facilitates the collection of a high number of diffraction images in a continuous fashion. The extrusion pressure is generated by the hydraulic pressure produced by a high-performance liquid chromatography (HPLC) pump (Shimadzu, Japan, Solvent Delivery Unit LC-20AD), which forces water to push against a piston/plunger inside the injector [Fig. 3(b)] at a constant flow rate and pressure. The plunger has a larger diameter on the upper side (water) and a smaller diameter on the sample side, acting as a force multiplier for extrusion of the more viscous sample material. This piston directly pushes a polytetrafluoroethylene (PTFE) ball which acts as a secondary plunger for separation and sealing of the water from the crystal sample [Fig. 3(b)]. The ball continuously pushes the sample from the reservoir into a thin silica capillary [Fig. 3(c)]. The internal diameter of the silica capillary varies from 50 µm to 100 µm, depending on the expected crystal size. The sample-loading procedure uses Hamilton syringes with volumes of 100 µl or 250 µl, as described in the work by Botha et al. (2015).
At the end of the nozzle, helium gas at a maximum pressure of 50 psi flows through a borosilicate sheath capillary [Fig. 3(c)], which creates a laminar, stabilizing sheath gas flow around the extruded crystal sample. Both the water flow and the He gas flow are controlled remotely to obtain a constant, stable flow of sample from the HVE injector. To avoid contamination of the apertures and the surrounding experimental environment, a custom-made sample catcher was designed and 3D-printed. Data collection is started from the MXCuBE3 experimental control software and the crystal hit rate is calculated on the fly.

Preparation of microcrystals

Three proteins from three different user groups were crystallized in their home laboratories and data were afterwards collected at the BioMAX beamline as part of the collaborations for the development of SSX at BioMAX: the C-terminal carbohydrate recognition domain of galectin-3 (galectin-3C, provided by SARomics Biostructures), the ribonucleotide reductase R2 subunit from Saccharopolyspora erythraea (hereafter referred to as R2, from the Martin Högbom group at Stockholm University) and ba3-type cytochrome c oxidase (CcO) from Thermus thermophilus (a membrane protein from the Gisela Brändén group at the University of Gothenburg).

Galectin-3C crystals were grown directly on an XtalTool support (Jena Bioscience, Jena, Germany) using a 24-well VDX plate with 1 ml reservoir volume [Fig. 2(c)]. The drop was set up using 1 µl of 20 mg ml⁻¹ protein in buffer containing 10 mM sodium phosphate pH 7.4, 100 mM NaCl, 10 mM β-mercaptoethanol and 2 mM lactose, 0.25 µl of galectin-3C seed crystals in a stabilization solution containing 0.1 M Tris pH 7.5, 33% (w/v) PEG 4000, 0.1 M MgCl2 and 0.2 M NaSCN, and 0.75 µl of reservoir solution containing 0.1 M Tris pH 7.5, 30% PEG 4000, 0.05 M MgCl2, 0.2 M NaSCN and 8 mM β-mercaptoethanol. Crystals used for the silicon nitride membrane experiment were grown on the same XtalTool support and VDX plate using the same protein and seed solution with a slightly different reservoir composition [0.1 M Tris pH 7.5, 34% (w/v) PEG 4000, 0.2 M NaSCN and 8 mM β-mercaptoethanol]. Crystals were grown at 4 °C to a size of approximately 10-15 µm.

The R2 protein was crystallized at a concentration of 20 mg ml⁻¹ using the batch method with a crystallization buffer of 16% (w/v) polyethylene glycol (PEG) 3350 and 2% (v/v) tacsimate pH 4.5 in a 1:1 protein solution to crystallization buffer ratio (Lebrette et al., in preparation). Crystals appeared overnight at 21 °C in the size range 10-40 µm [Fig. 2(e)].

The membrane protein ba3-type cytochrome c oxidase from Thermus thermophilus was purified and crystallized at the University of Gothenburg as described in the work by Anderson et al. (2017, 2019). The lipidic cubic phase (LCP)-grown crystals, 5-20 µm in size, were used for data collection at BioMAX.

Data collection and data processing at BioMAX

Data analysis is one of the major challenges for serial crystallography since each diffraction image contains reflections measured with unknown partialities (Kirian et al., 2009). In addition, SSX data collection typically generates several terabytes of data, including many frames that contain no usable data and thus should be excluded to speed up the data processing. At BioMAX, initial hit identification from raw HDF5 files can be performed using Cheetah or NanoPeakCell (Coquelle et al., 2015). Sorted diffraction patterns can then be indexed, integrated and merged using CrystFEL (White et al., 2012, 2016).
In this article, all data were processed using NanoPeakCell and CrystFEL 0.7.0. Data collection and processing results are provided in Table 1. The resolution cutoff was determined based on conservative criteria of a signal-to-noise ratio >2, completeness (100%) and R_split (<60%) for the highest resolution shell. Data were converted to MTZ format and phased by molecular replacement using PHENIX (Adams et al., 2010) and structure models from the Protein Data Bank [PDB entries 6eym for galectin (Manzoni et al., 2018) and 3s8g for ba3-type CcO (Tiefenbrunn et al., 2011)]. A structure model for the R2 protein was an unpublished result of a previous data collection (Lebrette et al., in preparation). Model building was performed using Coot (Emsley & Cowtan, 2004). Structures were refined using phenix.refine (Adams et al., 2010).

All data collections were performed at a wavelength of 0.98 Å (12.7 keV) with a photon flux of 2 × 10¹² photons s⁻¹ at a 100 Hz frame rate, with a 20 µm × 5 µm beam size. The energy of the beamline monochromator was calibrated using absorption edges (Cu, Se, Zr). Tests using further absorption edges and powder diffraction measurements confirmed that the Bragg axis encoder values are accurate. The beam centre and detector distance were calibrated with LaB6 using PyFAI (Ashiotis et al., 2015). The average dose per crystal was calculated using RADDOSE-3D (Bury et al., 2017). The data collection using the HVE injector took several hours to obtain a complete dataset due to the low hit rate of 4.7% and some beamline issues. Data collections using fixed-target supports were performed with a small degree of rotation (5°) for each support and took less than 1 h to collect a full dataset, including mounting of the samples at the beamline. The average hit rate was 64%.

Galectin using XtalTool and silicon nitride membranes

SSX diffraction datasets from galectin-3C crystals were collected using XtalTool supports and silicon nitride membranes with U-turn continuous scanning and an average data collection speed of 100 µm s⁻¹. The crystal size did not exceed 20 µm (Fig. 4, left) and the X-ray exposure time per image was 0.05 s for both datasets. For the dataset obtained with XtalTool, an HC-Lab humidifier was set up at 98.5% relative humidity [Fig. 2(b)]. Full dataset collection required two XtalTool supports, with 2 µl droplets on each. In total, 70,537 diffraction images were collected with a 70.1% hit rate, from which 15,555 could be indexed in the space group P212121 (PDB entry 6y4c). The final structure was refined to 1.7 Å resolution with R_work and R_free values of 17.2% and 20.1%, respectively (Table 1). The resulting experimental electron density maps were of excellent quality, revealing the presence of lactose (ligand). Fig. 4 (right) demonstrates the high quality of the 2mFo − DFc electron density map near the ligand. The structure is very similar to the room-temperature complex determined by joint neutron/X-ray crystallography (1.7 Å for neutrons, 1.1 Å for X-rays; PDB entry 6eym). The root-mean-square deviation (r.m.s.d.) over 120 matched Cα positions between the two structures is 0.47 Å, which is higher than the usual r.m.s.d. (<0.2 Å) between galectin-3C structures determined at 100 K. The core of the structure appears most similar, while shifts of up to 0.77 Å are seen in surface loops.
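For bookkeeping, the hit and indexing statistics quoted above combine as follows; this is a simple Python sketch using the reported galectin-3C XtalTool numbers, not part of the published processing pipeline.

```python
# Statistics for the galectin-3C XtalTool dataset reported above.
images = 70537        # diffraction images collected
hit_rate = 0.701      # fraction of images showing crystal diffraction
indexed = 15555       # images indexed in space group P212121

hits = images * hit_rate
print(f"hits:          {hits:.0f}")
print(f"indexed/hits:  {indexed / hits:.1%}")     # indexing rate
print(f"indexed/total: {indexed / images:.1%}")   # overall yield
```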
The reason for this is unknown, but it could be coupled to a systematic reduction in cell dimensions in both SSX datasets, compared with the values usually observed for galectin-3C (Table 1).

[Table 1: Data collection and refinement statistics for the proteins used. Values in parentheses are for the highest resolution shell.]

The second dataset from galectin-3C crystals was collected using silicon nitride membranes. A volume of 0.4 µl of the sample was distributed directly onto the membrane and covered with a second membrane to create a so-called sandwich. No additional sealing was needed to maintain the humidity of the sample, as it can be kept at the beamline for up to 30 min before crystal dehydration at the chip edges can be observed. Subsequently, the sample was clipped onto the goniometer base and mounted at the beamline [Fig. 2(d)]. Two sandwiches were needed to obtain a complete dataset, with a total sample volume of 0.8 µl. The final structure was refined to 1.7 Å resolution with R_work and R_free values of 19.2% and 20.8%, respectively (PDB entry 6y78). The results of data collection and data processing are presented in Table 1. On the whole, the data collected from XtalTool were of slightly higher quality, allowing, for example, the identification of more well-ordered water molecules (117 versus 72) at the same resolution.

R2 protein crystals on silicon nitride membranes

Data were collected using U-turn raster-scanning applied to silicon nitride membranes. The sample preparation and deposition onto the membranes were the same as for the galectin crystals. The exposure time per crystal was 0.05 s. A total of 68,061 images were collected. The hit rate was 58.1%, from which 22,224 images were indexed in the space group P41212 (Table 1). The final structure was refined to 2.4 Å resolution with R_work and R_free values of 17.3% and 22.1%, respectively (PDB entry 6y2n). The refined 2m|Fo| − D|Fc| electron density map is shown in Fig. 5(c).

Cytochrome c oxidase using the HVE injector

The viscosity of the LCP phase was fine-tuned by mixing the ba3-type CcO LCP crystals with monoolein in Hamilton syringes at a crystal-to-monoolein ratio of 80:20. The flow rate of crystals in LCP was 1.18 µl min⁻¹. The calculated exposure time per crystal was 0.028 s. A total of 214,170 images were collected with a hit rate of 4.7%, from which 6513 could be indexed in the space group C121. Several cycles of real-space and rigid-body refinement were performed at 3.6 Å resolution. The structure was not refined further due to the limited resolution and the fact that a room-temperature high-resolution SFX structure from this crystal form has already been solved (PDB entry 5ndc). This experiment was mainly a proof of principle of SSX data collection at BioMAX using an HVE injector. The low hit rate in this experiment was related to several beamline and software issues which have now been solved.

Discussion and conclusions

The desire to perform serial crystallography at room temperature at synchrotrons and XFELs is growing continuously, which is connected with the expanding interest of the structural biology community and the current research questions that are to be answered. The resulting demand from the user community has led to the development of beamlines dedicated to SSX. Here we have presented an initial integration of two SSX environments at BioMAX in order to collect diffraction data from micrometre-sized crystals of membrane and globular proteins at room temperature.
The environments were different fixed-target scanning supports and an HVE injector for LCP extrusion experiments.

[Fig. 4. Left: microcrystals of galectin. Right: 2mFo − DFc simulated annealing omit map, omitting only the lactose residue, contoured at 1σ.]

The HVE injector setup is complex, and sample preparation is arduous and time-consuming; however, it can provide data to successfully solve the structure of a membrane protein grown in LCP. Data collection using fixed-target supports was comparatively fast and simple and required only a few microlitres of protein crystal solution, which could be grown directly on the support (Opara et al., 2017). It was easier to observe microcrystals on the silicon nitride membranes than on XtalTool due to the thinner material and the absence of pores for liquid handling. Currently, many fixed-target supports can be mounted via standard SPINE supports, and soon these supports will be standardized and commercialized to perform SSX at synchrotrons. Fixed targets are a relatively cheap, fast and practical way to collect complete datasets at room temperature with minimal sample. Further advances in automated SSX data collection and data processing could help current and future users to perform serial crystallography both at synchrotrons and XFELs.

All data collections presented in this work are a result of collaborations with Swedish research groups at Gothenburg and Stockholm universities and a pharmaceutical contract research company (SARomics Biostructures). These groups are interested in applying these techniques frequently and would like to be able to utilize SSX at synchrotron beamlines as a new investigative technique. The current plans for the BioMAX SSX program also comprise the implementation of another serial crystallography environment, Roadrunner (Roedig et al., 2017), a high-performance fixed-target scanning stage for large microfabricated crystal-holding chips. In future work the BioMAX team plan to implement SSX environments at a new beamline dedicated to serial crystallography, MicroMAX. The combination of a beam size down to 1 µm × 1 µm, together with a wide range of sample delivery systems and the possibility to perform time-resolved studies with microsecond resolution, will ensure that MicroMAX will help to expand the toolbox of the Scandinavian and European SSX user community. The construction of MicroMAX has started recently and the first users will be hosted in 2022.
Treatment and Valorization of Palm Oil Mill Effluent through Production of Food Grade Yeast Biomass

Palm oil mill effluent (POME) is high-strength wastewater derived from the processing of palm fruit. It is generated in large quantities in all oil-palm-producing nations, where it is a strong pollutant that is amenable to microbial degradation, being rich in organic carbon, nitrogen, and minerals. Valorization and treatment of POME with seven yeast isolates was studied under scalable conditions by using POME to produce value-added yeast biomass. POME was used as the sole source of carbon and nitrogen, and the fermentation was carried out at 150 rpm and 28 ± 2 °C using an inoculum size of 1 mL of 10⁶ cells. Yeasts were isolated from POME, a POME dump site, and palm wine. The POME had a chemical oxygen demand (COD) of 114.8 gL⁻¹, total solids of 76 gL⁻¹, total suspended solids (TSS) of 44 gL⁻¹ and total lipids of 35.80 gL⁻¹. Raw POME supported accumulation of 4.42 gL⁻¹ dry yeast with an amino acid content comparable or superior to the FAO/WHO standard for feed-use SCP. Peak COD reduction (83%) was achieved with the highest biomass accumulation in 96 h using Saccharomyces sp. POME can be used as a carbon source with little or no supplementation to achieve waste-to-value by producing feed-grade yeast with a reduction in pollution potential.

Introduction

Palm oil is the most widely consumed vegetable oil and accounts for about 33% of total vegetable oil production in the world [1]. Global palm oil production has been dominated by Indonesia and Malaysia and, to a lesser extent, by Colombia, Thailand, and Nigeria. Combined, these countries produce over 93% of global palm oil output [2,3]. Nigeria is currently the fifth leading producer with over 930,000 metric tons annually [4]. The palm oil industry in Nigeria is a major agro-enterprise, especially in the southern parts where palm trees grow in the wild and in plantations [5]. About 80% of the palm oil industry in Nigeria is dominated by smallholders who typically use manual equipment and, to a lesser extent, semi-mechanized processors for processing palm fruit [2,3]. Processing of palm fruit by both methods employs large volumes of water. This results in the production of copious volumes of the liquid waste known as palm oil mill effluent (POME) [6,7]. Estimates of the volume of POME produced per litre of palm oil extracted from palm fruits are few and variable. This is occasioned by several variables, including differences in the efficiencies of the different processes and the nature of the fruit. Manual processes appear to be the least efficient in terms of the volume of POME generated, with in excess of 10 litres of POME being generated for each litre of oil produced in some instances. Ohimain and Izah [8] reported that 72-80 litres of water are required to process one ton of fresh fruit bunches in the semi-mechanized process. Of this, 72-75% ends up as POME.
POME is a high-strength pollutant with a low pH (due to the organic and free fatty acids arising from partial degradation of palm fruits before processing). The characteristics of POME depend on the quality of the raw material and the production processes [9], but it typically contains large amounts of total solids (40,500-75,000 mgL⁻¹) and oil and grease (2,000-8,300 mgL⁻¹). Its suspended solids content is in the range of 18,000-47,000 mgL⁻¹ and total nitrogen in the range of 400-800 mgL⁻¹, while the ash content is between 3,000 and 42,000 mgL⁻¹ [10]. POME has very high biochemical oxygen demand (BOD) and chemical oxygen demand (COD), in the ranges of 25,000-54,000 mgL⁻¹ and 50,000 to >100,000 mgL⁻¹, respectively. These values are 100 times more than those of municipal sewage [11-14].

No matter how effective the method of processing, POME discharged from a mill is objectionable and could pollute streams, rivers, or surrounding land [15]. When discharged into water bodies it turns the water brown, smelly, and slimy and causes deoxygenation [16], and it may kill fishes and other aquatic organisms, thereby denying humans access to good water for domestic use [17]. Also, application of untreated POME on soil alters its physico-chemical properties, causing undesirable decreases in pH and increases in salinity. Unfortunately, most oil palm processors still discharge raw effluent directly into nearby streams and rivers and on land [18].

Several treatment technologies have been applied to POME with varying levels of success. These include ponding [19], aerobic digestion [20], anaerobic digestion [21], and physicochemical treatments [22,23]. These methods seek to dispose of POME without any consideration of the current trend in the management of high-strength agro-food wastes, which seeks to reprocess them through value addition [24]. The management of agro-food wastes has evolved from treatment for disposal to beneficial utilization of resources through valorization [24-26]. Valorization is a concept that seeks the recovery of value-added products from wastes and effluents through the application of cost-effective technologies. Since POME currently has little or no recycling value, it constitutes an environmental hazard, undergoing slow (acidic) degradation in pits from which it emits a strong, foul odour, causing air pollution and contamination of ground and surface waters and agricultural land, besides vector attraction. Attempts have been made to achieve valorization of POME through production of microbial biomass, enzymes, energy, and biochemicals [27-38].
Cost-effective disposal of POME hinges on a sustainable and economic method of treatment. Where this can be coupled to some value addition, it will become an incentive resulting in a no-loss waste treatment, or a reduction in the overall cost, with the achievement of a clean environment being a bonus. The possibility of using POME for biomass production has been suggested and, in this regard, room for innovation exists in both the choice of microbe and the fermentation environment. It is envisaged that development of a process for production of yeast biomass from POME will reduce the time and cost in the cycle-to-value and thereby create opportunities for the valorization of a waste stream, the accumulation of and environmental concern for which can only increase with increasing global production of palm oil. This work was carried out to study the treatment and valorization of POME through production of yeast biomass and to assess the quality of the biomass in terms of amino acid content.

Collection of POME. Raw POME was obtained from a local palm oil processor employing a manual process in Ejuona Obukpa Community in Nsukka Local Government Area of Enugu State, Nigeria. Samples were collected in clean containers and transported to the laboratory. The fresh POME was dispensed into 500 mL containers and stored frozen in a deep freezer when not used immediately.

Isolation and Identification of POME-Utilising Yeasts. Yeasts were isolated from stale POME, from soil taken from a POME dump site, and from palm wine (all collected from the same community as above). Stale POME, soil, and palm wine samples were collected in sterile sample bottles and taken immediately to the laboratory for analysis. Samples were processed by 1:10 serial dilution in sterile half-strength peptone water and plated by the pour plate method on Sabouraud Dextrose Agar (SDA) medium (Oxoid, England) supplemented with chloramphenicol (50 µg mL⁻¹ final concentration). Plates were incubated at room temperature (30 ± 2 °C) for up to 72 hours, or until yeast colonies appeared if earlier. Yeast colonies that appeared on the media were purified on fresh plates of SDA. Pure colonies of representative isolates were stored on slants of SDA at 4 °C until needed.

Identification of Yeast Isolates. All the yeast isolates were identified using conventional microbiological methods based on their cultural, morphological, and physiological/biochemical characteristics as described by Kurtzman and Fell [39]. Isolates were also tested for their ability to grow at 37 °C and 40 °C. Microscopy was carried out with a drop of lactophenol cotton blue stain at the ×40 objective.
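For reference, viable counts from serial dilutions of the kind described above follow the usual plate-count arithmetic. The Python sketch below is a generic helper with hypothetical colony counts, not data from this study.

```python
def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    """Viable count from a pour or spread plate at a given dilution."""
    return colonies / (volume_plated_ml * dilution_factor)

# e.g. 42 colonies on the 10^-4 dilution plate from a 1 mL pour plate:
print(cfu_per_ml(42, 1e-4, 1.0))   # 420,000 CFU per mL of original sample
```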
Each isolate was tested for the ability to ferment different sugars with the production of acid and/or gas. Fermentation basal medium containing bromothymol blue indicator was prepared with 2% sugars (except for raffinose, which was used at 4%). The basal medium was prepared by dissolving 4.5 g of powdered yeast extract, 7.5 g of peptone, and 26.7 mg of bromothymol blue indicator in 1 litre of distilled water. A 6 mL volume of the medium was dispensed into fermentation tubes containing inverted Durham tubes for detection of gas production. The tubes were capped and sterilized at 121°C for 10 minutes. Suspensions of the isolated yeast cells were made from 24 h cultures using sterile distilled water. Each tube containing the test medium was inoculated with 0.1 mL of the yeast suspension (5 × 10⁶ cells mL−1) and incubated at 25°C for two weeks. The tubes were shaken and inspected at frequent intervals for accumulation of gas in the Durham tube and change of colour of the indicator. A positive result was indicated by a colour change from deep green to yellow for acid production and by accumulation of gas in the Durham tube for gas production. The results were scored according to the scheme of Kurtzman and Fell [39].

Carbon Assimilation Test. Each isolate was tested for the ability to assimilate different sugars using Yeast Nitrogen Base (YNB) agar slants containing 2% sugar (except for raffinose, which was 4%). The media were prepared as described by Kurtzman and Fell [39]. Slants of the test medium were inoculated with 0.1 mL suspensions of the test isolates. The tests were incubated at 25°C in a cool incubator and inspected after 1, 2, and 3 weeks. A positive result was indicated by growth of the yeasts after some days; heavy growth signified strong assimilation [39].

Nitrogen Assimilation Test. This test was carried out as for the carbon assimilation test but using Yeast Carbon Base (YCB) agar containing different nitrogen sources. Each isolate was tested for the ability to utilize different nitrogen compounds as the sole source of nitrogen. The two nitrogen sources used were potassium nitrate (test nitrogen source) and ammonium sulphate (positive control). YCB was prepared as described by Kurtzman and Fell [39] and inoculated as above. The tests were incubated at 25°C in a cool incubator and inspected for growth after 1, 2, and 3 weeks. A positive result was indicated by growth of the yeasts after some days; heavy growth signified strong assimilation.

Growth at Different Temperatures. This test was carried out using glucose-peptone-yeast extract (GPYE) broth. The broth was prepared by dissolving 20 g of glucose, 10 g of peptone, and 5 g of yeast extract in 1 litre of distilled water. A 6 mL volume of this medium was dispensed into test tubes, plugged with cotton wool, and sterilized at 121°C for 15 minutes. Each of the tubes was inoculated with 0.1 mL of test yeast suspension and incubated at 25°C, 30°C, 37°C, and 40°C for one week. The tubes were inspected each day for growth.

Preparation of Yeast Inoculum for Growth in POME.
Inoculum was prepared by adding 5 mL of sterile 0.85% normal saline onto 2-day-old SDA slant cultures in universal bottles and gently rubbing with a sterile wire loop to dislodge the yeast growth. The yeast concentration was adjusted to approximately 5.0 × 10⁶ cells mL−1 using a haemocytometer, and fresh inoculum was prepared from a 24 hour culture for each parameter [40]. Prior to fermentation, POME was allowed to thaw completely at room temperature, boiled, and filtered while still hot, first by simple surface filtration through a double-layered muslin cloth to remove coarse solids and then by fine filtration through Whatman Number 41 filter paper.

Determination of Physicochemical Parameters of POME. The total solids (TS) and total suspended solids (TSS) were determined as described in standard methods [41]. The chemical oxygen demand (COD) was determined using the modified titrimetric/dichromate oxidation method [41]. The total dissolved solids (TDS) were determined using a Hanna portable TDS meter. Total ash was determined by ignition of the total solids in a muffle furnace at 550°C (Gallenkamp, size 3, England). Total lipid was determined by extraction with chloroform/methanol (2:1), according to the method of Folch et al. [42]. Total nitrogen was determined using the Kjeldahl method [43]. The pH of POME, as well as the change in pH of the fermenting medium, was measured using a Hanna portable pH meter (HANNA HI 198107, USA). The organic carbon was determined by the dichromate oxidation method [44].

Screening of Isolates for Biomass Production and COD Reduction. Isolates were screened to determine their ability to degrade the COD of POME and to produce biomass under culture conditions. Erlenmeyer flasks of 250 mL capacity containing 25 mL of POME were set up in duplicate and sterilized at 121°C for 15 min. When cooled, each flask was inoculated with 1 mL of a suspension of the test isolate and incubated at 28 ± 2°C on a rotary shaker incubator (model KS 4000 I Control, IKA-Werke GmbH, Germany) at 150 rpm for 7 days. Sterile POME medium was included as a control. Changes in the COD and physicochemical parameters of POME were monitored throughout the fermentation period as above.

At the end of culture, the biomass was harvested by centrifuging the culture medium at 5000 ×g for 20 minutes using a Hettich II centrifuge. The pellets were washed several times with cold distilled water, transferred onto preweighed filter paper, and dried in an oven at 80°C to constant weight (weighed on a Sartorius model AGBS 323S balance, Sartorius AG, Germany). This was followed by filtration of the supernatant through a preweighed membrane filter of 0.45 µm pore size. The filters were washed several times with distilled water and then dried at 80°C for at least 16 h to constant weight. The biomass concentration was estimated from the membrane and filter paper weight differences with and without the dried sample. The determinations were performed in duplicate. The supernatant was used for determination of residual COD.

Amino Acid Analysis of Yeast Biomass. Amino acid analysis was done in accordance with the technical method of the AOAC [41]. Exactly 1 g of the dried yeast biomass was placed in conventional hydrolysis tubes. To each tube, 100 µL of 6 mol L−1 HCl containing 30 mL phenol and 10 mL 2-mercaptoethanol (6 mol L−1 HPME) were added, and the tubes were evacuated, sealed, and hydrolyzed at 110°C for 22 hours. After hydrolysis, the HCl was evaporated in a vacuum bottle heated to 60°C.
The residues were dissolved in ultrapure (HPLC-grade) water containing ethylenediaminetetraacetic acid (EDTA). The hydrolyzed samples were derivatised (45 minutes per sample) on a Waters 616/626 HPLC by reacting the free amino acids, under basic conditions, with phenylisothiocyanate (PITC) to obtain phenylthiocarbamyl (PTC) amino acid derivatives, which were analyzed by reverse-phase HPLC (Waters 616/626 LC, USA). A set of standard solutions of the amino acids was prepared from Pierce reference standard H (1000 µmol) in autosampler cups and also derivatised. These standards (0.0, 0.5, 1.0, 1.5, and 2.0 µmol) were used to generate a calibration file from which the amino acid contents of the samples were determined. After derivatisation, the methanol solution (1.5 N) containing the PTC-amino acids was transferred to a narrow-bore Waters 616/626 HPLC system for separation.

The separation and quantitation of the PTC-amino acids were done on a reverse-phase C18 silica column, and the PTC chromophores were detected at 254 nm. The column temperature was 60°C and elution took 30 minutes. The buffer system used for separation was 140 mM sodium acetate (pH 5.50) as buffer A and 80% acetonitrile as buffer B. The program was run using a gradient of buffer A and buffer B, ending at 55% buffer B. The chromatographic peaks were identified and quantified using a Dionex Chromeleon data analysis system attached to the HPLC system, with a calibration file prepared from the average retention times (minutes) and peak areas (in AU) of the amino acids in 5 standard runs. Amino acid content was expressed as g 100 g−1 of protein and compared with the FAO/WHO [45] reference.

Statistical Analysis of the Experimental Data. All results were expressed based on duplicate determinations. Data collected were subjected to analysis of variance (ANOVA) using GenStat Discovery Edition 4; means were compared using the least significant difference at 95% confidence and separated using Duncan's new multiple range test.

Identification of POME-Utilizing and Associated Yeasts. Seven representative isolates of POME-utilizing yeasts were selected, on the basis of the amount of biomass accumulated, from a total of more than 100 initial isolates and identified. Two of the representative isolates, identified as Candida sp. TMCC and Pichia sp. SP5, were obtained from the POME dump site; another two, identified as Candida spp. TWDC and TWC, were from stale POME; while the remaining three, identified as Saccharomyces spp. L31 and V42 and Pichia sp. V45, were from palm wine. Table 1 shows the morphological characteristics of the isolated yeasts, while Table 2 shows their physiological characteristics.

Biomass Accumulation and COD Degradation by Isolates. As shown in Figure 1, the isolates differed from one another in their ability to reduce the COD content of POME in the course of fermentation for biomass production and reduction of pollution potential. The isolates achieved peak reduction of POME COD content after 72, 96, 120, or 144 hours. Isolate L31 achieved the highest reduction in COD (83% reduction after 96 hours), while SP5 achieved the least COD reduction (73% reduction after 72 hours). During the fermentation, the pH of the POME medium rose for all the isolates from the initial acidic value to alkaline levels (Figure 2). The yield of biomass by the isolates is shown in Figure 3.
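The screening figures quoted here and in the biomass results that follow come from two simple calculations: percentage COD reduction from the initial and residual COD, and biomass concentration from the filter weight difference described in the Methods. The sketch below illustrates both; the numerical inputs are hypothetical placeholders chosen only to reproduce magnitudes of the kind reported, not the study's raw data.

```python
# Minimal sketch of the screening calculations (hypothetical values,
# not the study's raw measurements).

def cod_reduction_percent(cod_initial_mg_l: float, cod_residual_mg_l: float) -> float:
    """Percent COD removed relative to the initial load."""
    return 100.0 * (cod_initial_mg_l - cod_residual_mg_l) / cod_initial_mg_l

def biomass_g_per_l(dried_weight_g: float, tare_weight_g: float, volume_ml: float) -> float:
    """Biomass concentration from the filter weight difference
    with and without the dried sample."""
    return (dried_weight_g - tare_weight_g) / (volume_ml / 1000.0)

# An initial COD of 114,800 mg/L reduced to a residual of 19,516 mg/L
# (placeholder value) corresponds to ~83% removal, the peak reported for L31.
print(cod_reduction_percent(114800, 19516))   # ~83.0
# 0.1105 g of dry matter recovered from a 25 mL culture is 4.42 g/L.
print(biomass_g_per_l(0.1205, 0.0100, 25.0))  # 4.42
```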
Biomass accumulation mirrored the trend of COD removal, increasing with duration of fermentation to a peak corresponding to the time of maximum COD removal, after which it decreased. A slight exception was isolate TWDC, which showed increasing biomass production throughout the fermentation. Isolate L31 showed increasing biomass production up to 96 hours, when a maximum of 4.42 g L−1 (the highest in the process) was obtained, after which it declined, while isolate SP5 achieved maximum production of 3.14 g L−1 at 72 hours. Isolates TMCC and TWC yielded their maximum biomass of 1.94 g L−1 (the least in the process) and 2.22 g L−1, respectively, after 120 hours, while TWDC gave a maximum biomass yield of 3.06 g L−1 at 168 hours. Isolate V42 gave a maximum biomass yield of 3.02 g L−1 after 72 hours, while isolate V45 yielded its maximum biomass of 3.46 g L−1 after 144 hours.

Proximate Composition and Amino Acid Profile of Microbial Biomass. Following the biomass yield screen, isolate L31 was selected for production of biomass for proximate and amino acid analysis because it produced the most biomass of all the isolates and was also very effective in reducing the pollution potential (COD) of the waste. The biomass produced in POME had a moisture content of 8.95% and a dry matter content of 91.05%. The crude protein content of the biomass was approximately 27%, while the fat, carbohydrate, crude fibre, and ash contents were 0.83%, 35.45%, 4.70%, and 6.12%, respectively, for biomass harvested at peak biomass content (96 hours of culture). Table 4 shows the amino acid profile of the biomass protein. A comparison of the amino acid composition of the biomass with the FAO standard for SCP protein intended for use in animal feeding indicates that the biomass was comparable or superior to the recommended standard with respect to a number of amino acids, while being inferior with respect to only a few others.

Discussion

In this study, seven yeasts isolated from palm wine (L31, V42, and V45), stale POME (TWDC and TWC), and a POME dump site (TMCC and SP5) were screened for their efficiency in reducing the COD content (pollution potential) of POME while producing biomass. Based on their colonial and physiological characteristics [39], the isolates were identified as species of Saccharomyces (L31 and V42), Pichia (V45 and SP5), and Candida (TMCC, TWC, and TWDC).
The proximate composition of the POME used in this study indicates a high total solids content that is considerably different from those used in some previous studies [7,11-13]. This is probably due to differences in the nature of the mill and the process operation. The POME used in this study was obtained from local palm oil extractors who use a manual method of oil extraction. This method is inefficient in recapturing all the oil in the palm fruits. The manual method employs a smaller volume of water (relative to the mechanized process) to achieve oil extraction, which allows for a considerable reduction in the volume of wastewater generated. However, the reduction in effluent volume results in a more concentrated effluent with a higher content of organic matter. The use of a limited volume of water for oil extraction, particularly in the upland area of this study, is understandable since water is not available in unlimited quantities outside the wet season. It is also possible that the fruits had undergone some deterioration prior to processing, thereby increasing the polluting potential of the effluent. Delay in processing of produce results in degradation of oils/lipids into low-molecular-weight organic acids that remain in solution and so are not extracted with the oil. This leads to a low pH and a high content of soluble solids in the ensuing POME, raising its pollution potential, and is common during the fruit season, when many manual smallholder processors are regularly overwhelmed.

The POME used in this study had a pH of 3.9. This value falls in the range of 3.5 to 5.0 reported by other authors [46-48]. It is, however, lower than the guideline value (pH 6-9) for effluent from vegetable oil processing [49]. The total lipid content (3580 mg L−1) was somewhat below the range of 4000-6000 mg L−1 reported by Ma [14] and much higher than the regulatory discharge limit for oil and grease in POME (50 mg L−1 [7]), a value that is low and considerably challenging to achieve in most POME management procedures. COD values reported for POME vary considerably, from less than 42,000 mg L−1 to over 112,000 mg L−1 [7,11-13]. The COD value recorded in this study (114,800 mg L−1) was high but compares closely with figures that have been reported. It is, however, more than two orders of magnitude higher than the IFC [49] guideline value of 250 mg L−1 for effluent from vegetable oil processing, and so the effluent requires significant treatment prior to disposal. It is also consistent with the high TS values obtained and may be the result of incomplete extraction of lipid, as reported by Oswal et al. [50].

The yeast isolates obtained in this study varied considerably in their ability to grow in POME and accumulate biomass. They also showed variations in their ability to reduce the COD load of the POME. Of the seven isolates screened, Saccharomyces sp. L31 showed the highest COD reduction of 83% in 96 h. This is comparable to the COD reduction value (82%) reported by Alam et al. [51] when Aspergillus niger (A103) was used to produce citric acid in POME. If reduction of COD were the sole aim of the process, then the use of Saccharomyces sp. L31 would be considered more economical, even though the product of this process retained a higher COD than the regulatory requirement for final disposal. Barker and Worgan [52] reported 77% COD reduction in POME used for mycoprotein production after 2 days, while Oswal et al.
[50] obtained a higher COD reduction (96%) in a culture of Yarrowia lipolytica NCIM 3589. Using Rhodotorula glutinis, Saenge et al. [53], on the other hand, obtained only 66% COD removal from POME during lipid and carotenoid production. Our process also compares with, and is slightly better in COD reduction than, the 80% reported during anaerobic digestion of POME [21]. Growth of Saccharomyces sp. L31 in POME was accompanied by a rise in pH to 8.0. This is consistent with the report of Wu et al. [35], who attributed the rise to utilization of organic acids; it would make adjustment of pH during mass propagation of yeasts in this medium unnecessary.

The peak biomass accumulation of 4.42 g L−1 obtained by Saccharomyces sp. L31 shows this isolate to be the best for biomass production in POME (Figure 3). This result is consistent with data reported by Nwuche et al. [54], who obtained a maximum biomass of 4.0 g L−1 when A. niger ATCC 9642 was cultured in POME. Saenge et al. [53], on the other hand, obtained 7.5 g L−1 of biomass when the oleaginous Rhodotorula glutinis TISTR was used to produce biomass, lipid, and carotenoid in POME. The total protein content of Saccharomyces sp. L31 at peak biomass production was 27%, while the fat, carbohydrate, crude fibre, and ash contents were 0.83%, 35.45%, 4.70%, and 6.12% of dry biomass, respectively, making the resulting biomass quite suitable for feed use.

A comparison of the amino acid composition of the biomass with the FAO standard for SCP protein intended for use in animal feeding (Table 4) indicates that the biomass was comparable or superior to the recommended standard with respect to a number of amino acids, while being inferior with respect to only a few others, mostly nonessential amino acids. The amino acid profile of this isolate also shows that it is superior to that of a thermophilic Bacillus stearothermophilus isolated from thermophilic digestion of agricultural residue [24]. The superior content of essential amino acids in this biomass relative to FAO standards for feed is interesting considering the status of the substrate as a waste and the possibility of using this process to produce valuable biomass while achieving economical waste disposal. As culture conditions affect the amino acid profiles of microbial biomass [24], these may be manipulated in optimization processes to improve the content of desired amino acids. It is remarkable that biomass produced in this waste is of good nutritional quality. In many rural Nigerian villages, stale POME with visible yeast and fungal growth is traditionally fed to swine, either directly or in compounded feed.

Conclusion

This work has shown that POME can be used to produce yeast biomass by fermentation without costly pretreatment or nutrient supplementation. The biomass production process worked as a cost-effective biological treatment for the reduction of pollution in this industrial wastewater, requiring simple and feasible methods that can be operated in industry while minimizing by-products. In a small-scale process, the cultured POME may also be fed directly to animals as enriched feed or used in compounding feed. This would remove the cost that may be associated with harvesting the biomass.

Figure 1: Changes in COD of POME with fermentation time.
Figure 2: Changes in pH of POME with fermentation time.
Figure 3: Biomass yield of the different isolates in POME.

Table 1: Morphological and microscopic characteristics of the isolated yeasts.
Table 2: Physiological characteristics of the isolated yeasts.
Table 4: Amino acid profile of isolate L31 cultured in POME, in g/100 g of protein. *Phe and Tyr were taken together.
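The FAO/WHO comparison summarized in Table 4 amounts to a per-amino-acid chemical score against a reference pattern. A minimal sketch of that comparison follows; the amino acid names and all numerical values are illustrative placeholders, not the contents of Table 4 or the FAO/WHO reference.

```python
# Chemical score of each amino acid relative to a reference pattern
# (all values in g per 100 g of protein; the figures below are placeholders,
# not the values from Table 4 or the FAO/WHO reference).

measured = {"lysine": 6.8, "leucine": 7.9, "threonine": 4.2}   # hypothetical biomass values
reference = {"lysine": 5.8, "leucine": 6.6, "threonine": 3.4}  # hypothetical reference pattern

scores = {aa: 100.0 * measured[aa] / reference[aa] for aa in measured}
limiting = min(scores, key=scores.get)  # amino acid with the lowest score

for aa, score in sorted(scores.items()):
    flag = "meets/exceeds reference" if score >= 100 else "below reference"
    print(f"{aa}: {score:.0f}% ({flag})")
print("limiting amino acid:", limiting)
```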
Transcriptional profiling differences for articular cartilage and repair tissue in equine joint surface lesions

Background: Full-thickness articular cartilage lesions that reach to the subchondral bone yet are restricted to the chondral compartment usually fill with a fibrocartilage-like repair tissue which is structurally and biomechanically compromised relative to normal articular cartilage. The objective of this study was to evaluate transcriptional differences between chondrocytes of normal articular cartilage and repair tissue cells four months post-microfracture. Methods: Bilateral 1-cm² full-thickness defects were made in the articular surface of both distal femurs of four adult horses, followed by subchondral microfracture. Four months postoperatively, repair tissue from the lesion site and grossly normal articular cartilage from within the same femorotibial joint were collected. Total RNA was isolated from the tissue samples, linearly amplified, and applied to a 9,413-probe-set equine-specific cDNA microarray. Eight paired comparisons matched by limb and horse were made with a dye-swap experimental design, with validation by histological analyses and quantitative real-time polymerase chain reaction (RT-qPCR). Results: Statistical analyses revealed 3,327 (35.3%) differentially expressed probe sets. Expression of biomarkers typically associated with normal articular cartilage and fibrocartilage repair tissue corroborates earlier studies. Other changes in gene expression previously unassociated with cartilage repair were also revealed and validated by RT-qPCR. Conclusion: The magnitude of divergence in transcriptional profiles between normal chondrocytes and the cells that populate repair tissue reveals substantial functional differences between these two cell populations. At the four-month postoperative time point, the relative deficiency within repair tissue of gene transcripts which typically define articular cartilage indicates that while cells occupying the lesion might be of mesenchymal origin, they have not recapitulated differentiation to the chondrogenic phenotype of normal articular chondrocytes.

Background

Full-thickness articular cartilage defects that penetrate into the subchondral bone undergo a repair process characterized by the in-growth of fibrous tissue within the lesion [1,2]. Initially, blood from the bone marrow below the articular cartilage fills the defect and forms a fibrin clot [2,3]. Subsequent to vascularization of the defect is the proliferation of granulation tissue over the first 10 days as the clot scleroses [2,3]. The granulation tissue is rich in type I collagen fibers, and the cells within the tissue have been traced to a mesenchymal origin [2,4-6]. Within full-thickness defects generated by arthrotomy and controlled drilling into the subchondral bone, not more than 30% of total collagen content is type II four months after surgery [4]. Type I fibrillar collagen predominates in the extracellular matrix of repair tissue in most full-thickness defects without graft or transplant [4,7]. Decreases in proteoglycan content also occur, rendering the repair tissue more rigid and unable to fully protect the joint from biomechanical stress [1,4,5,7]. In addition, morphological differences exist between the cells in repair tissue and the chondrocytes of skeletally mature articular cartilage [3]. Repair tissue anchors incompletely to the surrounding articular cartilage matrix adjacent to the lesion [2].
While repair tissue seems to be primarily derived from stromal cells of mesenchymal origin, the functional similarity of these cells to articular chondrocytes is not completely described. Repair tissue is often called fibrocartilage or hyaline-like repair cartilage, though it does not necessarily contain an actual chondrocyte cell population. The engineering of repair tissue cells is widely investigated in an attempt to improve the chondral surface within injured joints. Techniques like microfracture have been developed in an effort to facilitate healing of the articular surface with cells from the subchondral bone [6,8-13]. There is also a focus on manipulating repair tissue, implanted stem cells, and even autologous chondrocyte transplants in an effort to generate more hyaline-like phenotypes [14,15]. Assessment of the similarity of repair tissue to cartilage is typically done by monitoring established matrix biomarkers, such as type I collagen, type II collagen, and aggrecan core protein. Even with the introduction of growth factors or scaffolds of maintenance proteins associated with the chondrocyte phenotype, the repair tissue is still unable to completely restore the structural and biomechanical integrity of the joint surface, consistent with the limited capacity of articular cartilage to heal.

In this study, we used an equine cDNA microarray containing 9,413 probe sets to compare gene expression profiles of grossly normal articular cartilage and repair tissue occupying medial femoral condyle full-thickness defects in the femorotibial joints of skeletally mature horses four months after a microfracture surgical procedure. The hypothesis tested was that the cells occupying repair tissue four months postoperatively are not identical to articular chondrocytes. Consequently, we would expect the transcriptomes of cells from each tissue to have substantial differences, especially with respect to the expression of cartilage matrix biomarkers.

Animals

Articular cartilage defects were made in the axial weight-bearing portion of the medial femoral condyles of four adult Quarterhorses (2-3 years) as previously described by Frisbie et al. [6,16], within the guidelines set forth in an Institutional Animal Care and Use Committee-approved protocol at Colorado State University. Briefly, 1-cm² full-thickness articular cartilage lesions were arthroscopically made bilaterally, which included the removal of the calcified cartilage layer. This was followed by microfracture penetration of the subchondral bone to create perforations with an approximate spacing of 2-3 mm and a depth of 3 mm uniformly within the defect site. The horses were maintained for four months in box stalls (3.65 m × 3.65 m) with controlled hand walking. After euthanasia, repair tissue from the lesions and full-thickness grossly normal articular cartilage from within the same joint were collected from each stifle, rinsed in sterile phosphate-buffered saline, snap-frozen in liquid nitrogen, and stored at -80°C.

Histology

Samples were also collected and prepared for histological analyses as described in Frisbie et al. [17]. Briefly, repair tissue and adjacent cartilage were trimmed with a standard bone saw and an Exakt bone saw with a diamond-chip blade (Exakt Technologies, Oklahoma City, OK, USA), placed into histological cassettes, and then fixed in 10% neutral buffered formalin for a minimum of 2 days.
Samples were then placed in 0.1% EDTA/3% HCl decalcification solution (Thermo Scientific Richard-Allan Decalcifying Solution, cat. no. 8340), which was replenished every three days until the specimens were decalcified. Specimens were embedded in paraffin and sectioned at 5 μm. Sections were stained with hematoxylin and eosin or with Safranin-O.

Total RNA Isolation and Linear Amplification

Normal articular cartilage was reduced to powder with a BioPulverizer (BioSpec Products, Bartlesville, OK, USA) under liquid nitrogen, and total RNA was isolated as described by MacLeod et al. [18,19]. Briefly, total RNA was isolated in a buffer of 4 M guanidinium isothiocyanate, 0.1 M Tris-HCl, 25 mM EDTA (pH 7.5) with 1% (v/v) 2-mercaptoethanol, followed by differential alcohol and salt precipitations and then final purification using QIAGEN RNeasy columns [18-21]. Repair tissue samples were minimal in size (10-50 mg). Repair tissue was placed in QIAzol reagent (QIAGEN), cut into 1-mm³ slices, and total RNA was isolated using the QIAGEN RNeasy Lipid Tissue Mini Kit. RNA quantification and quality assessments were performed with a NanoDrop ND-1000 and a BioAnalyzer 2100 (Agilent, Eukaryotic Total RNA Nano Series II). Total RNA (1 μg) from each tissue sample received one round of linear amplification primed with oligo-dT (Invitrogen SuperScript RNA Amplification System) [22,23]. Two micrograms of amplified RNA were then used as a template to create fluorescent dye-coupled single-stranded aminoallyl-cDNA probes (Invitrogen SuperScript Indirect cDNA Labeling System; Molecular Probes Alexa Fluor 555 and 647 Reactive Dyes). For each sample, probes were coupled to both Alexa Fluor dyes individually so that a dye-swap comparison could be made.

Transcriptional Profiling

Microarray slides were printed with clones selected from a cDNA library generated using equine articular cartilage mRNA from a 15-month-old Thoroughbred [24]. Microarray slides were pre-hybridized in 20% formamide, 5× Denhardt's, 6× SSC, 0.1% SDS, and 25 μg/mL tRNA for 45 minutes. The slides were then washed five times in deionized water, once in isopropanol, and spun dry at 700g for 3 minutes [25]. Two labeled cDNA samples, one repair tissue and the other normal cartilage from the same joint, were combined with 1× hybridization buffer (Ambion 1× Slide Hybridization Buffer #1, cat. no. 8801), incubated for 2 minutes at 95°C, and then applied to the slide under a glass LifterSlip for 48 hours at 42°C. All hybridizations were performed in duplicate with a dye swap to eliminate possible dye bias [26]. Sequential post-hybridization washes were each for 5 minutes as follows: 1× SSC, 0.2% SDS at 42°C; 0.1× SSC, 0.2% SDS at room temperature; and twice with 0.1× SSC at room temperature. The slides were then spun dry under argon gas at 700g for 3 minutes. Each slide was coated once in DyeSaver 2 (Genisphere) and allowed to dry for 10 minutes. Slides were scanned using a GenePix 4100A scanner, and spot intensities were computed using GENEPIX 6.0 image analysis software (Axon Instruments/Molecular Devices).

Statistics and Analysis

Raw mean intensity data for each probe-set pair from all the microarray scans were statistically analyzed by planned linear contrast [27] using SAS (SAS Institute, Cary, NC). One-sample t-tests were performed, followed by a Benjamini-Hochberg correction based on a false discovery rate of 2.2% for probe sets with a p-value < 0.01 [28].
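The Benjamini-Hochberg step applied here follows the standard procedure: the m p-values are sorted, each is compared against (rank/m)·q, and all tests up to the largest rank satisfying the bound are retained. A minimal sketch is shown below; it illustrates the procedure only and is not the authors' SAS code.

```python
import numpy as np

def benjamini_hochberg(p_values: np.ndarray, q: float) -> np.ndarray:
    """Return a boolean mask of tests retained at false discovery rate q."""
    m = len(p_values)
    order = np.argsort(p_values)
    ranked = p_values[order]
    # Largest rank k with p_(k) <= (k/m) * q; keep all tests up to k.
    below = ranked <= (np.arange(1, m + 1) / m) * q
    keep = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        keep[order[: k + 1]] = True
    return keep

# Toy example with five p-values at an FDR of 2.2%, as used in the study.
p = np.array([0.0004, 0.009, 0.012, 0.20, 0.65])
print(benjamini_hochberg(p, q=0.022))  # the first three tests are retained
```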
Differences in the transcriptional profiling data between repair tissue and articular cartilage were analyzed based on a linear model formulation with fixed tissue and dye effects and random chip, horse, and leg effects. A dye-swap design was used, so for each available leg, the sum of the two measurements corresponding to the articular cartilage was subtracted from the sum of the measurements corresponding to the repair tissue. Each intensity measurement (I) is modeled statistically as

I = d + c + t + h + l + E,

with the components designated as follows: d, additive effects due to dye (red or green); c, chip effect (1-16); t, tissue (repair or normal); h, horse (1-4); l, leg (left or right); E, statistical error. The dye-swap design yields two outcomes per location and tissue type. Thus, for each of the eight locations corresponding to a particular leg of a particular horse, a new aggregated quantity is calculated that takes into account all measurements related to this location. Under this model, the aggregate for a given leg, D = (I_repair,red + I_repair,green) - (I_normal,red + I_normal,green), cancels the dye and chip terms and leaves twice the tissue difference plus statistical error. The only remaining systematic effect therefore represents the expressional difference between tissues, with remaining statistical error. Since there were 4 horses with 2 femorotibial joints per horse, eight such tissue differences were evaluated.

Gene identity was assigned for each microarray ID from an internal annotation database through selection of either the best RNA RefSeq BLAST (E < 1 × 10−7) or Protein RefSeq BLAST (E < 1 × 10−5) result [29-31]. Gene ontology (GO) annotation was derived from batch queries of the Database for Annotation, Visualization, and Integrated Discovery (DAVID) bioinformatics tool or manually through individual NCBI Entrez Gene queries [32,33]. The human ortholog of each gene was predicted and used for the determination of overrepresentation of GO categories via the Expression Analysis Systematic Explorer (EASE) standalone software [32,34]. Statistical data, fold change quantities, and GO annotations were managed within an Excel spreadsheet (Microsoft, Redmond, WA). Microarray data are available at the NCBI Gene Expression Omnibus (GEO) under Series Accession GSE11760.

Validation of Microarray Hybridization Results with RT-qPCR

Differential expression for selected genes was validated using quantitative polymerase chain reactions (RT-qPCR). Briefly, total RNA was reverse-transcribed into cDNA using an oligo-dT primer with the Promega Reverse Transcription System (Promega, cat. no. A3500). Quantitative "real-time" PCR (7500 Sequence Detection and 7900 HT Fast Real-Time PCR Systems, Applied Biosystems, Foster City, CA) was performed using TaqMan Gene Expression Master Mix (Applied Biosystems) and intron-spanning primer/probe sets (Assays-by-Design, Applied Biosystems) designed from equine genomic sequence data (Ensembl, http://www.ensembl.org/Equus_caballus/index.html; UCSC Genome Browser, http://genome.ucsc.edu). Beta-2-microglobulin (B2M) and large ribosomal protein P0 (RPLP0) were selected as endogenous control transcripts because they showed the greatest stability for the sample set as defined by the geNorm reference gene application (data not shown) [35]. Steady-state levels of mRNA encoding type I procollagen (COL1A2), type II procollagen (COL2A1), cartilage oligomeric matrix protein (COMP), dermatopontin (DPT), fibroblast activation protein (FAP), and tenascin-C (TNC) were selected for validation (Table 1). Amplification efficiencies were measured by the default fit option of LinRegPCR while maintaining the cycle threshold as a data point within the measured regression line [36].
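Efficiency-corrected relative quantification of the kind described here is commonly computed as E^(−Cq) for each reaction and then normalized to the geometric mean of the reference-gene quantities, in the spirit of geNorm. The sketch below is a reconstruction under that assumption, with hypothetical Cq and efficiency values; it is not the study's actual workbook or applet.

```python
import numpy as np

def relative_quantity(cq: float, efficiency: float) -> float:
    """Efficiency-corrected quantity for one reaction.

    `efficiency` is the per-cycle amplification factor (2.0 = perfect
    doubling), e.g. a mean group efficiency from LinRegPCR.
    """
    return efficiency ** (-cq)

def normalized_expression(target_cq, target_eff, ref_cqs, ref_effs):
    """Target quantity divided by the geometric mean of reference-gene
    quantities (a geNorm-style normalization factor)."""
    target_q = relative_quantity(target_cq, target_eff)
    ref_qs = [relative_quantity(c, e) for c, e in zip(ref_cqs, ref_effs)]
    norm_factor = float(np.exp(np.mean(np.log(ref_qs))))
    return target_q / norm_factor

# Hypothetical example: a target gene against two references (e.g. B2M, RPLP0).
repair = normalized_expression(28.5, 1.92, ref_cqs=[21.0, 19.5], ref_effs=[1.95, 1.90])
normal = normalized_expression(24.4, 1.92, ref_cqs=[21.1, 19.6], ref_effs=[1.95, 1.90])
print(f"fold difference (normal/repair): {normal / repair:.1f}")
```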
Since the amplification efficiencies of some of the genes were determined by paired t-test to differ between sample groups (normal and repair), mean group efficiencies were utilized for adjustment of the results for each gene (data not shown) [37]. Relative expression levels of target genes were normalized to the relative quantities of the endogenous control genes using geometric averaging with the geNorm VBA applet [35]. For each gene of interest, the mean fold change was determined by first finding the difference in transcript abundance between normal and repair samples from each leg and then determining the mean difference amongst all legs for that gene. Statistical analysis of RT-qPCR results was performed with a general linear model (GLM) strategy using SPSS software with consideration for the variables of horse, leg, and tissue. A one-tailed (α = 0.05) test of the hypothesis that the microarray data are valid was considered for the tissue effect, which is the significance reported. By the same GLM analyses, no significant "within horse" leg effect was demonstrated for any of the genes validated (data not shown).

Repair tissue histology

Tissue samples were harvested four months after surgical induction of full-thickness cartilage lesions with microfracture. Gross examination revealed repair tissue within each lesion that was dimpled in appearance and not completely level with the articular surface (Figure 1A, D). Histologically, repair tissue generally had a homogeneous matrix architecture with elongated, flattened cells (Figure 1G) that interfaced with the surrounding articular cartilage (Figure 1H). Varying levels of repair tissue were noted, with some lesions having a poor response (Figure 1A-C), while others appeared to respond better (Figure 1D-F). Safranin-O staining demonstrated that the repair tissue was generally proteoglycan-deficient relative to the adjacent normal articular cartilage surrounding the lesions (Figure 1C), but there was variation, with some repair tissue samples showing evidence of proteoglycan content (Figure 1F).

Overall level of differential gene expression

A total of 4,269 probe sets (45.4%) were differentially expressed (p < 0.01; Figure 2A). A clear transcriptome divergence was evident between the two tissue types (Figure 2B). After Benjamini-Hochberg correction, 3,327 (35.3%) significant probe sets remained (Figure 3). Of these probe sets, 1,454 demonstrated greater transcript abundance in repair tissue relative to grossly normal articular cartilage, and 1,873 demonstrated greater transcript abundance in normal articular cartilage relative to repair tissue. Assessment of probe set annotation produced 2,688 significant probe sets with known gene identities. Correcting for redundancy, where different probe sets hybridize to the same mRNA transcript, yielded 2,101 unique gene symbols. Of these, 858 gene symbols were present at higher steady-state levels in repair tissue and are designated repair > normal, while 1,243 of the gene symbols are designated normal > repair (Figure 3).

Ontological differences

When significant probes are organized according to molecular function ontology with a fold change threshold of two, ontological categories of differentially abundant transcripts emerge by EASE analyses (Tables 2 and 3).
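The EASE score referenced above, and defined in the notes to Tables 2 and 3, is conventionally computed as a conservative variant of Fisher's exact test in which one gene is removed from the category's list hits before testing. The sketch below illustrates that construction with hypothetical counts; it is not the authors' pipeline.

```python
from scipy.stats import fisher_exact

def ease_score(list_hits: int, list_total: int,
               population_hits: int, population_total: int) -> float:
    """Conservative overrepresentation score: Fisher's exact test
    recomputed after removing one gene from the category's list hits,
    which penalizes categories supported by very few genes."""
    jack_hits = max(list_hits - 1, 0)
    table = [
        [jack_hits, population_hits - jack_hits],
        [list_total - jack_hits,
         population_total - population_hits - (list_total - jack_hits)],
    ]
    _, p = fisher_exact(table, alternative="greater")
    return p

# Hypothetical counts in the format of the table notes: 20 list hits out of
# 1,243 differentially expressed genes, against 60 population hits out of 9,413.
print(ease_score(20, 1243, 60, 9413))
```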
Table 1: Primer nucleotide sequences used in RT-qPCR assays for genes described in the study.
Gene name | Gene symbol | Forward primer | Reverse primer
Procollagen, type I, alpha 2 | COL1A2 | 5'-TGAGACTTAGCCACCCAGAGT-3' | 5'-GCATCCATAGTGCATCCTTGATTAGG-3'
Procollagen, type II, alpha 1 | COL2A1 | 5'-CTGGCTTCAAAGGCGAACAAG-3' | 5'-GCACCTCTTTTGCCTTCTTCAC-3'
Cartilage oligomeric matrix protein | COMP

Categories statistically overrepresented in normal articular cartilage include skeletal development and glycosaminoglycan binding, which contain many of the conventional cartilage biomarkers (Table 2). Protease and endopeptidase inhibitor activities were also overrepresented for normal cartilage; these categories contain matrix metalloproteinase inhibitors essential to the maintenance of cartilage matrix (Table 2). In contrast, the categories statistically overrepresented in repair tissue (immune response, cytoskeletal and cell component organization, and histogenesis) are indicative of wound healing or tissue remodeling (Table 3). One shared overrepresented category is calcium ion binding, which is involved with the protein-folding conformation of matrix molecules and chondrocyte differentiation, among other processes.

Note for Tables 2 and 3: Redundant or similar categories were removed. When present, the number of redundant or similar categories is indicated in parentheses within the listed "gene category." "GO system" represents the major gene ontological system (i.e., cellular component, biological process, or molecular function). "Gene category" shows a descriptive term shared by a group of genes. "List hits" are the numbers of differentially abundant transcripts that belong to the gene category. "List total" is the number of differentially expressed genes within the corresponding cellular component, biological process, or molecular function system. "Population hits" represent the total number of genes found on the microarray possessing that specific gene category annotation (e.g., extracellular, skeletal development, etc.). "Population total" represents the total number of genes found on the microarray possessing that ontological system annotation. "EASE score" is a measure of overrepresentation that scales the results of a statistical analysis (Fisher's exact test) by biasing against categories supported by few genes.

Figure 1: Gross and histological assessment of repair tissue.

Figure 3: Flowchart of cDNA microarray data analysis. Expression data were initially analyzed by planned linear contrast with a Benjamini-Hochberg correction, yielding 35.3% of the probe sets on the microarray demonstrating significant differential gene expression (p < 0.01). Of these, 43.7% and 56.3% of the probe sets represented increased relative transcript abundance for repair tissue > normal cartilage and normal cartilage > repair tissue, respectively. When annotation is applied to these probe sets, 2,688 (80.8%) have known gene symbols, with a redundancy across this subset of probe sets equal to 21.8%, yielding 2,101 unique gene symbols.

Individual genes

Expression differences for genes encoding biomarkers typically associated with normal articular cartilage and repair tissue corroborate previously reported findings (Figure 4). That is, transcript abundance for collagen types II and IX was greater in normal articular cartilage relative to repair tissue. The expression of type I collagen and several type-I-associated collagen types (V, VI, XII, XV) was up-regulated in repair tissue relative to normal articular cartilage. Moreover, transcript abundance was greater in normal cartilage for proteoglycans, an associated sulfotransferase, non-collagenous adhesion proteins, and skeletal development biomarkers linked to cartilage development. In contrast, transcripts were up-regulated in repair tissue for tenascin-C and matrix metalloproteinase 3, which are both associated with wound healing. Other genes with limited or no established functional annotation in chondrocytes were also differentially expressed between normal articular cartilage and repair tissue. Within the angiogenesis category, transcripts encoding vascular endothelial growth factor and the serpin peptidase inhibitor SERPINE1 had higher steady-state levels in repair tissue (Figure 5). Also represented were genes involved in cell adhesion, cell communication, skeletal development, and carbohydrate and proteoglycan metabolism.
"Gene category" shows a descriptive term shared by a group of genes. "List hits" are the numbers of differentially abundant transcripts that belong to the gene category. "List total" is the number of differentially expressed genes within the corresponding cellular component, biological process, or molecular function system. "Population hits" represent the total number of genes found on the microarray possessing that specific gene category annotation (e.g., extracellular, skeletal development, etc.). "Population Total" represents the total number of genes found on the microarray possessing that ontological system annotation. "EASE score" is a measure of overrepresentation that scales the results of a statistical analysis (Fisher's exact test) by biasing against categories supported by few genes. etal development, and carbohydrate and proteoglycan metabolism. Of note was increased transcript abundance in repair tissue for proliferative cell markers like fibroblast activation protein (FAP) and stathmin-1, as well as the inflammatory mediator cyclooxygenase-2 (COX2) ( Figure 5). Quantitative PCR validation Steady-state transcript abundance was measured for endogenous controls beta-2-microglobulin (B2M) and large ribosomal protein P0 (RPLP0), as well as target genes type I procollagen alpha-2 chain (COL1A2), type II procollagen alpha 1 chain (COL2A1), cartilage oligomeric matrix protein (COMP), dermatopontin (DPT), fibroblast activation protein (FAP), and tenascin-C (TNC). Relative quantification of target transcripts revealed significant increases in mRNA abundance for COL2A1 and COMP in normal articular cartilage ( Figure 6). Fold change differences were similar or slightly greater than what was measured by microarray profiles. Increased COL1A2, DPT, and FAP transcript abundance for repair tissue was also validated by RT-qPCR ( Figure 6). Transcript abundance for TNC in repair tissue demonstrated an increasing trend by RT-qPCR, though significance was not achieved ( Figure 6). Discussion Histological analyses and transcriptional studies identified clear differences between chondrocytes of grossly normal articular cartilage and the cells present in repair tissue of full-thickness articular lesions following a microfracture surgical procedure. At four months post-surgery, Redundant or similar categories were removed. When present, the number of redundant or similar categories are indicated in parentheses within the listed "gene category." "GO System" represents the major gene ontological system (i.e., cellular component, biological process, or molecular function). "Gene category" shows a descriptive term shared by a group of genes. "List hits" are the numbers of differentially abundant transcripts that belong to the gene category. "List total" is the number of differentially expressed genes within the corresponding cellular component, biological process, or molecular function system. "Population hits" represent the total number of genes found on the microarray possessing that specific gene category annotation (e.g., extracellular, skeletal development, etc.). "Population Total" represents the total number of genes found on the microarray possessing that ontological system annotation. "EASE score" is a measure of overrepresentation that scales the results of a statistical analysis (Fisher's exact test) by biasing against categories supported by few genes. repair tissue is morphologically discernible from normal cartilage. 
Type I collagen transcripts are detected in the repair tissue, and much of the repair tissue is proteoglycan-deficient. Moreover, a substantial transcriptional divergence is readily apparent between the two cell types even at a genomic level. Analyses of overrepresented gene categories for differentially expressed transcripts demonstrate broad functional differences. Conventional biomarker transcripts used to characterize a chondrocytic phenotype indicated that the repair tissues in this sample set were quite different from the adjacent articular cartilage in the same joint. Increased transcript levels for types II and IX collagen were found in the articular cartilage (Figure 4). Quantitative RT-PCR indicated a 16.1-fold expression difference for COL2A1 in articular cartilage relative to repair tissue (p = 0.0090, Figure 6B). In contrast, the abundance of transcripts associated with type I collagen-rich fibrous tissues was greater in repair tissue (Figure 4). Steady-state mRNA levels for COL1A2 were 77.1-fold higher in repair tissue relative to articular cartilage (p = 0.0485, Figure 6A). These transcriptional data directly support published biochemical results which demonstrated differing collagen type I:type II ratios for articular repair tissue and perilesional articular cartilage through detection of cleaved peptides [4,7]. Differences in the magnitude of fold changes between microarray and RT-qPCR results can be explained by the differences in dynamic range of detection between hybridization-based assays and amplification-based assays [38].

Notable differences in proteoglycans between repair tissue and the surrounding articular cartilage were observed in transcript levels and by Safranin-O staining (Figures 1, 4). Proteoglycan differences have also been noted through Safranin-O staining of articular repair tissue in the distal femur of the New Zealand White rabbit [2] and in the distal radial carpal bone of the horse [7], relative to the proteoglycan content of perilesional articular cartilage in both studies.

Divergent characteristics between articular cartilage and repair tissue extend to transcripts of other matrix proteins. Transcripts encoding cartilage macromolecules believed to play a role in cell-cell and cell-matrix interactions were significantly less abundant in repair tissue relative to normal articular cartilage (Figure 4). Such transcripts included chondroadherin (CHAD), cartilage intermediate layer protein (CILP), cartilage oligomeric matrix protein (COMP), and fibronectin (FN1) [39-44]. COMP interacts with type II collagen for fibrillogenesis and has been shown to bind to the chondroitin sulfate glycosaminoglycans associated with aggrecan. COMP expression is initially up-regulated in chondrocytes exposed to increased dynamic compression [45,46], in those from the superficial zone of fibrillated OA cartilage [47], and in chondrocytes adjacent to an OA lesion [44]; however, transcript levels in repair tissue at the four-month time point were 30.5-fold lower (p = 0.0010, Figure 6C). Matrix molecules like CILP, which are present in normal cartilage, slow the responsiveness of chondrocytes to insulin-like growth factor 1 (IGF-1) as a result of accumulation of calcium pyrophosphate dihydrate [43]. Thus, CILP might inhibit the ability of the surrounding chondrocytes to expand and occupy the lesion [43].
Transcript abundance for hypoxia-inducible transcription factor 2α (HIF-2α) was up-regulated in normal cartilage (Figure 5); HIF-2α has been found to support the cartilage phenotype through SOX9 (SRY-box 9) induction of matrix genes [48]. In contrast, tenascin-C (TNC), which is typically found in provisional matrices throughout development and wound healing [49-52], demonstrated greater transcript levels in repair tissue by microarray analyses (Figure 4). While statistical significance was not confirmed by RT-qPCR (p = 0.0665, Figure 6F), upregulation of TNC has been noted in early stages of osteoarthritis and also during the repair process of many other tissues through in situ hybridization, immunohistochemistry, and knockout mouse studies [49,53-55]. Based on its function in the expansion of provisional matrices, it is likely that analyses of earlier time points would have detected greater divergence of TNC mRNA levels.

Figure 4: Microarray transcriptional profiles of articular cartilage and repair tissue molecules/biomarkers. Bars represent median fold changes of differentially expressed genes (p < 0.01) previously associated with cartilage and fibrocartilage. Gene symbols are organized by functional annotation and are listed with the number of representative probe sets in parentheses.

Within the repair tissue, differential expression was noted for transcripts encoding proteins involved in wound healing and matrix synthesis. Shapiro et al. have shown that stromal cells of mesenchymal origin from the subchondral bone enter into the wound with the blood which fills the full-thickness lesion [2]. With angiogenic cues such as vascular endothelial growth factor (Figure 5) and vascularization from the subchondral bone, these cells proliferate within the granulation tissue to occupy the lesion [2,56,57]. Increased transcript abundance of fibroblast activation protein (FAP) is consistent with the proliferative cellular response reported by Shapiro et al. (Figure 5) [2]. RT-qPCR indicated a 2.6-fold relative expression difference for FAP in repair tissue four months post-microfracture relative to articular cartilage (p = 0.0415, Figure 6E). Assessment of FAP expression at additional time points during the repair process would further delineate its importance. Steady-state levels of dermatopontin (DPT) were also elevated in repair tissue (Figures 5, 6D). Fibrillogenesis of type I collagen is accelerated by DPT, which has previously been localized in skin fibroblasts, skeletal muscle, heart, lung, bone, and chondrocytes that de-differentiate while expanding in monolayer culture [58-60]. DPT interacts synergistically with decorin and transforming growth factor-β1 to bolster collagen synthesis and accelerate fibrillogenesis, to the point of decreasing fibril diameters in proliferating skin fibroblast cultures [59]. A wound healing process is further indicated by the 7.5-fold up-regulation of cyclooxygenase 2 (COX2, Figure 5), an inflammatory modulator shown to be essential in the repair of bone fractures and growth plate lesions [61].
Figure 6: RT-qPCR validation of differential gene expression. Significant up-regulation of COL2A1 (B) and COMP (C) gene expression in normal articular cartilage relative to repair tissue was confirmed. Higher steady-state levels of COL1A2 (A), DPT (D), and FAP (E) transcripts in repair tissue were also confirmed. Gene expression of TNC (F) demonstrates a trend of increased steady-state abundance in repair tissue relative to normal articular cartilage, though statistical significance was not achieved. Steady-state mRNA levels for each gene were standardized to the sample with the lowest value. Plots are depicted as box-and-whisker plots demonstrating the median (solid line), upper and lower quartiles, and highest and lowest values (range bars). Mean fold differences are given above the box-and-whisker plots for each gene; a one-tailed general linear model (α = 0.05) statistical analysis was applied with SPSS software.

Transcript profiles for COX2 and S100 protein are compatible with chondrogenic differentiation of stromal cells [61,62], but the consistent deficiency of cartilage matrix protein biomarkers, highlighted by the switch to type I collagen (COL1A1, COL1A2) in place of type II collagen (COL2A1) as the primary fibrillar collagen, documents the failure of true hyaline cartilage restoration.

Figure 5: Microarray transcriptional profiles for representative genes of interest.

A limitation of this study must be noted. The tissues utilized in these experiments included repair tissue and grossly normal articular cartilage from within the same joint. Thus, any gene expression differences between grossly normal cartilage within the lesioned joint and cartilage from an intact articular surface in another joint were not assessed. Such differences have been reported for intact cartilage from human OA joints [63]. However, the equine joints used for the current sample set had minimal OA, and the defects were freshly created in the medial femoral condyles four months prior to tissue sample collection.

Conclusion

Transcriptional profiling data support the hypothesis and indicate that repair tissue cells following a microfracture surgical procedure are still very different from normal articular chondrocytes at the four-month postoperative time point. The cell and matrix organizational phenotypes of repair tissue are substantially different from those of chondrocytes within mature articular cartilage that has developed and adapted to biomechanical strains from birth. Microarray data in the current study corroborate what has been reported previously at the mRNA and protein levels for conventional cartilage biomarkers, but extend our understanding by documenting differences in transcript abundance across multiple ontology categories and genes not previously studied in these tissues. By directing further research toward factors which contribute to the transcriptome dissimilarities of repair tissue and normal articular cartilage phenotypes, we should advance our understanding of the repair process and improve upon therapeutic strategies directed at restoring the structural and biomechanical integrity of the joint surface.
Differential Proteome Analysis of Extracellular Vesicles from Breast Cancer Cell Lines by Chaperone Affinity Enrichment

The complexity of human tissue fluid precludes timely identification of cancer biomarkers by immunoassay or mass spectrometry. An increasingly attractive strategy is to first enrich extracellular vesicles (EVs), which are released from cancer cells at an accelerated rate compared to normal cells. The Vn96 peptide was herein employed to recover a subset of EVs released into the media from cellular models of breast cancer. Vn96 has affinity for the heat shock proteins (HSPs) decorating the surface of EVs. Reflecting their cells of origin, cancer EVs displayed discrete differences from those of normal phenotype. GELFrEE LC/MS identified an extensive proteome from all three sources of EVs, the vast majority having been previously reported in the ExoCarta database. Pathway analysis of the Vn96-affinity proteome unequivocally distinguished EVs from tumorigenic cell lines (SKBR3 and MCF-7) relative to a non-tumorigenic source (MCF-10a), particularly with regard to altered metabolic enzymes, signaling, and chaperone proteins. The protein data sets provide valuable information from material shed by cultured cells. It is probable that a vast number of biomarker identities may be collected from established and primary cell cultures using the approaches described here.

Introduction

According to the American Cancer Society, breast cancer remains the most commonly diagnosed cancer among women in the United States, with over 250,000 new cases predicted in 2017, accounting for 30% of all new cancer diagnoses [1]. Breast cancer further ranks second, behind lung cancer, in terms of cancer deaths among women, with >40,000 in 2017. While five-year breast cancer survival rates approach 100% in the case of localized tumors (stage 0 or I), the prognosis for metastasized (stage IV) breast cancer is poor, with only 22% survival at 5 years [2]. Early detection of breast cancer is thus crucial to improving recurrence-free survival and quality of life. Markers that distinguish invasive breast cancer phenotypes are urgently required. Conventional screening tools, such as self-examination, mammography, diagnostic imaging (ultrasound, MRI), genetic screens (e.g., BRCA1), and tissue biopsies [3,4], collectively present vital tools for breast cancer detection. As with all tests, limits in sensitivity and specificity will miss some cancers (false negatives); in other cases, abnormal findings associated with benign disease (false positives) will direct between 55% and 75% of women into unnecessary and potentially toxic chemotherapy [5-7]. The next generation of diagnostic and prognostic tests is continuously being developed.

Remaining cells in the media were cleared by centrifugation (1000g, 10 min), and microparticulate matter was further removed by centrifugation (17,000g, 5 min), followed by syringe filtration through a 0.22 µm membrane. Filtered solutions were stored at 4°C.

Ultracentrifugation and Sucrose Density Gradient Fractionation

The filtered SKBR3 cellular media was subjected to 45 min of ultracentrifugation (200,000g, 4°C). The pellet was resuspended in 100 µL PBS and overlaid onto a sucrose gradient (0.2-2.5 M), then spun for an additional 1 h (100,000g). A total of 11 fractions were harvested, and the refractive index of each was determined.
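Refractive index readings of gradient fractions are conventionally converted to buoyant density by interpolating against a sucrose calibration table. The sketch below shows the interpolation step only; the calibration pairs are rounded illustrative values, not the table used in this work, and should be replaced with a proper lookup table for real use.

```python
import numpy as np

# Rounded illustrative calibration pairs for sucrose solutions
# (refractive index -> density in g/mL); placeholders only.
ri = np.array([1.3330, 1.3530, 1.3740, 1.3950, 1.4200])
rho = np.array([1.000, 1.060, 1.130, 1.200, 1.280])

def density_from_ri(reading: float) -> float:
    """Linear interpolation of buoyant density from a refractometer reading."""
    return float(np.interp(reading, ri, rho))

# EV-containing fractions in this study banded at 1.15-1.23 g/mL.
for reading in (1.3800, 1.4020):
    print(f"RI {reading:.4f} -> ~{density_from_ri(reading):.2f} g/mL")
```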
Each density fraction was split into two equal portions of 250 µL, with one subjected to Vn96 affinity pull-down while the other served as a control. Vn96 Affinity Capture of EVs Peptide Vn96 (New England Peptide, Gardner, MA, USA) was used to precipitate vesicular material from the sucrose density fractions (Section 2.2), or from 1.9 mL of the microparticle-free breast cancer cell media, by adding 2.5 µL (or 10 µL for the cell culture media) of a 10 µg/µL stock solution prepared in Extraction Buffer I of the subcellular proteome extraction kit (S-PEK, Millipore Sigma) and also containing 0.04% sodium azide. The solution was briefly vortexed and incubated overnight at 4 °C. Complexed material was pelleted by centrifugation (3000g, 5 min, room temperature). The supernatant was removed, and the pelleted material was subjected to two washes with 1.9 mL PBS with 10 µL protease inhibitor, followed by centrifugation as above. The washed pelleted complex was visible as a translucent straw-yellow residue. EV Purity by Immunoblotting and Transmission Electron Microscopy Vn96-captured EVs were resuspended by vortexing in SDS-PAGE sample buffer and heating at 95 °C for 5 min. Twenty-microliter volumes were resolved on BioRad Criterion gels using XT-MES or XT-MOPS electrophoresis running buffers (BioRad, Hercules, CA, USA). Resolved proteins were transferred to either supported nitrocellulose (BioRad) or PVDF (Millipore) using standard procedures. Total protein on blots was visualized with a reversible stain using either the MemCode kit (Pierce) for nitrocellulose or Red Alert Ponceau S (EMD Chemicals, Gibbstown, NJ). Membranes were blocked in PBS containing 5% skimmed milk and 0.1% Tween-20 for 1 h, then incubated overnight at 4 °C in primary antibody solutions (1:1000 dilution, prepared in blocking buffer with the exception of 3% milk powder). Four 10 min washes followed (blocking buffer without milk powder), then the blot was incubated for 30 min at room temperature in secondary antibody (1:2000 dilution, HRP-labelled antibody against the Ig of the primary antibody). All antibodies were obtained from Santa Cruz Biotechnology. The HRP signal was produced using SuperSignal West Dura substrate (Pierce). The chemiluminescent image was captured using the ChemiGenius system (Syngene, Cambridge, UK). For transmission electron microscopy, Vn96-captured EVs were prepared by fixation in aldehydes and osmium tetroxide [21]. The fixed pellet was embedded in epoxy resin and prepared as 50 nm sections. The sections were processed and examined by standard transmission electron microscopy. Proteome Analysis The Vn96 pellet was resuspended in 250 µL of SDS-PAGE sample buffer, supplemented to a final concentration of 4 M urea and 25 mM TCEP reducing agent (Pierce). Following heating (95 °C, 10 min), 150 µL suspended protein solutions were respectively loaded onto each of three GELFrEE cartridges (8%, 10%, and 12% Tris Acetate, Expedeon, San Diego, CA, USA) and resolved according to the manufacturer's operating guidelines. With each run, 12 fractions were collected as 150 µL aliquots. Together with non-fractionated Vn96 proteome pellets, a 7 µL portion of each collected GELFrEE fraction was subjected to SDS-PAGE and silver staining for visualization of mass-based separation and recovery per fraction. Bottom-up proteomic analysis of the GELFrEE-fractionated proteome first proceeded via SDS removal through chloroform-methanol-water precipitation, as described previously [25].
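Returning briefly to the Vn96 capture step above, the stated stock concentration and volumes imply the following final peptide concentrations; this is simple arithmetic from the quantities given, shown only as a sanity check.

```python
# Final Vn96 peptide concentrations implied by the stated protocol:
# 10 ug/uL stock; 2.5 uL into 250 uL fractions, or 10 uL into 1.9 mL media.
stock_ug_per_uL = 10.0

for added_uL, volume_uL, label in [
    (2.5, 250.0 + 2.5, "sucrose density fraction"),
    (10.0, 1900.0 + 10.0, "cell culture media"),
]:
    total_ug = added_uL * stock_ug_per_uL
    conc_ug_per_mL = total_ug / (volume_uL / 1000.0)
    print(f"{label}: {total_ug:.0f} ug Vn96 in ~{volume_uL / 1000:.2f} mL "
          f"-> ~{conc_ug_per_mL:.0f} ug/mL")
```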
The resulting protein pellet was resolubilized in 20 µL of 8 M urea, then diluted to a final volume of 100 µL in 50 mM Tris buffer (pH 8). Proteins were reduced following addition of 5 µL of 200 mM DTT (30 min, 55 °C), then alkylated by adding 10 µL of 200 mM iodoacetamide (30 min, room temperature, in the dark). Proteins were digested overnight at 37 °C following addition of 1 µg trypsin per fraction, and the reaction was terminated with 10 µL of 10% TFA. Mass spectrometry was performed on an LTQ classic linear ion trap (ThermoFisher, San Jose, CA, USA) coupled to an Agilent 1200 HPLC system. Digested protein fractions were desalted by offline reversed-phase HPLC with UV detection [26], loading one-third of the total volume of purified protein onto a 75 µm × 30 cm self-packed C12 column (3 µm Jupiter beads, Phenomenex, Torrance, CA, USA). Peptides were separated using a 1 h linear solvent gradient from 5% acetonitrile/water/0.1% formic acid to 35% acetonitrile/water/0.1% formic acid at a flow rate of 0.25 µL/min. The LTQ operated in data-dependent mode (MS followed by zoom scan and tandem MS of the top three ions), with 30 s dynamic exclusion over a mass range covering the full isotopic distribution of the peptide. Peptide identification used the Thermo Proteome Discoverer (v. 1.3, ThermoFisher, Mississauga, Canada) software package and the SEQUEST search algorithm. MS spectra were searched against the human UniProt database at a mass tolerance of 1 Da, allowing static cysteine carbamidomethylation and dynamic oxidation of methionine, and up to two missed cleavages per peptide. A peptide false discovery rate of 1% by decoy database searching and a minimum of one unique peptide per protein were employed for data filtering. Relative protein abundance was obtained via spectral counting [27] with normalized peptide spectral matches (PSMs), obtained by scaling by the ratio of the total number of PSMs observed in the SKBR3 cell line (the highest PSM total) to the total PSMs from the given cell line (see the worked sketch below). Functional annotation was performed by Ingenuity Pathway Analysis (QIAGEN Bioinformatics, Redwood City, CA, USA). Results and Discussion We report a comparative proteome investigation of an in vitro model of breast cancer, examining the extracellular media from SKBR3 (invasive cancerous cells), MCF-7 (non-invasive), and MCF-10a (immortal but non-cancerous cells). While cells adapted to grow in plastic flasks are regarded as very different from those obtained in vivo, material secreted by cancer cells into the external environment in vitro is likely to produce a similar proteomic profile, reflecting the original growth from which it was derived [28]. The Vn96 peptide was employed to selectively capture EV material released by cultured cell lines into their growth media. Vn96 targets heat shock proteins (HSPs), which are overexpressed on the surface of aggressive cancer cells and, by extension, their derivative vesicles [29]. The Vn96 peptide has been shown to capture exosome-like vesicles containing proteins comparable to EV preparations by traditional ultracentrifugation when analyzed by Western blot [21]. To demonstrate the specificity of Vn96 in pelleting EV material, exosomes from SKBR3 were harvested through conventional ultracentrifugation with separation into characteristic flotation zones by sucrose density fractionation. As shown in Figure 1, the exosomal marker proteins TSG101 and Alix were isolated in the fractions from 1.15 to 1.23 g/mL.
Moreover, while these marker proteins remain suspended in the supernatant (SN) following low-speed centrifugation, the addition of Vn96 resulted in recovery of exosomal markers in the pellet (P) of the low-speed spin. These observations demonstrate the capacity of Vn96 to concentrate proteins associated with vesicular material. The Vn96 peptide was next employed to directly recover EV materials from filtered bioreactor cell culture media. When observed by TEM (Figure 2), material pulled down by Vn96 generally consisted of bilayer orb structures between 30 and 50 nm in diameter, with electron-dense centers. Such features were also evident in the MCF-10a cell line, indicating that HSP-decorated vesicles are also released by these cells. In recent years, the term "small EVs" has been used as an alternative to exosomes [30]. Vn96 may capture a subset of small EVs, as defined by surface accessorization of HSPs. Electrophoretic profiling of Vn96 pull-downs yielded robust total protein profiles (e.g., from SKBR3, Figure 3A, lane 1). Replacement of Vn96 with a random peptide (SW) yielded faint profiles likely to be aggregates of albumin and immunoglobulin from culture medium (Figure 3A, lane 2). The heat shock proteins (HSP70, HSP90) targeted by Vn96 were depleted from the medium and exclusively recovered in the low-speed centrifugation pellet. Employing a random peptide (SW), or a scrambled amino acid sequence of Vn96, did not pellet these proteins (Figure 3B). Western blots of EV material also identified PKM2, receptor kinase HER2, membrane protein TRPV6, and fatty acid synthase, none of which were recovered using the null peptide (Figure 3B). The stability of EVs isolated by Vn96 from SKBR3 is also evident in Figure 3C, which employs various commercial detergent washes (components of Millipore's subcellular proteome extraction kit, S-PEK). As seen in lanes 1 and 2 of Figure 3C, neither cytosolic Extraction Buffer I (EB1), which contains the mild detergent digitonin, nor membrane and organelle Extraction Buffer II (EB2), containing Triton X-100, would solubilize the EV marker proteins from the Vn96 pellet. Similar results were obtained from the other cell lines. Though EV marker proteins are retained following EB1 and EB2 washes, weakly associated proteins are liberated from the EV pellet by these buffers (see Supplementary Figure S1). These findings permit incorporation of stringent washes to further purify the EV fraction recovered in the Vn96 pellet.
Immunoblot analysis of material recovered from the non-transformed MCF-10a cell culture media, as well as from MCF-7, with the breast cancer exosomal marker CD24 [31] illustrates that some, but not all, CD24 is released into the supernatant following incubation of the pellet with EB2 (results not shown). Thus, we subsequently chose a PBS washing protocol, also under reducing conditions (25 mM TCEP), to achieve a balance of yield and purity. To fully solubilize EV proteins (lane 3 of Figure 3C), the pellet is boiled in SDS gel-loading buffer supplemented with 4 M urea (USB). As a consequence, MS analysis demands an SDS-compatible proteomics workflow. Mass spectrometry detection followed GELFrEE fractionation of the solubilized EV proteomes. GELFrEE enables recovery of proteins in an SDS-containing buffer, with separation according to molecular weight. SDS depletion via organic solvent precipitation permits LC/MS of trypsin-digested proteins. As demonstrated in Figure 4A, a higher abundance of EV proteins was recovered from the SKBR3 and MCF-7 cell lines relative to the non-cancerous MCF-10a, in support of the theory that aggressive cancer cells overexpress EV materials. The decreased abundance of proteins recovered by Vn96 from the MCF-10a cell media may also reflect a lower number of Vn96 binding opportunities, due to a lack of surface-expressed HSPs/chaperones. Nonetheless, it is clear that a greater concentration of proteins is recovered from the tumorigenic cell lines. GELFrEE resolved the proteins over a mass range extending to ~100 kDa (Figure 4B), isolating proteins in discrete fractions according to molecular weight (Figure 4C).
A detailed listing of the identified proteins and peptides from each of the three cell lines is provided as supplementary files (Tables S1-S3). As summarized in the Venn diagrams of Figure 5, some ~300 to 400 unique proteins were identified from each cell type (minimum 2 peptides per protein). The largest number of identified proteins (392) was seen from the most aggressive cancer cell line (SKBR3), and might be a reflection of the increased prominence of HSPs expressed on the EV surface. Considering all three samples and technical replicates (three GELFrEE runs per sample type), MS collectively identified 647 unique proteins across the various cell lines. Despite marked differences in protein concentration across the three cell lines, similar numbers of proteins were identified through MS. This may reflect the limitations of an MS platform that favors detection of the most abundant components. While greater differences are evident at the peptide level (Figure 5), comparative analysis of the discrete proteomes is best presented through changes in protein abundance. Here, spectral counting is employed as a means of assessing relative protein abundance across the three cell types. In total, we obtained 7400 peptide spectral matches (PSMs) from SKBR3, 4584 PSMs from MCF-7, and 2381 PSMs from MCF-10a, indicative of the greater concentration of EV proteins recovered in the most invasive phenotype. Supplementary Table S4 details a full comparative assessment of the spectral matches observed per protein across the three cell lines. Supplementary Table S5 compares the identified proteins to the ExoCarta protein database [32], indicating that the majority of proteins detected (509 of 575 gene products) were previously found to be associated with exosomal material.
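To make the spectral-count normalization described in the Methods concrete, the sketch below applies the stated ratio rule to the PSM totals just quoted; the per-protein raw counts are hypothetical placeholders, and multiplying each protein's raw count by the scale factor is an assumed reading of the normalization, not a procedure quoted from the paper.

```python
# Spectral-count normalization as described in the Methods: scale each
# line's per-protein PSM counts by (total PSMs in SKBR3) / (total PSMs in
# that line), so abundances are comparable across lines. Totals are from
# the Results; the per-protein raw counts below are hypothetical.
totals = {"SKBR3": 7400, "MCF-7": 4584, "MCF-10a": 2381}
reference = "SKBR3"  # cell line with the highest PSM total

factors = {line: totals[reference] / n for line, n in totals.items()}

raw_counts = {  # hypothetical raw PSMs for one protein of interest
    "SKBR3": 120, "MCF-7": 35, "MCF-10a": 4,
}

for line, psm in raw_counts.items():
    print(f"{line:8s} factor = {factors[line]:.2f}, "
          f"normalized PSMs = {psm * factors[line]:.1f}")
```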
Reflecting the origin of EVs as they are produced by the cell, and considering the cellular distribution of proteins identified from the EV fractions (Supplementary Figure S2), it is not surprising that several of the identified proteins were associated with the membrane (19%) or the cell surface (12%). The large number of extracellular proteins is reflective of the heat shock proteins along with other proteins normally released by cells. Tables 1-3 summarize the top 25 proteins (excluding probable contaminants), as categorized according to primary function (metabolism; chaperone and protein binding; skeletal and assembly). Multiple cytoskeletal proteins (keratins, actin, myosin, etc.) dominated the list of identified proteins (see Supplementary Tables S1-S3). Keratins are frequently encountered as unavoidable contaminant proteins during sample collection and processing. As such, suspected keratin contaminants (keratin types 1, 2, 5, 9, 14, 16) were among the identified proteins. These proteins exhibit minimal statistical difference in peptide spectral counts among the three cell types. However, such contaminants are easily distinguishable from bona fide cancer markers, such as cytokeratins 8, 18, and 19. These molecules are found at the surface of cancer cells, and are thus either incorporated as bystanders or as part of the EV assembly and embarkation process. CK8 and CK18 are well documented as secreted cancer biomarkers, and may be detected in the serum of patients with breast cancer receiving chemotherapy [33]. These markers were shown to resist extraction by wash buffers EB1 and EB2 (Supplementary Figure S1), and are thus likely to be strongly associated with or embedded within the extracellular vesicles.
CK19 has been suggested to be involved in driving the more aggressive tumor proliferation, invasion, and metastasis associated with HER-2/neu-positive tumors [34]. By their PSMs, each of these marker proteins was highly elevated in the cancer cell lines relative to MCF-10a (Tables 1-3). Thus, these proteins are included in the lists of relevant EV components. Considering the full list of proteins collectively identified in the EV material, a broad range of biological function is conveyed (Supplementary Figure S2). The largest group of identified proteins (18%) was associated with metabolic activity. The chaperone/binding proteins were also prominent among the identified groups. A preliminary comparison of the molecular pathways associated with the identified proteins is seen through Ingenuity Pathway Analysis (IPA), for which the top 12 pathways of each cell type are depicted in Supplementary Figure S3. The most distinguishing features were the glycolysis/gluconeogenesis and pentose phosphate pathways, ranked at the top of the IPA pathways for SKBR3 (second and fourth for MCF-7). By sharp contrast, metabolic enzymes constitutive of these pathways were essentially absent from MCF-10a. Cancers are dominated by metabolic pathways that vary from normative physiology, with emphasis on accelerated uptake of glucose and glutamine, aerobic glycolysis, decreased mitochondrial activity, and enhanced lipogenesis. Table 4 provides a quantitative comparison of proteins involved in metabolic pathways. Perhaps the most well-known characteristic, the Warburg effect [35], refers to the avid consumption of glucose for direction into a glycolytic pathway with the accumulation of lactate, rather than incorporation of pyruvate into the tricarboxylic acid (TCA) cycle for oxidative phosphorylation. Nine of the ten canonical enzymes of glycolysis were represented in the Vn96-captured EVs of both SKBR3 and MCF-7, though only five were observed in MCF-10a. Some elements were exclusively found in the Vn96-captured EVs of the invasive and non-invasive cancer lines, such as isoforms of pyruvate kinase and lactate dehydrogenase. Lactate dehydrogenase is perhaps the most widely recognized secreted enzyme of glycolysis contributing to the acidification paradigm of the Warburg effect [36]. Metabolism is thus closely linked to cancer progression, because the ability of a cell to proliferate is dependent upon the availability of nutrients to build new cells. Glycolytic enzymes also protect cancer cells from stress by inhibiting apoptosis, and correlate well with resistance to radio- and chemotherapy [37,38]. Early diversions are promoted toward the pentose phosphate pathway as the means to generate nucleotides and amino acids. Of relevance was the detection of phosphogluconate dehydrogenase in higher abundance for the cancerous cell lines. Beyond glycolysis, the provision of raw materials into peripheral biosynthetic pathways is crucial to cancer survival and colonization of areas distal to the primary tumor [39]. For example, glycolysis channels raw material into lipid biosynthesis for membrane expansion and vesicle production [40,41]. Lipogenesis, typical of aggressive cancers, would thus benefit from the donation of these precursors. As seen in Table 4, a high proportion of peptides originate from fatty acid synthase (FASN) in the invasive SKBR3.
Similarly, tumor protein D52 was also exclusively observed in SKBR3, and has been implicated in the increased lipid-storage capacity typical of invasive cancer cells [42]. The significance of FASN abundance in an invasive phenotype is likely associated with the promotion of membrane biogenesis. FASN is frequently associated with invasive cancer, and has been proposed as a therapeutic target [43]. It is normally expressed at low levels when dietary sources are sufficient. However, FASN expression and activity in cancer cells can be very high, and the enzyme becomes associated with lipid rafts following cell signaling events [44]. FASN has been found extracellularly in the serum of breast cancer patients [45], and its levels are predictive of colorectal cancer stage [46]. FASN has previously been found in exosomes from multiple cancer cells [32]. Accordingly, it is possible that the proportional representation of FASN in exosomes is a prospective biomarker of an invasive phenotype [47]. As ligands for the Vn96 affinity peptide, the individual canonical heat shock proteins are listed in Table 5. HSP60 was the most abundantly represented heat shock protein among the three cell lines, and was most highly expressed in SKBR3. HSP60 is a known surface-displayed molecule, secreted by cancer cells, and an important marker of cancer-derived exosomes [48]. Isoforms of HSP90 were more extensively represented in SKBR3, though they were also observed in the non-invasive cell line MCF-7. HSP90-alpha constitutes the extracellular isoform, and is particularly characteristic of invasive cancer [49]. Although the location of the HSP90 isoforms was not determined, this family of proteins is frequently found on the cell surface of cancer cells, and by extension, on derivative vesicles [50]. HSP90 is essential as a stabilizing chaperone of a broad range of aberrantly overactive receptors and kinases that are imperative to cancer. Transient HSP multi-protein complexes, termed the "epichaperome" and found with high prevalence in numerous cancers, have been shown to play important roles in facilitating cell regulation and survival [51]. Thus, while the smaller chaperones HSP10 and HSP27 were observed in higher abundance in the cancerous cell lines, it is unknown if these are independently capable of engaging Vn96. However, HSP10 and HSP27 have been implicated in multifunctional chaperone networks in invasive breast cancer [52]. HSP chaperone complexes not only present biomarkers for cancer diagnostics, but have also been identified as targets for drug therapy [51]. The functional elements of a metastatic cascade reside in secreted proteins invested in HSP-decorated EVs. Cumulatively, these proteins enable cells to detach, invade tissue, and access circulation, while buffering against toxicity. Proteome profiling of the EV materials from malignant vs. non-invasive phenotypes reveals multiple features conducive to malignancy, some of which have only been appreciated in the last year. A short selection is provided in Table 6. The role of these proteins in cancer progression is described in the table with reference to the literature. For example, protein disulfide isomerases are multifunctional chaperones instrumental in the breakage and rearrangement of disulfide bonds of extracellular matrix proteins, being required for detachment, intravasation, and extravasation at secondary sites, particularly with regard to cellular matrix remodeling.
Other proteins may protect or preserve aspects crucial to metabolism (e.g., selenium-binding protein 1), while others may promote the expression of specific proteins or enable functions that maintain the malignant phenotype (14-3-3 zeta). The Vn96 protocol for isolating EV materials uncovers multiple distinguishing protein features, which collectively constitute candidate biomarkers of breast cancer. Previous investigations of breast cancer cell lines [66], or of their secreted exosomes [67], have highlighted proteomic profiles indicative of cancer cell proliferation and mobility, which was also apparent from our study. Our observations also corroborate a proteomic study that identified proteins involved in metabolic and detoxification pathways as highly expressed in HER-2/neu-positive breast cancer [68]. Conclusions Vesicles were recovered from extracellular media by a peptide with affinity for heat shock proteins, which are abundant on the cancer cell surface and on derivative vesicles or exosomes. The extracellular vesicles (EVs) were subjected to GELFrEE fractionation and proteome analysis via bottom-up liquid chromatography-mass spectrometry (LC/MS). Enzymes typical of altered metabolic pathways were abundantly represented in EVs from cancer cells. Vesicle-associated proteins from the most invasive phenotype, SKBR3, included most of the canonical enzymes of glycolysis and gluconeogenesis. In contrast, the same proteins were of limited representation or absent in EVs from non-transformed MCF-10a, while MCF-7 yielded an intermediate representation. These observations indicate that the collection of extracellular vesicles from different cancer cell phenotypes may provide insight into the importance of individual enzymes in cancer progression, and further, that the vesicles shed from cancer cells serve as surrogates for profiling the abundance of altered metabolic enzymes.
7,637
2017-10-08T00:00:00.000
[ "Biology" ]
Measurements of the Quiet-Sun Level Brightness Temperature at 8 mm Defining the solar brightness temperature accurately at millimeter wavelengths has always been challenging. One of the main reasons has been the lack of a proper calibration source. The New Moon was used earlier as a calibration source. We carried out a new extensive set of observations at 8 mm using the New Moon for calibration. The solar and Moon observations were made using the 14-meter radio telescope operated by the Aalto University Metsähovi Radio Observatory in Finland. In this article, we present our method for defining the brightness temperature of the quiet-Sun level (QSL). Based on these observations, we found 8100 K ± 300 K to be the mean value for the QSL temperature. This value is between the values that were reported in earlier studies. Introduction The Sun is a powerful emitting source at radio wavelengths, and partly because of this, defining its brightness temperature is challenging. In several earlier studies, the Moon has been used as a calibrator source (Wrixon, 1974; Zirin, Baumert, and Hurford, 1991; Iwai et al., 2017). In this study we use the New Moon as a calibrator source at 8 mm, and we produce several datasets from which we obtain a new reference value for the solar brightness temperature. All other solar features (e.g. radio brightenings) can be scaled to the quiet-Sun level (QSL) temperature. In addition, a more precise QSL temperature value could help establish from which atmospheric layer the 8 mm emission originates. Measuring the solar brightness temperature accurately requires favorable conditions to eliminate the effects contributed by the changing sky, and the measurements should be made during solar minimum, when no major activity complicates the data analysis. In this article we present our instrumentation, observations, analysis method, and results for calibrating the QSL temperature. The Brightness Temperature of the Moon at 8 mm The mean brightness temperature of the Moon (T moon) at millimeter wavelengths consists of two different components, and it depends on the observing frequency (ν) (Hafez et al., 2014; Krotikov and Pelyushenko, 1987). The first component T 0 (ν) is a constant (214 K at 8 mm), and the second component T 1 (38 K at 8 mm) varies with the lunar phase. Thus, the mean brightness temperature of the Moon can be defined as T moon (ν, t) = T 0 (ν) + T 1 (ν) cos(ωt − ξ(ν)), (1) where ω is the angular frequency of the lunar cycle (12.26° per day), t is the time elapsed since New Moon, and ξ(ν) is the phase relative to the time of New Moon (32° at 8 mm). During the New Moon, ωt ≈ 0. Thus, the mean brightness temperature of the Moon at 8 mm is 246.2 K during the New Moon (Krotikov and Pelyushenko, 1987; Hafez et al., 2014). Some other values for the New Moon brightness temperature have been reported as well: 216 K at 8.3 mm (the disk center temperature) (Kuseski and Swanson, 1976), and 239.5 ± 10.1 K (mean temperature) and 219 K (the disk center temperature) at 10 mm (Linsky, 1973). In this article we use the value obtained from Equation 1. Instrumentation Observations were carried out at the Aalto University Metsähovi Radio Observatory (MRO) in Finland (Helsinki region; GPS coordinates: N 60:13.04, E 24:23.35). The solar and lunar radio maps were observed at 8 mm in four sessions close to the occurrence of the New Moon with the MRO 14-meter radio telescope (RT-14), shown in Figure 1. RT-14 is a radome-enclosed Cassegrain-type antenna with a diameter of 13.7 m. The usable wavelength range of the telescope is 13.0 cm to 2.0 mm.
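As a numerical cross-check of Equation 1 as reconstructed above, the sketch below reproduces the 246.2 K New Moon value from the quoted coefficients; the function and constant names are illustrative, not from the paper.

```python
import math

# Mean lunar brightness temperature at 8 mm, following Equation 1:
# T_moon(t) = T0 + T1 * cos(omega * t - xi), with the coefficients
# quoted in the text (Krotikov and Pelyushenko, 1987; Hafez et al., 2014).
T0 = 214.0                  # K, constant component at 8 mm
T1 = 38.0                   # K, phase-varying amplitude at 8 mm
XI_DEG = 32.0               # degrees, phase lag relative to New Moon
OMEGA_DEG_PER_DAY = 12.26   # degrees/day, lunar-cycle angular frequency

def t_moon(days_since_new_moon: float) -> float:
    phase_deg = OMEGA_DEG_PER_DAY * days_since_new_moon - XI_DEG
    return T0 + T1 * math.cos(math.radians(phase_deg))

print(f"New Moon (t = 0 d):       {t_moon(0.0):6.1f} K")            # ~246.2 K
print(f"Warmest (t = 32/12.26 d): {t_moon(32.0 / 12.26):6.1f} K")   # 252.0 K
```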
During solar observing sessions, the antenna is used for solar mapping, partial mapping, and tracking of any selected areas on the solar disk. The beam size of the telescope is 2.4 arcmin at 8 mm, marked as white circles in Figures 2 and 3. The receiver is a Dicke-type radiometer with Peltier-element temperature stabilization and a noise temperature of approximately 280 K. The temporal resolution during observations is 0.1 s or lower. The data are recorded as intensities in arbitrary units. Solar radio maps can be observed in both linear and logarithmic scales and measured in both right ascension and declination directions. The logarithmic-scale data are used only for strong solar radio brightenings. The time between two consecutive solar radio maps is around 172 seconds at the fastest. Before 2016, the time between maps was approximately 8 minutes, but after our observing system and position-controller software were upgraded, it became possible to make more extensive Sun-Moon calibration measurements, with approximately three times as many samples as before. Because the measurements are always scaled relative to the QSL, our observations are comparable over the years. The QSL is the median temperature of the data samples of the solar disk in the radio maps (Kallunki et al., 2012; Kallunki and Tornikoski, 2017). Observations The first of four observing sessions was carried out in March at the time of the New Moon; the other sessions, in May, were offset by one to three days from the New Moon. During all these sessions, the observing conditions were nearly optimal, and the activity of the Sun at 8 mm was low. During the March session (on 17 March 2018), solar and lunar radio maps were obtained in turns from 06:30 UT to 15:30 UT. As an example, one solar and one lunar map from this session are presented in Figures 2 and 3. It took 172 seconds to observe one map. We excluded observations below 15° of elevation from our data analysis because of possibly pronounced atmospheric effects. In general, the atmospheric conditions were very good during the observations, with a clear sky and no clouds at all. Solar activity was very low during the session, with the maximum brightness temperature relative to the QSL not exceeding 103%. Higher solar activity could distort the determination of the QSL temperature value. The Moon culminated at 24.5° of elevation. Therefore, only observations between 15°-24.5° of elevation were included in the analysis. In total, we had about 40 solar and lunar radio maps in our final calculations. Each solar and lunar map consists of 29 sweeps (Figure 4) over the solar or lunar disk. For each map, the minimum value is defined as the background temperature, T bg, which is the summed contribution of the sky temperature, T sky, and the noise from the receiver system, T system. The intensity level of the quiet Sun, QSL (and a similar value from the lunar maps), consists in reality of the true QSL temperature and the contribution from T bg. Throughout this study, the solar and lunar maps were not observed exactly at the same time, nor exactly in the same direction.
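A minimal sketch of the map statistics described above (background from the map minimum, disk selection at 50% of the intensity range, median for the Sun, mean for the Moon), applied to a synthetic map; the array shape, names, and values are illustrative only, not the observatory's pipeline.

```python
import numpy as np

def map_statistics(intensity_map: np.ndarray, is_moon: bool = False):
    """Background and disk level of a solar or lunar radio map.

    The background T_bg is the map minimum; the disk is taken as all
    pixels above 50% of the full intensity range; the disk level is the
    median (Sun) or the mean (Moon, which has a larger gradient)."""
    lo, hi = intensity_map.min(), intensity_map.max()
    t_bg = lo
    disk = intensity_map[intensity_map > lo + 0.5 * (hi - lo)]
    t_obs = np.mean(disk) if is_moon else np.median(disk)
    return t_obs, t_bg

# Illustrative synthetic map: flat disk on a cold background plus noise
rng = np.random.default_rng(1)
yy, xx = np.mgrid[-64:64, -64:64]
sun_map = np.where(xx**2 + yy**2 < 40**2, 100.0, 5.0)
sun_map += rng.normal(0.0, 0.5, sun_map.shape)

t_qsl_obs, t_bg_sun = map_statistics(sun_map)
print(f"T_qsl,obs = {t_qsl_obs:.1f}, T_bg,sun = {t_bg_sun:.1f}, "
      f"T_qsl = {t_qsl_obs - t_bg_sun:.1f} (arbitrary units)")
```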
Our method was chosen to minimize the effects of a possibly changing sky contribution, without the need for more elaborate atmospheric models. A minor contribution may also remain in the solar maps from solar radiation in the minimum data point, but the effect on the final brightness temperature calculation is approximately 15 K at most. The observed QSL intensity, T qsl,obs, is computed as the median intensity on the solar disk, where the solar disk is assumed to include all the data points that have an intensity value higher than 50% of the whole intensity range within which the data points fall. The similar lunar intensity, T qml,obs, is the mean temperature over the lunar disk. The Moon has a greater temperature gradient, and the mean value needs to be used (Krotikov and Pelyushenko, 1987; Hafez et al., 2014). This is also evident from Figure 3. In Figure 4 the upper dashed line indicates the QSL intensity and the lower dashed line the background contribution, T bg,sun. The real QSL that we wish to determine from these observations is T qsl = T qsl,obs − T bg,sun. We assume that the sky temperature, T sky, is the same at the same elevations during the solar and lunar observations under these near-optimal conditions, and we eliminate the effect of the changing elevation during each map by applying a cosine correction, cos(el), to the sweeps. All the intensity values discussed above are in arbitrary units, and the brightness temperatures were calculated from them in Kelvin. In Figures 5, 6, 7, and 8 we present a summary of all observations: the background temperatures (T bg,sun, T bg,moon) during the solar and lunar observations, respectively, the QSL and the mean Moon intensities including the background contribution (T qsl,obs, T qml,obs), and after subtracting the background contribution (T qsl, T qml). The arbitrary intensity units are converted into real temperature values in Kelvin using Equation 2, i.e. by scaling the background-subtracted solar-to-lunar intensity ratio by the known brightness temperature of the Moon, T sun = (T qsl / T qml) T moon. Because we can observe the sky and the source almost simultaneously and in many successively repeated observations, we did not have to use rather complicated atmospheric emission and attenuation models. From our data we can see that the sky value does not vary by more than 5% over the observation period, making its effect on the final results negligible. Additionally, from our earlier solar observations we know that the effect of sidelobe distortion on map measurements is negligibly small. The observations were repeated on three other days close to the occurrence of the New Moon in May 2018: 12 May (−3 days from New Moon), 14 May (−1 day from New Moon), and 16 May (+1 day from New Moon). All the observations and data processing were made using similar methods and principles as described above. A summary of all observations is presented in Table 1. On 17 March 2018, the observations were conducted in both rising and descending elevation. The symbols shown in Figure 5 for the solar observations that almost overlap in elevation were therefore taken with a relatively large time difference. Results Owing to observational constraints, we did not have an identical number of solar and lunar maps in each observing session. We therefore do not calculate the results for T sun for each pair of solar and lunar maps, but rather for the whole set of observations in each session. We wish to use the complete dataset taken at reasonably high elevations to obtain meaningful statistics and to eliminate the effect of the sky contribution.
We calculate the mean, minimum, and maximum values for each parameter, as shown in Figures 5-8. Using Equation 2, we derive the values shown in Table 1 for each observing day. The mean values over all the observing sessions, i.e. of each column in Table 1, are the mean QSL brightness temperature, T qsl,avg = 8085 K, the minimum QSL brightness temperature, T qsl,min = 7886 K, and the maximum QSL brightness temperature, T qsl,max = 8402 K. The minimum and maximum values are used to estimate the measurement error. From our measurements we conclude that the solar brightness temperature at 8 mm is 8100 K ± 300 K.
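The final step, averaging the per-session values from Table 1 and estimating the error from the spread, can be sketched as follows; the four session temperatures are placeholders chosen only so that their mean, minimum, and maximum match the quoted 8085 K, 7886 K, and 8402 K, and the half-spread error rule is an assumption consistent with the quoted ±300 K, not the authors' stated formula.

```python
# Combine per-session QSL brightness temperatures (Table 1) into the
# final value. The session values below are placeholders chosen so the
# mean/min/max match the quoted 8085 K, 7886 K, and 8402 K; the
# half-spread error estimate is an assumption.
session_t_qsl = [8402.0, 7886.0, 8010.0, 8042.0]   # K, illustrative

t_avg = sum(session_t_qsl) / len(session_t_qsl)    # 8085.0 K
t_min, t_max = min(session_t_qsl), max(session_t_qsl)
uncertainty = (t_max - t_min) / 2.0                # ~258 K

print(f"T_qsl = {round(t_avg, -2):.0f} K +/- {round(uncertainty, -2):.0f} K")
# -> T_qsl = 8100 K +/- 300 K
```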
2,616.6
2018-11-01T00:00:00.000
[ "Physics" ]
New Methanol Maser Transitions and Maser Variability Identified from an Accretion Burst Source G358.93-0.03 The high-mass young stellar object G358.93-0.03 underwent an accretion burst during the period from 2019 January to June. Given its extraordinary conditions, a number of new maser transitions may have been naturally excited during the burst stage. Searching for new maser lines and monitoring maser variability associated with the accretion burst event are important for understanding the complex conditions of massive star formation toward G358.93-0.03. In this work, using the Shanghai 65 m Tianma Radio Telescope, we continuously monitored multiple maser transitions (including methanol and water) toward G358.93-0.03 during the burst, in the period from 2019 March 14 to May 20. There were 23 CH3OH maser transitions and one H2O maser transition detected in the monitoring. Nearly all the detected maser transitions toward this source show dramatic variations in their intensities within a short period of ∼2 months. Eight new methanol transitions from G358.93-0.03 were identified to be masing in our observations based on their spectral profile, line width, intensity, and the rotation diagram. During the monitoring, the gas temperature of the clouds in the case of saturated masers showed a significant decline, indicating that the maser clouds were going through a cooling process, possibly associated with the propagation of a heat wave induced by the accretion burst. Some of the maser transitions were even detected with second flares in 2019 April, which may be associated with the process of the heat-wave propagation induced by the same accretion burst acting on different maser positions. Introduction Episodic accretion comprises prolonged quiescent periods punctuated by transient, intense bursts of accretion. Generally, the quiet stage has a low accretion rate and luminosity, along with a cooling of the disk. However, most of the protostellar mass is accumulated in the burst stage, with luminosity outbursts and the heating and stabilization of the surrounding disk (Vorobyov & Basu 2006; Stamatellos et al. 2011). There is observational and theoretical evidence that accretion of material is often episodic in the early evolution of low-mass star formation (Herbig 1977; Peneva et al. 2010). Further evidence for episodic accretion comes from the periodically spaced knots seen in bipolar jets (Reipurth 1989). Apart from that, the luminosity problem provides indirect observational support for episodic accretion (Kenyon et al. 1990; Evans et al. 2009), as the luminosity outbursts are intermittent and most protostars are observed between bursts. Recent studies have shown that massive star formation can also experience phenomena similar to the episodic accretion and accretion bursts that occur in low-mass star formation. The discoveries of accretion bursts in three massive young stellar objects (MYSOs), NGC6334I−MM1 (Hunter et al. 2017), S255IR−NIRS3 (Caratti o Garatti et al. 2017), and G358.93−0.03 (Chen et al. 2020a), provide vital evidence for episodic accretion in massive star formation (Cesaroni et al. 2018; Brogan et al. 2018). The accretion burst source studied in this paper (G358.93−0.03) has a central protostar mass of ∼12 M⊙ in its MM1 region (Chen et al. 2020a), a bolometric luminosity in the range of 5700−22,000 L⊙, and an accretion rate of 3.2 × 10−3 M⊙ yr−1 (Stecklum et al. 2021).
Observational studies of the Orion molecular clouds show that episodic accretion accounts for >25% of a star's mass (Fischer et al. 2019). Theoretical considerations likewise show that massive stars can gain 40%-60% of their mass during accretion bursts (Meyer et al. 2021), suggesting that these bursts are an essential rather than serendipitous process for massive star formation (Cesaroni et al. 2018; Brogan et al. 2018; Chibueze et al. 2021). If we are to gain a clear understanding of whether episodic accretion is a common phenomenon in the formation of all young stars, the study of episodic accretion bursts in massive star-forming regions (MSFRs) is crucial. So far, due to the lack of sufficient observational evidence, this process of high-mass star formation is still poorly understood. It is relatively difficult to observe bursts of accretion in massive protostars due to their rapid evolution (much shorter timescales than for low-mass stars), and the fact that accretion bursts tend to be relatively short compared to the more common quiescent periods (Stamatellos et al. 2011). Moreover, these MYSOs are usually buried in very dense clouds of dust and gas. Therefore, accretion bursts in MSFRs are very difficult to capture from a temporal and environmental point of view. Fortunately, masers are powerful tracers of several astronomical events, as they are commonly believed to be extremely sensitive to changes in the physical conditions of their natal clouds, especially those caused by the enhancement of radiation fields and collisions of matter. The increased local radiation field due to the stellar luminosity burst induced by an accretion burst will result in an increase of incident photons, thus enhancing the maser emission. Class II CH3OH (methanol) masers are pumped by infrared radiation and are thought to be closely associated with massive protostellar luminosity outbursts. Additionally, class II methanol masers are well established as tracers of the early stage of massive star formation (Minier et al. 2001; Ellingsen 2006) and are exclusively observed near MYSOs (Minier et al. 2002; Xu et al. 2008; Paulson & Pandian 2020). It is worth mentioning that a direct link between 6.7 GHz class II CH3OH maser flaring and an accretion burst has recently been established for the three known MYSOs (NGC6334I-MM1, S255IR-NIRS3, and G358.93-0.03) with accretion burst events (Moscadelli et al. 2017; Hunter et al. 2018; Sugiyama et al. 2019). The target source of this paper, G358.93-0.03 (hereafter G358), was identified as undergoing an accretion burst from variability monitoring of the 6.7 GHz methanol maser by the Maser Monitoring Organization (M2O, a global cooperative of maser monitoring programs). The 6.7 GHz maser burst started in 2019 January (10 Jy; Sugiyama et al. 2019), reached its peak emission (1156 Jy) in a short period of ∼2 months (MacLeod et al. 2019), and subsequently decayed rapidly, returning to a normal accretion state. The burst thus lasted only about 5 months, such a rapid timescale that no current theory can adequately explain it. Therefore, further methanol maser monitoring is needed to inform a theoretical explanation of the episodic accretion process in MSFRs. Monitoring maser variability can also yield valuable information on changing conditions around the maser regions and the potential kinematics of the maser clouds.
In addition to the 6.7 GHz methanol maser, multiple new maser transitions have been detected in G358, such as new class II methanol maser transitions, some of which are in torsionally excited states (v t = 1 and 2), at both centimeter (MacLeod et al. 2019; Volvach et al. 2020) and millimeter wavelengths, and new molecular maser species 13CH3OH, HDO, and HNCO (Chen et al. 2020a, 2020b). The latter three new species of masers accurately depict spiral-arm accretion flow structures tracing fragmentations caused by the instability of large-mass disks (Chen et al. 2020a). The discoveries of these new maser transitions suggest that the episodic accretion process of G358 has a special physical environment that can effectively excite new and rare masers from methanol and other molecular species. Nearly all these new maser transitions show dramatic changes within a short period. The rapid variability of the maser emission suggests that it is a transient phenomenon, probably associated with rapid changes in the thermal radiation field due to an accretion outburst (Chen et al. 2020a; Burns et al. 2020). Moreover, the accretion burst in G358 was decisively confirmed by multiepoch SOFIA observations (Stecklum et al. 2021). The event is found to be the first near-infrared (NIR)/(sub)millimeter-dark and far-infrared (FIR)-loud MYSO accretion burst, showing an increase in the flux of the source only in the FIR, and not in the NIR or (sub)millimeter (Stecklum et al. 2021). The dense monitoring of methanol masers at multiple transitions will help us to further investigate more details of the accretion burst phenomenon in this source. In this paper, we report the monitoring results for the multiple methanol maser lines accompanying the accretion burst phase, obtained using the Shanghai 65 m Tianma Radio Telescope (TMRT) toward G358 during the period of 2019 March 14 to May 20. We detected eight new methanol maser transitions that had not previously been known to show maser emission. Observations The TMRT was used to conduct monitoring observations of a series of molecular lines, including masers, toward the flaring 6.7 GHz maser source G358 (J2000 position: 17h43m10.1014s, −29°51′45.693″). These observations began on 2019 March 14 and concluded on 2019 May 20, with a number of epochs in order to sample the different phases of the bursting source. We used the cryogenically cooled C-, Ku-, K-, Ka-, and Q-band receivers covering a frequency range of 4−50 GHz and the Digital Backend System (DIBAS) to receive and record signals. DIBAS is an FPGA-based spectrometer designed on the basis of the Versatile GBT Astronomical Spectrometer (Bussa 2012). Observations were first made in the wideband mode with bandwidths of 187.5 MHz in C band, 500 MHz in Ku band, and 1500 MHz in the K, Ka, and Q bands, and detected emission was then monitored using the zoom-band mode with a high spectral resolution. In the zoom-band mode, each narrowband window has a bandwidth of 23.4 MHz. Using the active surface correction system, the achieved aperture efficiency of the TMRT ranges from 53% to 65%, corresponding to a sensitivity ranging from 1.28 to 1.59 Jy K−1. The uncertainty in the absolute flux density for both wideband and zoom-band spectra is less than 10%, as verified by checking the flux density of nearby continuum calibrators. More details of our TMRT observations are listed in Table 1. All observations were performed in position-switching mode, as a series of 1 or 1.5 minute ON/OFF cycles.
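As a plausibility check on the quoted numbers, the point-source sensitivity in Jy K−1 follows from the aperture efficiency and the 65 m dish geometry via the standard single-dish relation S/T = 2k/(η_A A); the formula is the textbook relation, not taken from the paper, and the small difference at the low-efficiency end (1.57 vs. the quoted 1.59 Jy K−1) presumably reflects rounding.

```python
import math

# Point-source sensitivity (Jy per K of antenna temperature) of a single
# dish: S/T = 2k / (eta_A * A_geom). Standard relation; the paper quotes
# only the resulting 1.28-1.59 Jy/K range for the 65 m TMRT.
K_BOLTZMANN = 1.380649e-23     # J/K
JY = 1e-26                     # W m^-2 Hz^-1
DISH_DIAMETER_M = 65.0         # TMRT

area = math.pi * (DISH_DIAMETER_M / 2.0) ** 2   # ~3318 m^2

for eta in (0.65, 0.53):       # quoted aperture-efficiency range
    sens_jy_per_k = 2.0 * K_BOLTZMANN / (eta * area) / JY
    print(f"eta_A = {eta:.2f} -> {sens_jy_per_k:.2f} Jy/K")
# -> eta_A = 0.65 -> 1.28 Jy/K; eta_A = 0.53 -> 1.57 Jy/K
```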
For each epoch, the total on-source time ranges from 12 to 56 minutes, depending on the signal-to-noise ratio of each detected line. The GILDAS/CLASS package was used to perform the spectral line data reduction. A linear baseline was first fitted and then removed from the averaged spectrum of all scans. The rms noise levels achieved for each line are listed in Tables 2 and 3. Eight New Methanol Transitions In total, eight new class II methanol transitions at rest frequencies of 26.12, 27.28, 28.97, 31.98, 34.24, 41.11, 46.56, and 48.71 GHz were detected toward G358 in our observations. Apart from the 28.97 GHz transition, all of the others are also discovered for the first time in interstellar space. Spectra of the eight new CH3OH transitions are shown in Figure 1 and their line properties are summarized in Table 2. The parameters and profiles of the Gaussian fits for the new transitions detected with the zoom-band mode are given in Appendices A and B, respectively. The new transitions have E u /k ranging from 121.3 to 950.7 K, and the majority are from the torsional ground state v t = 0, with two from the first torsionally excited state v t = 1. As seen in Figure 1, the flux density of these methanol transitions changed significantly within ∼1 month, but the velocity extent was always contained within the range of −18.9 to −14.3 km s−1. 26.12 GHz (10 1 -11 2 A− v t = 1): This transition was monitored over five epochs from March 17 to May 7. According to the Gaussian fit, three velocity components of this emission are detected near −17.5, −16.4, and −15.7 km s−1. All three velocity components showed significant variability in flux density during the monitoring (see Figure 4). The peak-flux density reached 1218 Jy on April 12 at −17.5 km s−1, and on May 3 the velocity component at −15.8 km s−1 reached 1265 Jy. (Column notes for Tables 1-3, covering backend parameters, system temperature, sensitivity, aperture efficiency, beam size, integration time, rest frequencies from the CDMS https://cdms.astro.uni-koeln.de/ and JPL https://spec.jpl.nasa.gov/ catalogs, observation epochs, velocity ranges above 3σ rms, peak velocities, peak and integrated flux densities, rms noise, and velocity resolutions, belong with the original tables; see also Cragg et al. 2005.)
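The velocity components quoted above come from multi-component Gaussian fits to the spectra; a minimal sketch of such a fit on a synthetic three-component spectrum is below. The component amplitudes and velocities echo the 26.12 GHz discussion, but the data are simulated and the fitting choices (scipy curve_fit, three components) are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a three-component Gaussian model to a maser spectrum (flux density
# vs LSR velocity). Synthetic data; initial guesses echo the -17.5,
# -16.4, and -15.7 km/s components discussed in the text.
def gaussians(v, *p):
    total = np.zeros_like(v)
    for amp, v0, sigma in zip(p[0::3], p[1::3], p[2::3]):
        total += amp * np.exp(-0.5 * ((v - v0) / sigma) ** 2)
    return total

velocity = np.linspace(-20.0, -13.0, 500)   # km/s
true_params = [1218, -17.5, 0.15, 400, -16.4, 0.2, 900, -15.7, 0.18]
flux = gaussians(velocity, *true_params)
flux += np.random.default_rng(0).normal(0.0, 5.0, velocity.size)  # noise

guess = [1000, -17.5, 0.2, 300, -16.4, 0.2, 800, -15.7, 0.2]
popt, _ = curve_fit(gaussians, velocity, flux, p0=guess)

for i in range(3):
    amp, v0, sigma = popt[3 * i: 3 * i + 3]
    print(f"component {i + 1}: {amp:7.1f} Jy at {v0:6.2f} km/s, "
          f"FWHM = {2.3548 * abs(sigma):.2f} km/s")
```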
The observations made on nearly the same date, around March 17–18, show that the spectral profile of the 26.12 GHz line is similar to, but weaker than, that of the 20.97 GHz line presented in Volvach et al. (2020), in accordance with the model predictions of Cragg et al. (2005).

28.97 GHz (8 2 –9 1 A − ): This is the first detection of this transition in G358; it was observed twice, on April 1 with the wideband mode and on May 3 with the zoom-band mode. The spectrum of this transition shows a complex profile, with at least five velocity components detected in the zoom-band mode (see Appendices A and B). In order to determine whether this emission varied, we smoothed the zoom-band data (0.015 km s −1 ) to match the spectral resolution of the wideband data (0.95 km s −1 ). At the same resolution, we found that the flux density of the ∼−16 km s −1 component decreased slightly, from 14.24 Jy on April 1 to 10.68 Jy on May 3. Notably, this transition has been detected in other sources. Toward Orion KL it was seen as quasi-thermal emission from the methanol emission center, about 10″ south of the hot core region, with a peak-flux density of about 1.1 Jy, whereas toward W3(OH) the 28.97 GHz emission is masing and the peak-flux density reached 15 Jy (Wilson et al. 1993; see also Shuvo et al.).

Another of the new transitions reached a peak-flux density of 9.7 Jy at −15.9 km s −1 on April 6 with the wideband mode; on May 2 the peak flux had decreased to 6.3 Jy at −17.0 km s −1 in the zoom-band mode. This is a line from the same series as the previously detected 229.589 GHz (15 4 –16 3 E) line. A direct comparison of the spectra of these lines is not possible because the 229.589 GHz line is weak and its spectrum was not shown; nevertheless, the detection of a maser line from the same series can be considered as support for the maser nature of the line detected here.

46.56 GHz (20 7 –21 6 A + ): This transition was observed twice, on April 6 with the wideband mode and on May 2 with the zoom-band mode. The spectral profile of the wideband data shows a clear double-peaked structure (see Figure 1), with a peak-flux density of 2.6 Jy at −15.5 km s −1 . Zoom-band observations less than a month later, with an rms noise of 0.09 Jy, revealed no emission.

Previously Known Maser Transitions in G358
In this section, we report a series of previously known maser transitions, including 15 methanol masers and 1 H 2 O maser, detected in these observations toward G358. Figure 2 shows the spectra of these maser transitions and the line properties of each maser transition are summarized in Table 3. The parameters and profiles of the Gaussian fits for the transitions detected with the zoom-band mode are also given in Appendices A and B, respectively. It can be clearly seen from Figure 2 that the intensities of all maser emissions varied significantly, with peak variations in the range of 30% to 100%, during the monitoring, while their velocity ranges remained essentially unchanged. The flux-density behavior falls into three cases: (1) gradually decreasing; (2) decreasing first, then increasing, followed by decreasing again; and (3) gradually increasing. Most of the class II methanol maser emissions showed their brightest intensity at the beginning of the observations in March and then gradually decreased (i.e., case 1). Some others, such as the 19.97, 20.35, 20.97, and 23.12 GHz transitions, decreased first, then increased, and then decreased again during the monitoring period (i.e., case 2), peaking around April 12 (see Section 5.2.1).
In addition, the flux densities of the class I methanol maser transitions at 36.17 and 44.07 GHz, and of the H 2 O maser transition at 22.24 GHz, gradually increased from April to May (i.e., case 3). Three first torsionally excited methanol maser transitions, at 6.18, 20.97, and 44.96 GHz, were consistently detected from March to May. The 6.18 and 20.97 GHz emissions had higher peak intensities than a month earlier (Volvach et al. 2020). This also suggests that G358 had a specific pumping environment during the outburst. By May, the components with velocities greater than −16.5 km s −1 had completely disappeared.

Remarks on Individual Transitions
36.17 GHz and 44.07 GHz: These two class I methanol maser transitions were only monitored at two epochs, with the wideband mode in April and the zoom-band mode in May. The 36.17 GHz emission has three components close together, covering a velocity range of −21.0 to −16.2 km s −1 , which is wider than that of the class II transitions. On March 7, the peak-flux density of the 36.17 GHz transition was 0.5 Jy at −19.5 km s −1 , as reported previously, and it reached 0.74 Jy at −18.2 km s −1 on May 2 in our observations. Oddly enough, this transition was not detected on April 1 with the wideband mode, suggesting that its flux was below the threshold of 0.36 Jy (3σ rms ). When the zoom-band spectrum of May 2 was smoothed to the same velocity resolution as the wideband mode, the peak-flux density was ∼0.60 Jy, which is still larger than the April 1 threshold. This suggests that, after excluding the effect of the different observing modes, the flux of this transition did show a real, if modest, increase from April to May. The 44.07 GHz transition also has three components, sharing a very similar velocity range with the 36.17 GHz transition. The −21.1 km s −1 component was detected with a peak of 4.5 Jy on March 5, as reported previously, and at 2.7 Jy on May 2 in our observations. Notably, the spectral shapes, peak velocities, and velocity ranges of these two class I methanol maser transitions differ from those of the class II transitions detected toward this source. Both observational surveys and theoretical considerations show that the maser emission in the 36.17 and 44.07 GHz transitions generally comes from the same regions, but that their spectra do not coincide in detail, owing to their different sensitivity to the pumping conditions (Sobolev et al. 2007; Voronkov et al. 2014; Leurini et al. 2016; Sobolev & Parfenov 2018). The locations of these two transitions are close to the position of the water maser cluster components II-3-4, which appeared in 2019 May–June (Bayandina & Burns 2022b). The velocity of the 44.07 GHz line emission is close to that of the water masers detected in May, suggesting that these masers may also be close in location. The occurrence of new water maser clusters is likely associated with the accretion burst in G358 (Bayandina & Burns 2022b). Both water masers and class I methanol masers are pumped by collisions, so the variability of the 36.17 and 44.07 GHz transitions is also likely associated with the accretion burst phenomenon; in particular, this can explain why, in our monitoring, the 36.17 GHz maser appeared only on May 2.

37.70 GHz: This methanol transition was monitored at two epochs, with the wideband mode on April 1 and the zoom-band mode on May 2. It has two velocity components, at −16.2 and −17.6 km s −1 .
Its peak-flux density was 250 Jy at −17.3 km s −1 on March 7, as reported previously, 111 Jy at −17.6 km s −1 on April 1, and 64 Jy at −16.2 km s −1 on May 2 in our observations. The emission of this transition therefore decreased gradually from March to May.

44.96 GHz: This transition was monitored at two epochs, on April 6 and May 2, with the wideband mode. It shows a distinct double-peaked spectral structure, with two velocity components peaking at −15.4 and −17.1 km s −1 during the period from April to May. This transition was also detected, with a clear double-peaked structure, on March 5, with a peak-flux density of 508 Jy at −15.4 km s −1 . The emission of this same velocity component gradually decreased to 61 Jy on April 6 and then to ∼15 Jy on May 2.

45.84 GHz: This transition was monitored at two epochs, with the wideband mode on April 6 and the zoom-band mode on May 2. It has two velocity components, peaking at −16.2 and −17.3 km s −1 , as seen in the zoom-band spectrum. On March 5, the peak-flux density of this transition reached 414 Jy at −15.4 km s −1 . On April 6, the peak-flux density reached 54 Jy at −17.2 km s −1 at a velocity resolution of 0.6 km s −1 in our observation, and on May 2, at the same velocity component, it reached 86 Jy at a velocity resolution of 0.01 km s −1 . When the May 2 zoom-band spectrum was smoothed to a velocity resolution of 0.6 km s −1 , similar to that of the wideband spectrum, the peak-flux density was 46 Jy. The peak flux of this emission therefore decreased gradually after March 5.

22.24 GHz: This H 2 O maser transition was monitored at seven epochs from March 17 to May 7. The H 2 O maser emission has two components, peaking at −17.3 and −19.2 km s −1 , at a velocity resolution of 0.15 km s −1 in April. The intensity of the H 2 O maser emission was very weak (peaking at 0.5 Jy) in March, until April 12, when it increased significantly to a peak of 4.3 Jy; it then reached 8.6 Jy on April 15 at −17.3 km s −1 . On April 20, the same velocity component reached a peak-flux density of 24 Jy (MacLeod et al. 2019). Furthermore, on May 7, new maser features were detected in the velocity ranges of −22 to −17 km s −1 (peaking at ∼−20 km s −1 ) and −16 to −13 km s −1 (peaking at ∼−14.5 km s −1 ); these are close to the features detected with the Very Large Array (VLA) in June by Bayandina & Burns (2022b), with the exception of a new maser component, peaking at −21.5 km s −1 , that appeared in June in the VLA detection. These new water components have velocity ranges similar to those detected in both the 36.17 and 44.07 GHz methanol transitions, supporting the idea that they share the same pumping origin and are associated with the action of the accretion burst on the shocked regions where they are excited (see above). Overall, the water maser emission was barely visible in March until the flux density suddenly increased on April 12 and reached a much higher intensity on April 20, with new velocity components detected in May–June; this behavior is quite different from that of the class II methanol maser transitions. It implies that the water maser burst occurred later than that of the methanol masers, likely because the water masers are distributed in regions farther from the bursting source G358-MM1 than the methanol masers. This is supported by the VLA images of the methanol and water masers (Chen et al. 2020a; Bayandina & Burns 2022b).
There is thus an expected time delay between the propagation of the heat wave to the methanol maser regions and to the water maser regions.

The Maser Characteristics of the Eight New Methanol Lines
G358.93-0.03 harbors two molecular hot cores, MM1 and MM3, of which MM1 shows significantly richer molecular spectra and the brightest millimeter continuum emission. So far, the accretion burst and all of the discovered masers have been detected toward the peak of MM1, implying that this region is well suited to exciting the maser emission (Chen et al. 2020b). The majority of these new CH 3 OH transitions show a velocity range from −18.9 to −14.3 km s −1 , similar to the 6.67 GHz spectrum in Figure 2. At the same time, some transitions (e.g., 26.12, 27.28, 34.24, 46.56, and 48.71 GHz) share at least one of the −15.5, −16.0, and −17.3 km s −1 velocity components with the 6.7 GHz transition. This suggests that the regions where the eight new lines originate are the same as, or close to, that of the 6.67 GHz transition. Using Gaussian fits, we obtain line widths of the strongest feature in each of the new transitions ranging from 0.27 to 0.55 km s −1 , except for the transitions at 31.98 and 46.56 GHz, which were detected only with the wideband mode (see Table A1). For the two brightest spectral peaks, at −17.4 and −15.6 km s −1 , the average line widths of the new transitions detected in the zoom-band mode are 0.33 and 0.40 km s −1 , respectively; the line widths of the two main spectral features of the 6.67 GHz transition are 0.44 and 0.48 km s −1 . For the two main features, the average line widths of these new transitions are therefore even narrower than those of the 6.67 GHz transition. In addition, like the other maser lines, the new transitions have complex spectral compositions and rapid variations. The maser emission in all detected transitions resides within a region of ∼0″.2 around the bursting source (Bayandina et al. 2022a). Assuming that the new methanol emission originates in this region, we infer lower limits to the brightness temperature in the range of 3 × 10 4 to 8 × 10 7 K for these new transitions through the equation T B = S ν λ 2 /(2kΩ), where S ν is the flux density, λ is the wavelength, k is the Boltzmann constant, and Ω is the solid angle. These values are much higher than the maximum kinetic temperatures (100–300 K) of the typical thermal emission associated with hot molecular cores in MYSOs. In addition, the typical thermal line width expected at such a high temperature (3 × 10 4 K) is ∼6 km s −1 , through the relation Δv FWHM = 0.2 (T/A) 1/2 km s −1 , where T is the temperature in kelvin and A is the molecular mass number of methanol; this is much larger than the average line width of our detections. All of this suggests that the new emission is unlikely to be thermal in nature. Beyond this, we performed a rotation-diagram analysis to obtain additional evidence that the excitation of these transitions deviates significantly from local thermodynamic equilibrium. The rotation diagram of the newly detected CH 3 OH transitions is shown in Figure 3, and the method is described in detail in Blake et al. (1987) and Chen et al. (2013). In general, the eight new transitions do not show the linear correlation between ln(3kW/(8π 3 νSμ 2 )) and E u /k that would be expected under thermal conditions in the rotation diagram.
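The three non-LTE diagnostics used in this section (brightness-temperature lower limit, thermal line width, and rotation diagram) can be reproduced numerically as sketched below. This is a hedged sketch: the 0.2 arcsec Gaussian source size, the example flux density, and the cgs rotation-diagram convention are our assumptions for illustration, and the input numbers are representative rather than values taken from the tables.

```python
import numpy as np

K_B_SI = 1.380649e-23        # Boltzmann constant, J/K
K_B_CGS = 1.380649e-16       # Boltzmann constant, erg/K
C_LIGHT = 2.99792458e8       # m/s
JY = 1e-26                   # W m^-2 Hz^-1
DEBYE = 1e-18                # esu cm

def brightness_temperature(flux_jy, freq_ghz, size_arcsec):
    """Lower limit T_B = S_nu * lambda^2 / (2 k Omega) for a Gaussian source."""
    lam = C_LIGHT / (freq_ghz * 1e9)
    theta = size_arcsec * np.pi / (180.0 * 3600.0)
    omega = np.pi * theta**2 / (4.0 * np.log(2.0))
    return flux_jy * JY * lam**2 / (2.0 * K_B_SI * omega)

def thermal_linewidth_kms(temperature_k, mass_amu=32.0):
    """Delta v_FWHM ~ 0.2 * (T/A)^0.5 km/s for a molecule of mass number A."""
    return 0.2 * np.sqrt(temperature_k / mass_amu)

def rotation_diagram_point(w_K_kms, freq_ghz, s_mu2_debye2, e_u_K):
    """One point (E_u/k, ln[3kW/(8 pi^3 nu S mu^2)]) of the rotation diagram (cgs)."""
    w_cgs = w_K_kms * 1e5                          # K km/s -> K cm/s
    s_mu2 = s_mu2_debye2 * DEBYE**2                # Debye^2 -> (esu cm)^2
    y = np.log(3.0 * K_B_CGS * w_cgs / (8.0 * np.pi**3 * freq_ghz * 1e9 * s_mu2))
    return e_u_K, y

# Illustrative numbers only: a ~1200 Jy feature at 26.12 GHz confined to 0.2 arcsec.
print(f"T_B ~ {brightness_temperature(1200.0, 26.12, 0.2):.1e} K")       # ~5e7 K
print(f"thermal dv at 3e4 K ~ {thermal_linewidth_kms(3e4):.1f} km/s")    # ~6 km/s
# Under LTE, points from rotation_diagram_point() for all transitions would fall
# on a straight line of slope -1/T_rot; the scattered/inverted trend in Figure 3
# is the signature of maser (non-LTE) excitation.
```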
In particular, when E u /k < 500 K, the five transitions observed at the same epoch (2019 May 3) even show an inverse trend in the rotation diagram, inconsistent with thermal conditions (van Dishoeck et al. 1995; Ginsburg et al. 2017; Chen et al. 2019). Notably, the three transitions in the lower-right corner, observed in April, do not show a trend opposite to that of the above five transitions, probably because of their variability. Combining all of the properties discussed above, we conclude that all eight newly detected methanol transitions from G358 deviate from thermal conditions. Two of the methanol maser transitions detected in our observations, at 26.12 and 48.71 GHz, are from the first torsionally excited state (v t = 1). They were not included in the published lists of class II methanol maser candidates predicted from theoretical calculations (Cragg et al. 2005). The detection of these two torsionally excited transitions with bright intensities suggests that we were observing a special pumping regime for class II methanol masers, and the current observations therefore offer substantial new information for methanol maser pumping models.

Maser Variability
Dynamic spectra of the 11 CH 3 OH maser transitions and 1 H 2 O maser transition observed at more than five epochs over the two months from 2019 March to May toward G358 are presented in Figure 4. These masers clearly show temporal variations in their different velocity components (see panels (a) and (b) of each transition in Figure 4).

5.2.1. The New Flare Event of the 19.97, 20.35, 20.97, 23.12, and 26.12 GHz Maser Transitions on April 12
From Figure 4, we can see that during the monitoring from 2019 March to May the flux density shows a generally declining trend for most class II CH 3 OH transitions, with the exceptions of the 19.97, 20.35, 20.97, 23.12, and 26.12 GHz transitions. These five transitions show a sharp increase in their flux densities in the April 12 observations; after this flare, the emission gradually decays again. From the variability shown in Figure 4, we infer that this new flare started around April 8. In just four days, the maser emission increased by factors of 5, 3.6, 3.25, 3.26, and 6.22 for the above five transitions, respectively. To further confirm whether this new short-period flare event in these five maser transitions is real, we compared our data with observations made with other telescopes at nearby dates. These close-date observations also show agreement in flux density, within the uncertainty of the flux scale, for the 23.12 GHz transition. The agreement of the peak-flux densities of these three transitions, measured at close dates with different telescopes, supports the reliability of the flux increase of the 19.97, 20.35, 20.97, 23.12, and 26.12 GHz maser transitions on April 12. Apart from the new line at 26.12 GHz, we have collected the peak-flux densities of the remaining four lines during the flare preceding the one on April 12; for the 19.97 GHz transition at −15 km s −1 , the peak-flux density was 104 Jy. A direct question related to the two maser flares is whether they are associated with two different accretion burst events, or whether they are the result of the influence of the same accretion burst at different maser locations.
As suggested by Volvach et al. (2020), given the changes in the line width of the 20.97 GHz transition between the different flares, it is possible that the same accretion burst excites the maser in different regions along the path of the heat-wave passage. If the two maser flares were the result of two different accretion bursts acting on the same maser locations, the maser spatial positions would not be expected to vary between flares. However, interferometric observations have revealed that the spatial distributions of multiple methanol maser transitions did in fact change between burst phases (Burns et al. 2020; Bayandina et al. 2022a). We therefore argue that the new flare detected in April in our monitoring is mainly produced by the same burst acting on different maser clouds. The line widths of the maser features are shown in Figure 4(d). The standard theory of masers predicts line narrowing during unsaturated amplification and rebroadening to the full Doppler width during saturated amplification (Goldreich & Kwan 1974; Hirota et al. 2014); this implies that these maser spots may be saturated. In the saturated case, the standard theory predicts that the profile in Doppler velocity becomes the same as the Gaussian velocity distribution of the masing molecules (Litvak 1970; Nedoluha & Watson 1991; Watson et al. 2002). Therefore, for these saturated maser spots, we can derive the gas temperature from the Doppler velocity extent through the relation Δv FWHM = 0.2 (T/A) 1/2 km s −1 (see Section 5.1). The relationship between gas temperature and observing time is shown by the dashed line in Figure 4(c). In general, the gas temperature of most maser spots gradually decreased during the monitoring from 2019 March to May. This suggests that the maser components were undergoing a cooling process during the monitoring, possibly associated with the propagation of a heat wave induced by the accretion burst event in G358. It is also noted that the 12.18 GHz maser emission is generally stronger than the 6.67 GHz emission on the same observing dates (see Figure 4). Usually, the 6.67 GHz maser emission is stronger than the 12.18 GHz emission in almost all sources in which both transitions are detected (Breen et al. 2012; Song et al. 2022). Our observations show the reverse in G358, suggesting that the physical conditions during the accretion burst event in G358 differ significantly from those expected in massive star-forming regions (MSFRs) without accretion bursts.

Summary
Monitoring of multiple maser (methanol and water) transitions toward the accretion-burst MYSO G358.93-0.03 was carried out with the TMRT during the period from 2019 March to May. This period mainly corresponds to the decay phase of the accretion burst. The main results and scientific insights obtained from this monitoring are summarized as follows:
1. The special conditions associated with the accretion burst are currently known to be able to excite new methanol maser transitions. Eight new methanol maser lines, at 26.12, 27.28, 28.97, 31.98, 34.24, 41.11, 46.56, and 48.71 GHz, were detected for the first time during the accretion burst of the MYSO G358.93-0.03. Such a large number of new methanol maser transitions has not been reported for the other two accretion bursts, in S255IR and NGC 6334I, so the G358 burst appears to involve special conditions. Indeed, it has a much shorter duration, suggesting that the accreted mass is smaller than in the other detected events.
Sophisticated estimates in Stecklum et al. (2021) show that the accretion burst event was produced by the accretion of a clump of roughly planetary mass. In this case, the star does not experience substantial bloating, and its emission during the burst can be considerably harder, which results in the detection of previously unknown, highly excited maser transitions.
2. Nearly all of the maser lines show obvious and dramatic changes within a short period (of the order of a month). Some of the maser transitions showed repeated flares in 2019 April. This may be related to the passage of the heat wave induced by the same accretion burst event acting on different maser positions.
3. During the monitoring, the gas temperatures of the clouds, derived under the hypothesis of saturated masers, generally showed a significant decay. This indirectly indicates that the maser flares over such a short period are not due to a kinetic process, but to the propagation of thermal radiation from the MYSO's luminosity outburst, with a cooling process detected in our monitoring.
Efficiency evaluation of customer satisfaction index in e-banking using the fuzzy data envelopment analysis
Article history: Received February 28, 2013; received in revised format September 19, 2013; accepted October 23, 2013; available online November 24, 2013.
E-commerce has created significant opportunities for corporations to understand customers' expectations and desired values, to increase their satisfaction, and to expand their market share more efficiently. The most significant activity of e-commerce is in the field of e-banking and financial services. The customer satisfaction index is a concept introduced for evaluating service quality in electronic banking. Given the relative importance of customer satisfaction in e-banking, defining scientific criteria for assessing this index is essential, so a scientific and efficient method is always needed. The purpose of this paper is to use fuzzy data envelopment analysis (DEA) techniques for evaluating and ranking the efficiency of the online customer satisfaction index in eight economic banks in Iran. Here, we first study fuzzy set theory and the traditional DEA method in the same model. Next, the relationship between them is developed, which yields a fuzzy DEA model with qualitative data. The SPSS and GAMS software packages were employed to analyze the data collected through questionnaires. The results show that three economic banks were located on the efficiency frontier in terms of customer satisfaction in e-banking and were fully efficient. © 2014 Growing Science Ltd. All rights reserved.

Introduction
Currently, because of technological progress, businesses have switched to electronic transactions. The digital economy is most visible in the fields of electronic commerce and electronic banking. Companies that use the internet and a web site can adopt a cost-effective approach that helps them become a fast-to-market, fast-to-produce, fast-to-deliver, and fast-to-service company (Feicheng et al., 2009). The phenomenon of e-banking is one of the achievements of e-commerce. Today, e-banking is known around the world as an inseparable part of e-commerce and has a fundamental role in its implementation. The growth of e-commerce in the world requires an easy, fast, and accurate banking system for transferring financial resources; thus, e-banking plays a fundamental role in e-commerce. Electronic banking (e-banking) is defined as the automated delivery of new and traditional banking products and services directly to customers through electronic, interactive communication channels (Salehi & Alipour, 2010). The interactive nature of e-banking provides an opportunity to achieve a deeper understanding of customers. On the other hand, the benefits of e-banking in terms of providing better services to customers and improving banks' productivity indices have been investigated by many researchers around the world. The role of customer satisfaction and service quality in the development of banking relationships has been especially emphasized in the banking literature (Petridou et al., 2007). Many studies show that, in today's competitive world, the success of companies depends on customer retention and good relationships with customers (Hsiao, 2009). Blanchard and Galloway (1994) stated that customer satisfaction is the result of a customer's perception of the value received in a transaction or relationship, where value equals perceived service quality relative to price and customer acquisition costs (Su, 2004). The
concept of customer satisfaction plays an important role in the marketing literature.Almost all researchers agree that customer satisfaction is generally assumed to be a significant determinant repeat sales of products and services, positive word-of-mouth, and customer loyalty (Barry & Terry, 2008).Customer satisfaction is considered as a key ingredient in achieving superiority and as an essential role on the success of any organization because it leads to profit ability and customer loyalty to the organization.Since Oliver (1980) proposed a cognitive model to determine satisfaction, customer satisfaction and customer satisfaction index (CSI) have been widely developed in both theory and applications, especially in the field of marketing, education, medical treatment and guesthouse management (Liu et al., 2008).Established in 1989, the Swedish Customer Satisfaction Barometer (SCSB) was the first truly national customer satisfaction index for domestically purchasing and consuming products and services (Grigoroudis et al., 2008).In developed countries, great efforts have been accomplished in the research and development unit to improve practical understanding customer satisfaction.As an example, we can point out to Fornell described the effective factors in customer satisfaction and found the American Customer Satisfaction Index (ACSI).Later, the European Consumer Satisfaction Index (ECSI) was introduced as an economic indicator, which measures customer satisfaction by an adaptation of the SCSB and the ACSI.Following the public acceptance and understanding the importance of customer satisfaction in Europe and America, many countries have attempted to determine CSI in a national scale; such as SWICS in Switzerland, NCSB in Norway, MCSI in Malaysia and KCSI in Korean (Grigoroudis et al., 2008). Electronic satisfaction and quality of electronic services are considered as major issues in global electronic commerce and electronic banking.High quality of electronic services is the success key for any corporation that works in global competitive environment of electronic commerce.In the modern management science, the customer satisfaction is considered as a baseline standard of performance and a possible standard of excellence for any business organization (Mihelis et al., 2001).Measuring customer satisfaction has become increasingly popular for the past two decades and today represents an important source of revenue for market research firms.Measuring customer satisfaction offers an immediate, meaningful and objective feedback about clients' preferences and expectations (Gazor et al., 2012).Considering the importance of customer satisfaction index in e-commerce (e-banking), using the scientific criteria for evaluation of mentioned index plays essential role for applying a scientific and efficient method (Liu et al., 2008). 
Due to the critical role of customer satisfaction index, analysis of this index is an essential element to discover the strengths and weaknesses of an organization in terms of e-commerce and e-banking.Therefore, efficiency evaluation can significantly help to select the appropriate model and it has been used in the previous works on e-banking from Internet services.A variety of methods have been so far presented by considering several factors to measure customer satisfaction index where each has its own advantages and disadvantages.One of the most useful analysis methods is the Data Envelopment Analysis (DEA).This method is described as a widely used, quantitative and standard tool in efficiency measuring studies and is able to provide solutions for better management to achieve the expected output (Cooper et al., 2005).In recent years, the use of qualitative and imprecise data in DEA model is the most important challenge of researchers.One of the options discussed in this context is the use of fuzzy set theory (Wen & Li, 2009;Zerafat Angiz et al., 2012). In this study, using fuzzy set theory applications, the possibility of using theoretical data, experiences and human judgments, which are usually qualitative and imprecise, has been applied in the context of DEA model.Fuzzy logic theory is against the Aristotelian binary logic and not only can be used in theory domain, but also is used in industry, which has been involved in various researches.Fuzzy logic theory can be widely used in operation research, management science, control theory and many other fields (Zimmermann, 2010).DEA is a management technique providing a powerful tool for managers to measure their customer satisfaction index compared with other competitors for making better decisions in future.Therefore, in this study, a fuzzy DEA model is used to eliminate the defects in the classical DEA models and to provide the possibility of the measurement of customer satisfaction in e-banking, where data are imprecise and in this way, some useful suggestions on improving customer loyalty are presented. Here, the fuzzy DEA method based on qualitative data is used for the first time to determine the effectiveness of customer satisfaction index on internet services in e-banking.The organization of this paper is as follows: Some basic concepts of customer satisfaction and studies on customer satisfaction, e-commerce and e-banking and the research model are presented in section 2. The research methodology is provided in section 3. Section 4 presents the results of data analysis.In the last section (Section 5) a summary of research results and management suggestions for future research are presented. 
Literature review and research model E-banking is a prerequisite for e-commerce and e-commerce will also further expand with e-banking development.Using e-commerce for achieving the customer satisfaction is considered as the primary driver for the development of modern business firms.The researchers emphasize the importance of customer satisfaction in the banking industry and its indispensable role in maintaining the customer (Farquhar & Panther, 2008).When an organization can attract a new customer, customer satisfaction will be the starting point to establish a long-term relationship between customer and the organization.Therefore, customer satisfaction is an important factor in determining the quality of provided service and a source of competitive advantage for the organization.Customer satisfaction is post purchase evaluation of a service following a consumption experience (Vasudevan et al., 2006).Dissatisfied customers, on the other hand, are likely to switch brands and engage in negative word of mouth advertisement.Furthermore, behaviors such as repetitive purchase and word-of-mouth directly influence the viability and profitability of a firm (Razak et al., 2007).E-Satisfaction is defined as the contentment of a customer with respect to his/her prior purchasing experience with a given electronic commerce firm (Anderson & Srinivasan, 2003).Customer satisfaction and loyalty are two dimensions of the most important constructs in relationship e-banking and customer loyalty is the result of customer satisfaction.Customer loyalty is defined as buyer's deeply held commitment to stick with a product, service, brand or organization consistently in the future, despite new situations or competitive overtures to induce switching (Flint et al., 2011).E-loyalty is defined as the customer's favorable attitude toward an electronic business resulting in repetitive buying behavior (Anderson & Srinivasan, 2003). Many studies have been conducted on customer satisfaction and e-banking.Numerous examples of these studies are presented as follows.Szymanski and Hise (2000) provided a very interesting model to identify the determinants of electronic satisfaction in online shopping and e-retailing and found that convenience, site design, and financial security display the greatest effect on e-satisfaction.Yang and Fang (2004) determined the online services quality dimensions and showed the relationship between these dimensions with customer satisfaction.These qualitative dimensions include: reliability, responsiveness, ease of use, competence.Petridou et al. (2007) in a research on the quality of bank service studied the effect of service quality on customer satisfaction in private banks in Greece and Bulgaria; the results showed that Greece customers had higher levels of service quality perceptions as compared with Bulgarian customers.Molina et al. (2007) surveyed the effect of longterm customer relationships with their bank and satisfaction.Their results showed that an ensured customers' attitude has a significant effect on customer satisfaction of bank.In a recent study, Liu et al. (2008) studied the concept of customer satisfaction index in e-commerce from the viewpoint of system control and system analysis.Poddar et al. 
(2009) presented a model in which three variables of web site personality, web site quality, and web site customer orientation were identified as factors affecting purchase intentions.Finn (2011) in a research in the field of electronic services, using statistical methods showed that some service quality dimensions have non-linear relationship with customer satisfaction.Considering the importance of customer satisfaction in e-banking and the results of previous researches, the variables which can create different values at different times for internet customers in e-banking explained as follows: Perceived quality: Perceived quality refers to a consumer's judgment about overall excellence or superiority (Sánchez-Fernández & Iniesta-Bonillo, 2009).Service quality is the customers' subjective assessment of the expectations with actual service performance (Cheung & Lee, 2005).Service quality in e-commerce can be defined as the consumers overall evaluation and judgment of the excellence and quality of e-service offerings in the virtual market place (Marimon et al., 2012).Customer satisfaction and loyalty are functions of customer perception about service quality (Lee & Hwan, 2005). Perceived value: Perceived value refers to the perceived level of product and service quality relative to the price paid (Ciavolino & Dahlgaard 2007).Perceived value is often assumed to involve a consumer's assessment of the ratio of perceived benefits to perceived costs (Wang et al., 2012).Customer satisfaction is directly associated with perceived value; when perceived value increases, customer satisfaction also increases. Customer expectations: Customer expectations refer to the level of quality that customers expect to receive and are the result of prior consumption experience with a firm's products or services (Ciavolino & Dahlgaard, 2007).According to Zeithaml et al. (2006), customer expectations are beliefs about a service, which serves as standards or reference points in which the performance of the service is judged.Customer expectations are reference points for measuring the received service quality.That is why the variable of customer expectations has a positive impact on customer satisfaction and customer loyalty.In addition, customer expectations have positive and direct impact on perceived quality and perceived value. Image: the image construct evaluates the underlying image of the company.Image refers to the brand name and the kind of associations customers obtain from the product/company (Ciavolino & Dahlgaard, 2007).Kristensen et al. (2000) argued that image is an important dimension of the customer satisfaction model.Image is a consequence of being reliable, professional and innovative, having contributions to society, and adding prestige to its user.It is anticipated that image has a positive effect on customer satisfaction, customer expectations and customer loyalty. 
Customer satisfaction: Customer satisfaction can be described as customers' evaluations of a product or service with regard to their needs and expectations (Bai et al., 2008).Customer satisfaction is post purchase evaluation of a service following a consumption experience (Vasudevan et al., 2006).Customer satisfaction is an emotional reaction or a state of mutual cognitive understanding (Reynolds & Harris, 2009).Szymanski and Hise (2000) defined e-satisfaction from the e-commerce perspective, as the measure of satisfaction with the online or web shopping.According to a popular theory in marketing literature, the immediate consequence of customer satisfaction increase is due to reduction in his/her complaints and increase customer loyalty (Casado-Díaz & Nicolau-Gonzálbez, 2009). Customer loyalty: Customer loyalty has been defined as a deeply held commitment to rebuy or repatronize a preferred product/ service consistently in the future (Davis-Sramek et al., 2009).Cyr et al. (2008) defined e-loyalty, meaning online loyalty, as perceived intention to visit or use web sites and to consider purchasing from them now and later.Wirtz (2003) listed the results of customer satisfaction; repeat purchase; loyalty; positive word-of-mouth and increased long-term profitability. For using the fuzzy DEA model to determine efficiency of customer satisfaction index from provided internet service in e-banking, the perceived quality, customer expectations, image, perceived value were considered as input variables while customer satisfaction and customer loyalty were considered as output variables (See Fig. 1).To determine the efficiency of customer satisfaction, we converted a fuzzy DEA model to linear programming model using the -cut method.GAMS software was used in this study to design the structure of branches and to evaluate the efficiency of customer satisfaction index from electronic banking services. Survey instrument Fuzzy DEA model is associated with customer satisfaction index shown in Fig. 1 based on prominent previous researches and theories on customer satisfaction.This model includes some hidden variables, which can be described by a set of observable items.Perceived quality, image, customer expectations, and perceived value are considered as input variables of the model while customer satisfaction and customer loyalty are considered as output variables of the model.These variables and constituent items as presented in Table 1. Data collection tool was a questionnaire based on mentioned index.This questionnaire included 20 questions adjusted in ordinal and Likert scale including very low, low, medium, high and very high.Validity and reliability are two necessary features for every measuring material such as questionnaire because these materials should analyze data and provide conclusions for researchers. 
To ensure the validity of the questionnaire, the experts' opinions were used. In addition, Cronbach's alpha coefficient was used to assess the reliability of the questionnaire: to test the reliability, 30 questionnaires were collected from a pilot sample and Cronbach's alpha was calculated using SPSS software; the resulting value of 0.903 indicates the high reliability of the questionnaire.

Sample and data collection
The evaluation of customer satisfaction index efficiency in e-banking was conducted in the second half of 2013 in Iran. Electronic banking in Iran is developing so rapidly that new banking services have led to intense competition in the e-banking industry. In Iran, more than 20 public and private banks are active in the field of electronic banking services. Isfahan is a pioneering center of e-banking and the tourism industry in Iran. The population of this study comprised the online customers of eight economic banks in Isfahan; customers of these banks were quite active in using e-banking services in this period. Although the estimated required sample size for this study was 240, the questionnaires were given to 276 purposefully selected respondents to ensure data adequacy. These respondents were selected among customers who had sufficient knowledge of e-banking. Demographically, 64% of the respondents were men, 67% were in the age range of 20 to 35 years, and 33% were over 35 years old. In terms of education, 53% of the respondents had a bachelor's degree, 31% had a master's degree or higher, and 16% were students. These statistics suggest that the higher the education of the respondents, the greater their tendency to use e-banking.

Data envelopment analysis
Data envelopment analysis (DEA) is a non-parametric method for assessing decision-making units (DMUs) that has many applications in efficiency evaluation and productivity measurement. The DEA method was first introduced in 1978 by Rhodes, as a doctoral thesis at Carnegie Mellon University, and was used to assess the educational progress of national school students in America. The first DEA article was presented by Charnes et al. (1978) in that year and became known as the CCR model. DEA models assess the ability of each decision-making unit (DMU) to convert inputs into outputs; this level of ability is called efficiency. Efficiency evaluation of a DMU in the DEA model requires comparing its outputs with its inputs, and DMUs are characterized by several inputs and outputs. The efficiency score in the presence of multiple input and output factors is defined as (Talluri, 2000): efficiency = (weighted sum of outputs) / (weighted sum of inputs).
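As a toy illustration of this ratio definition (the numbers and weights below are arbitrary placeholders, not the paper's data), the efficiency of a single DMU with the model's four inputs and two outputs could be computed as follows. In DEA the weights are not fixed a priori; each DMU is allowed the weights that show it in the best light, subject to no DMU exceeding an efficiency of 1.

```python
import numpy as np

def efficiency_ratio(outputs, inputs, u, v):
    """Efficiency of one DMU as (weighted sum of outputs) / (weighted sum of inputs).

    outputs, inputs : observed output and input values for the DMU
    u, v            : non-negative weights on outputs and inputs (illustrative here)
    """
    return float(np.dot(u, outputs) / np.dot(v, inputs))

# inputs  = [perceived quality, image, customer expectations, perceived value]
# outputs = [customer satisfaction, customer loyalty]   (hypothetical scores)
print(efficiency_ratio(outputs=[7.1, 6.8], inputs=[6.5, 6.0, 6.2, 6.4],
                       u=[0.08, 0.07], v=[0.04, 0.04, 0.04, 0.04]))
```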
Banker et al. (1984) provided the basic principles and, in addition to reformulating the CCR model according to these principles, designed another model, known as the BCC model. CCR is adopted to evaluate the performance of DMUs under the assumption of constant returns to scale, while BCC allows variable returns to scale, including constant, decreasing, and increasing returns. In general, DEA models are divided into input-orientated and output-orientated forms. The main objective of input-orientated models is to reduce the inputs while still achieving the same outputs; in output-orientated models, the main objective is to increase the outputs while keeping the inputs fixed. Consider a set of n DMUs, where DMU j consumes m inputs x ij (i = 1, 2, ..., m) and produces s outputs y rj (r = 1, 2, ..., s). The output-orientated CCR primal model for the DMU under evaluation is then written following Charnes et al. (1978). In the output-orientated CCR primal model, the objective function value is greater than or equal to 1: a value of 1 indicates that the DMU under study is efficient, and a value greater than 1 indicates inefficiency (Charnes et al., 1978). Ranking the different units helps orient the analysis and decisions about the efficiency or inefficiency of units, supports turning inefficient units into efficient ones, and makes it possible to recognise the relative standing of one unit against the others. The ranking method used in this study is the super-efficiency ranking presented by Andersen and Petersen (1993), in which the efficiency of an efficient DMU is measured relative to the other efficient DMUs. In the output-oriented Andersen-Petersen (AP) model, DMUs with an objective function value of less than 1 are ranked higher than the other DMUs (Andersen & Petersen, 1993). The output-oriented AP model has the same structure as the CCR model, with the DMU under evaluation excluded from the reference set.

Fuzzy set theory
The fuzzy concept was expressed for the first time in Max Black's 1937 article "Vagueness". Then, in 1965, Lotfi Zadeh published his article "Fuzzy sets" in the journal Information and Control. Zadeh described fuzzy sets through their membership curves and defined them as sets with vague and imprecise borders: membership in a fuzzy set is not an all-or-nothing matter but is expressed as a degree (Zadeh, 1965).

Fuzzy set
A fuzzy set Ã defined on a universe X may be given as Ã = {(x, μ Ã (x)) : x ∈ X}, where μ Ã : X → [0, 1] is the membership function of Ã. The membership value μ Ã (x) describes the degree of belongingness of x ∈ X to Ã (Zimmermann, 2010).

Triangular fuzzy number
A triangular fuzzy number Ã is defined by a piecewise-linear membership function μ Ã (x) (Li, 2012) and is indicated by a triplet (l, m, u), where l and u are the lower and upper bounds, respectively, and m is the most likely value of Ã; a triangular fuzzy number defined by the triplet (l, m, u) is shown in Fig. 2. For two triangular fuzzy numbers M = (l, m, u) and N = (p, q, s), addition is performed component-wise, M + N = (l + p, m + q, u + s), and division is expressed component-wise on the triplets, M ⊘ N = (l/p, m/q, u/s), for l ≥ 0 and p ≥ 0.
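The explicit CCR and AP formulations referred to above ("as follows") did not survive the extraction of this text. For orientation, a standard output-orientated CCR envelopment model consistent with the description (objective value ≥ 1, with 1 indicating efficiency) can be written as below; the notation follows the definitions in the text, with DMU o denoting the unit under evaluation.

```latex
\begin{aligned}
\max\ & \phi \\
\text{s.t.}\ & \sum_{j=1}^{n} \lambda_j x_{ij} \le x_{io}, \quad i = 1,\dots,m,\\
             & \sum_{j=1}^{n} \lambda_j y_{rj} \ge \phi\, y_{ro}, \quad r = 1,\dots,s,\\
             & \lambda_j \ge 0, \quad j = 1,\dots,n.
\end{aligned}
```

In the Andersen-Petersen super-efficiency variant, the term j = o is removed from the sums, so efficient units can obtain optimal values below 1 and can therefore be ranked among themselves, as stated above.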
The average of triangular fuzzy numbers is obtained by averaging the corresponding components of the triplets.

α-cut
An α-cut of a fuzzy set Ã is the crisp set A α that contains all elements of the universal set X whose membership grades in Ã are greater than or equal to a specified value α, i.e., A α = {x ∈ X : μ Ã (x) ≥ α} (Li, 2012).

Linguistic variable
Linguistic variables are variables whose values are words or sentences in a natural or artificial language. In other words, in natural language, variables with imprecise and ambiguous values are used more often than variables with precise, determined values. Usually, the evaluators are asked to judge through the linguistic terms "very low", "low", "medium", "high", and "very high", which can be mapped onto a scale between 0 and 10 using triangular fuzzy numbers. The triangular fuzzy numbers corresponding to the Likert scale are given in Table 2.

Fuzzy data envelopment analysis
Most DMU-ranking methods assume that all input and output data are precisely known. However, in many practical cases, the data for evaluation are gathered from surveys and expressed in natural language, such as good, medium, and bad, rather than as specific values. For example, customer satisfaction and service quality in e-banking cannot be stated as definite numbers. Therefore, classical DEA models cannot provide good results in practice, and the need to use imprecise or fuzzy numbers in these models becomes obvious. Fuzzy DEA is a powerful tool for evaluating the performance of DMUs with imprecise (or interval) data. Sengupta (1992) was the first to introduce a fuzzy mathematical programming approach in which the constraints and objective function are not satisfied crisply. Guo and Tanaka (2001) introduced a fuzzy DEA method that determines efficiency with respect to the α-cut, with data treated as symmetric triangular fuzzy numbers. Saati et al. (2001) developed a fuzzy DEA model that deals with the infeasibility difficulties of Andersen and Petersen's model, and Saati et al. (2002) presented a model for efficiency analysis and ranking of DMUs with fuzzy data.

Fuzzy output-orientated CCR model
In this study, to measure the customer satisfaction index in e-banking, fuzzy data are used as the input and output variables of the output-orientated CCR model. The fuzzy data are treated as triangular numbers, and the following method is used to solve the fuzzy model by converting it into a linear programming model. Consider the set of DMUs, where each DMU j has fuzzy inputs x̃ ij = (x l ij , x m ij , x u ij ) and fuzzy outputs ỹ rj = (y l rj , y m rj , y u rj ), both with the triangular property. The large number of DMUs makes solving the equations of the CCR primal model complex; therefore, in this study we used the CCR dual (envelopment) model for a better evaluation of the DMUs. The output-orientated CCR dual model is used to maximize the efficiency value of each DMU. Considering the output-orientated CCR dual model with these fuzzy input and output variables, the triangular fuzzy numbers can be converted into crisp intervals using α-cuts. Since the data used in a linear programming model must be crisp numbers, each resulting interval is then aggregated into a single crisp value for each variable. With these variable changes, the fuzzy output-orientated CCR model can be converted into a linear programming model.
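The α-cut interval and the subsequent aggregation formula are missing from the extracted text. The sketch below shows the standard α-cut of a triangular fuzzy number and one common way (the interval midpoint) of collapsing it to a crisp value; the paper's actual aggregation rule may differ, and the Likert coding used in the example is illustrative rather than taken from Table 2.

```python
def alpha_cut(tri, alpha):
    """alpha-cut of a triangular fuzzy number tri = (l, m, u): a crisp interval."""
    l, m, u = tri
    return (l + alpha * (m - l), u - alpha * (u - m))

def defuzzify(tri, alpha, weight=0.5):
    """Collapse the alpha-cut interval to one crisp number.

    weight=0.5 gives the interval midpoint; other convex combinations of the
    interval end points are equally possible, and the exact choice made in the
    paper is not recoverable from the text.
    """
    lo, hi = alpha_cut(tri, alpha)
    return weight * lo + (1.0 - weight) * hi

# Example: the Likert term "high" coded (hypothetically) as (5, 7.5, 10)
print(alpha_cut((5.0, 7.5, 10.0), alpha=0.5))   # (6.25, 8.75)
print(defuzzify((5.0, 7.5, 10.0), alpha=0.5))   # 7.5
```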
Fuzzy output-orientated Andersen-Petersen model
There are several methods for ranking DMUs; one of them is the AP model. The super-efficiency model of Andersen and Petersen (1993) is used here to score the units included in the CCR-efficient set. The output-orientated AP model with fuzzy data follows the same structure, with the unit under evaluation excluded from the reference set.

Analysis and findings
Each technique or model has its own method of analysing the data. In this study, the output-orientated CCR model with triangular fuzzy numbers was used to evaluate and measure the customer satisfaction index in e-banking, which in turn can support increases in customer satisfaction and loyalty. With fuzzy data, the standard CCR model becomes a fuzzy linear program; using the α-cut method, the fuzzy linear program is converted into a crisp linear program, which can be solved with GAMS software. An important point in determining the efficiency and productivity of e-banking is respecting the multidimensional nature of the customer satisfaction index: for efficiency evaluation and performance measurement of this index, several inputs and outputs must be considered. Perceived quality, image, customer expectations, and perceived value were taken as the input variables, while customer satisfaction and customer loyalty were taken as the output variables of the evaluation model. Thirty internet customers of each bank were considered as the sample for that bank. After data collection through the questionnaires and conversion of the answers (Likert scale) into triangular fuzzy numbers, the fuzzy averaging method was used to determine the final fuzzy values of the variables: first, the fuzzy average of each respondent's answers for each variable was calculated, and then the fuzzy average over all respondents in the sample of each bank was calculated. The results are given in Table 3. Using the method presented in the fuzzy DEA section, the triangular fuzzy numbers of the input and output variables were converted into crisp numbers usable as input and output values in the output-orientated CCR and Andersen-Petersen models; the results are given in Table 4. In this study, the output-orientated CCR model was used to increase the customer satisfaction index. In the output-orientated CCR model, the objective function value is greater than or equal to 1: a value of 1 indicates that the studied unit is efficient, and a value greater than 1 indicates inefficiency. The data in Table 4 were inserted into the output-orientated CCR model without any prior assumptions and with equal weights. GAMS software was used to solve the model, and the results are given in Table 5, which shows the efficiency of the customer satisfaction index. According to these results, the efficiency of the internet customer satisfaction index of the Saman, Pasargad, and Ansar banks is equal to 1, which indicates that their web sites are efficient in terms of the customer satisfaction index; Tose'e bank has the minimum efficiency. The output-orientated Andersen-Petersen model is used to rank the efficient units. In the output-orientated Andersen-Petersen model, efficient units with an objective function value of less than 1 are ranked higher than the other units. Based on the results shown in Table 5, the number of efficient units is 3.
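The crisp LP obtained after defuzzification was solved with GAMS in the paper. As a hedged illustration of the same output-orientated CCR envelopment problem, the sketch below solves it with SciPy's linprog for toy data; the matrices are placeholders, not the Table 4 values, and the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_output_oriented(X, Y, o):
    """Output-orientated CCR (envelopment form) for DMU index o.

    X : (m, n) inputs, Y : (s, n) outputs, columns are DMUs (crisp values,
    e.g. defuzzified data). Returns phi* (>= 1; phi* = 1 means efficient).
    Decision vector is [phi, lambda_1, ..., lambda_n]; we minimise -phi.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = -1.0                                    # maximise phi
    A_in = np.hstack([np.zeros((m, 1)), X])        # sum_j lambda_j x_ij <= x_io
    b_in = X[:, o]
    A_out = np.hstack([Y[:, [o]], -Y])             # phi*y_ro - sum_j lambda_j y_rj <= 0
    b_out = np.zeros(s)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Illustrative data: 3 DMUs, 2 inputs, 1 output (not the paper's values).
X = np.array([[6.0, 5.5, 7.0], [6.2, 6.8, 6.1]])
Y = np.array([[7.0, 6.0, 6.5]])
print([round(ccr_output_oriented(X, Y, o), 3) for o in range(3)])
# DMU 0 comes out efficient (phi = 1) with these toy numbers.
```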
So, in output-orientated Andersen-Petersen model, these units show the efficiency border and the other units can be assessed according to this border.GAMS software is used to solve this model and the results of the customer satisfaction index efficiency from e-banking are shown in Table 6.According to Table 6, Ansar bank has the most efficient web site in terms of customer satisfaction index from e-banking.Pasargad and Saman banks are in the second and third places, respectively.In today's competitive world, banks need to increase their understanding of customer approach to achieve their goals.Regarding the above results, Parsian, Sarmayeh, Shahr and Tose'e banks should focus more on customer satisfaction and loyalty.Since the perceived quality variable is one of the most important and effective factors for customer satisfaction and loyalty variables, continuous improving of service quality and comparative tools of provided service with competitors' service and increasing the structural attraction of web site in terms of prestigious capabilities should be considered in strategic programs and policies of these banks.In e-banking services, a great part of services have the value-creating aspect, especially for the new customers.The greater the Customer's perception about the uniqueness of new services, the greater is their satisfaction with e-banking service and customer loyalty.The augmentation of guide tools for searching the services, the ease of using the provided services for customer and the users' trust on server and services result in increasing of customer satisfaction with e-banking services and customer loyalty, then, lead to increase of efficiency and ultimately efficiency of web site in terms of customer satisfaction index with e-banking services.With respect to obtained results from output-orientated AP model, the banks with inefficient web sites in terms of customer satisfaction index should follow the model of banks with efficient web sites and should try to offer better services in their web sites.The banks with efficient web sites in terms of customer satisfaction index should try to improve, diversify and develop their services which can lead to an increase in their web site efficiency ranking compared with other web sites in terms of customer satisfaction index. Main findings Regarding to the growing trend of e-commerce, what can make a corporation prominent in the competitive market, is gaining competitive advantages in this field.Therefore, companies need banking operations for transferring financial resources-banking, which has a fundamental role in ecommerce.Nowadays, e-banking is an inseparable part of e-commerce and has an important role in its implementation.Today, many banks around the world use the electronic services as a tool for market development, improving the customer services, cost reduction and promotion of productivity. 
Customer satisfaction is one of the most important issues in e-banking.Modern management philosophies introduced the customer satisfaction as a base standard for each business's efficiency.The first step in evaluating the efficiency is selection of an appropriate evaluation model for dimensions based on which the decision makers can evaluate their units.This paper focuses on how to evaluate the customer satisfaction through studying the effective factors involved in customers' satisfaction with electronic services.Data Envelopment Analysis (DEA) is a management method which measures each unit's relative efficiency (DMU) and provides management solutions.The main goal of this study is evaluating and ranking the customer's satisfaction index efficiency among the internet customers of 8 Iranian banks using the fuzzy DEA model.For using the fuzzy DEA model to determine the customer satisfaction index efficiency in e-banking, perceived quality, image, customer expectations and perceived value were considered as input variables of the model, while customer satisfaction and customer loyalty were considered as output variables.The main idea for determining the customer satisfaction index efficiency is based on fuzzy DEA model conversion to a linear programming model by α -cut method.In this way, the responses obtained from fuzzy model solution are more precise than the classic models.One of the characteristics of this paper is studying simultaneously the fuzzy set theory and traditional DEA method in one model and studying the relation between them.According to results obtained from output-orientated CCR model, Saman, Pasargad and Ansar banks are totally efficient or on the efficiency border in terms of internet customer satisfaction index and Tose'e bank has the minimum efficiency.One of the characteristics of DEA method is ranking the efficient decision making units (DMUs).Results of output-orientated AP model show that Ansar bank has the most effective web site in terms of customer satisfaction index, Pasargad and Saman banks are in the second and third places, respectively.Also, it seems that the fuzzy DEA model in terms of customer satisfaction index evaluation in e-banking is an appropriate model because a model characteristic is to be precise and applicable to similar settings.Electronic services to customers have always been important.But today's customers have more choices compared to customers in the past decade.Illustrious electronic services should be specified by customer demands and expectations.If in the view of supplier, electronic services are very interesting, but in practice they fail to provide customer satisfaction, these services are not considered outstanding.By an overall study of the customer satisfaction index, it can be concluded that internet banking in these 8 banks gives better tools for decision making in internet baking and forecasts the level necessary to eliminate deficiencies and needs in the educational programs, research and services provided especially in the field of e-banking.Comparison of the status of each bank with the same bank in terms of performance indicates that in order to improve efficiency, a comprehensive planning and various types of criteria are required.In some cases, it may take years to achieve the benefits of efficiency.Therefore, banks should not feel despondent and desperate at the beginning stages of experiencing performance improvement programs.Careful analysis, review and remedial action will lead the efficiency programs to the right 
According to the findings of this article, the fuzzy DEA model appears to be an appropriate model for evaluating the customer satisfaction index in e-banking, since one of the most significant characteristics of a model is its preciseness and applicability.

Future research

Future work could formulate a series of criteria and standards for the factors affecting customer satisfaction index efficiency in e-banking, and use them to measure and track changes in efficiency. The methodology used in this research is easily applicable to any organization with internet customers; such organizations can identify and rank the factors affecting customer satisfaction index efficiency using the decision models and their own circumstances. Given the limitations of the output-orientated CCR model, future researchers are encouraged to apply other DEA models and compare the results with those obtained from the output-orientated CCR model. Since the fuzzy DEA model is a numerical and mathematical technique, measurement errors can cause remarkable changes in the results; after identifying the efficient decision making units (DMUs), the data should therefore be checked again to ensure its accuracy.

Fig. 1. Fuzzy DEA model related to the customer satisfaction index. (For reliability testing, 30 questionnaires were collected from a sample and Cronbach's alpha was calculated with SPSS; the value of 0.903 indicated high reliability of the questionnaire.)
Constituent variables and items related to the customer satisfaction index model include, for customer loyalty: CL1, being the first web site selected for using a service; CL2, recommending the web site to others; CL3, selecting the web site as the best among the competitors; CL4, faith and belief in the use of the service.
Table 2. Triangular fuzzy numbers according to the Likert scale.
Table 3. Fuzzy averages of input and output variable values, output-orientated CCR model.
Table 4. Crisp values, output-orientated CCR model.
Table 5. Efficiency values associated with the customer satisfaction index, output-orientated CCR model.
Table 6. Ranking of customer satisfaction index efficiency, output-orientated AP model.
8,212.4
2014-01-01T00:00:00.000
[ "Business", "Economics" ]
A Hybrid-Driven Optimization Framework for Fixed-Wing UAV Maneuvering Flight Planning

Performing autonomous maneuvering flight planning and optimization remains a challenge for unmanned aerial vehicles (UAVs), especially for fixed-wing UAVs, owing to their high maneuverability and model complexity. A novel hybrid-driven fixed-wing UAV maneuver optimization framework, inspired by apprenticeship learning and nonlinear programming approaches, is proposed in this paper. The work consists of two main parts: (1) identifying the model parameters of a given fixed-wing UAV from flight data demonstrated by a human pilot; the features of the maneuvers are then described by positional, attitude, and compound key-frames, and each maneuver is decomposed into several motion primitives; (2) formulating the maneuver planning task as a minimum-time optimization problem, for which a novel nonlinear programming algorithm is developed that does not require determining the exact times at which the UAV passes the key-frames. Simulation results illustrate the effectiveness of the proposed framework in several scenarios, ensuring both the preservation of geometric features and the minimization of maneuver time.

Introduction

Recent times have witnessed a wide range of applications for unmanned aerial vehicles (UAVs) in the commercial, military, and research fields [1-5]. Most autonomous UAV flight missions are limited to cruising on a predefined path with steady flight states. However, in some scenarios, such as dog fights and high-speed obstacle avoidance, UAVs are required to perform fast and agile maneuvers. During maneuvers, drastic changes in position and attitude prevent UAVs from maintaining trim conditions. Developing maneuvering flight techniques is therefore of great importance, and current research focuses mainly on quadrotor UAVs [6-9]. It must be noted that fixed-wing UAVs have longer endurance and a larger payload capacity than quadrotors, yet their maneuvers have not been studied as thoroughly because of their sophisticated movement characteristics [10]; real flight tests are also harder to conduct. Research on autonomous maneuverable flight is generally organized hierarchically into topics such as maneuver decision-making, maneuver planning, and tracking control [11]: the planner first decides the category of the maneuver, then generates a specific trajectory for the tracking controller. Among these, maneuver planning is essential to the maneuverability of UAVs and is the main concern of this article; appropriate maneuver generation algorithms are also considered when validating the complex maneuver representation approaches. Generally, state-of-the-art maneuver generation algorithms comprise data-driven [12] and model-driven approaches. One category of maneuver generation algorithms that has been used successfully for UAVs is learning from demonstration [13]: by collecting flight data taught by experts with a high-level planning layer, imitation learning methods can be applied to extract the maneuver features [14]. The resulting algorithms are practically capable of reproducing and, to some extent, generalizing the learned motions.
However, owing to limits on viewing distance, communication delay, and external disturbances such as wind gusts, it is hard for a pilot to perform flight skills optimally [11]. Optimality is therefore essential in the learning procedure, yet few data-driven algorithms in the recent literature take optimization into consideration. In the robotics literature, model-driven planning has been widely studied, with typical approaches including polynomial interpolation and Dubins curves [10,15,16]. In some special scenarios, the position and attitude trajectories must be planned simultaneously, and many researchers have therefore explored planning in SE(3) space [17-19]. Most of these methods are based on the differential flatness principle, which greatly reduces the computational complexity; however, for fixed-wing UAV maneuvers, these approaches do not account for the complicated physical models with complex aerodynamic features, and the non-differentially-flat nature of fixed-wing UAVs [20] hinders their application. Solving optimal control problems is another effective maneuver planning method that has attracted growing attention: both direct and indirect optimization methods have been applied to UAV flight in recent years [21-23]. By taking the physical model into consideration, dynamical feasibility is ensured, and by combining various cost terms, optimal maneuvers can be solved for different scenarios [24]. For complex maneuvers, however, it is always hard to model the optimization problem, which strongly affects the calculation efficiency and trajectory quality. Maneuvering is also required in visual tracking [25,26], and vision-and-language navigation [27-30] provides another novel perspective: it combines vision, language, and action to turn relatively general natural-language instructions into robot agent actions. It is generally used in complex indoor environments but also offers some insight for fixed-wing tracking of dynamic targets or maneuvering in complex outdoor environments. For representing complex maneuvers, one recently successful idea is that of key-frames [31], which selects several specific points in 3D space as key-frames through which various tasks can be accomplished. In [32], both position and yaw angle are included in the key-frame, and a minimum-snap trajectory is generated using polynomials; the author further formulates the trajectory optimization as a quadratic program, enabling quadrotors to pass quickly through multiple circular hoops. Ref. [33] emphasizes the bang-bang character of minimum-time trajectories and compares the existing time-optimal approaches, concluding that the polynomial method can only produce non-optimal trajectories. On this basis, Foehn proposed complementary constraints [34] and elegantly solved both the time allocation and the time-optimal trajectory planning problem; comparison and verification show that trajectories designed by this algorithm are faster than those of professional pilots. For fixed-wing aircraft a model is required, which can be obtained by identification methods [35].
In addition, that work focuses on the minimum-time problem by setting a series of waypoints and utilizing flight corridors and B-spline curves; using numerical optimization to solve the non-convex problem with cleverly designed initial guesses, the optimal trajectory is obtained. For fixed-wing UAV maneuvers, it is hard to extract trajectory features from position information alone; the position and attitude features must be considered together, with corresponding key-frames designed for the various maneuvers. Motion primitives also work well for decomposing complex maneuvers [36]. Mueller and D'Andrea [15] proposed a framework for the efficient calculation of motion primitives, and McGill University has presented a series of representative works based on motion primitives and maneuver optimization [37-39]. Dynamic motion primitives (DMPs) are a typical primitive formulation that has been used successfully to design car-driving motion libraries [40]. Inspired by these works, we previously studied dual-quaternion-based dynamic motion primitives (DQ-DMPs) [41], which can learn and generalize maneuvers in SE(3) space; nevertheless, it is hard for dynamic motion primitive algorithms to guarantee kinodynamic feasibility. Motivated by the above discussion, we propose a data-and-model-driven framework, shown in Figure 1, for fixed-wing UAV maneuver optimization. The framework builds on a global model identified from teaching data and uses an optimization method based on positional, attitude, and compound key-frames; the time-optimal maneuver is obtained by numerical optimization. Comparisons show that the maneuvers generated by this algorithm take less time than those of professional pilots, and for complex maneuvers different primitives can be flexibly concatenated, which simplifies their generation. The main contributions of this paper are as follows: (1) we propose a novel data-driven approach for model identification and key-frame extraction using learning-from-demonstration principles, with complex maneuvers decomposed into multiple motion primitives; (2) based on the motion primitives, the optimal maneuver generation task is formulated as a time-optimal problem with key-frames through which the UAV must pass, and the method for connecting different primitives is also considered for practicality; (3) the proposed framework is verified thoroughly in simulation experiments, from which it can be deduced that the framework is applicable to real flight maneuvers.

Figure 1. Hybrid-driven optimal maneuvering flight planning framework. We start with pilot demonstration and maneuver data collection, then perform model identification and maneuver key-frame and motion-primitive analysis. Finally, based on the global model and key-frames, the optimal primitives are generated and concatenated into a complete maneuver.

In the next section, we discuss the preliminaries. Sections 3 and 4 introduce data collection and acrobatic maneuver optimization, respectively. The experimental results are presented in Section 5 and the conclusions in Section 6.

Global Fixed-Wing Model

The aircraft model plays an important role in our algorithmic framework: to obtain feasible maneuvers, we need a relatively accurate rigid-body model.
Owing to the singularity of Euler angles during large maneuvers, this paper adopts the quaternion model [42]. In the translational kinematics equation, p_i = [x, y, z]^T ∈ R^3 is the position in the inertial coordinate system and v_b = [u, v, w]^T is the velocity in the body coordinate system. The two state quantities are connected by a rotation matrix R(q) composed of the unit quaternion q = [q0, q1, q2, q3]^T (with ||q|| = 1), which parameterizes the attitude:

  dp_i/dt = R(q) v_b.

In the rotational kinematics equation, with ω the body angular rate,

  dq/dt = (1/2) q ⊗ [0, ω^T]^T.

In the translational dynamics equation, T = [F_T, 0, 0]^T, where F_T is the thrust, F_A is the aerodynamic force term with components F_x, F_y, F_z along the body x-, y-, and z-axes, and F_g is the gravity-related term:

  m (dv_b/dt + ω × v_b) = T + F_A + F_g.

The last equation is the rotational dynamics equation, where τ = [M_x, M_y, M_z]^T is the aerodynamic moment and J the moment-of-inertia matrix:

  J dω/dt + ω × (J ω) = τ.

This model contains many unknown coefficients, as well as the margins of the state quantities; these quantities have a great impact on the performance of the UAV and the completion of the maneuver. In Section 3, we identify the unknown quantities in a data-driven way.

Acrobatic Maneuver

Acrobatic maneuvers are generally distilled from pilots' actual flight experience and have strong practical significance. Acrobatic flight is both a competitive sport and a performance event, and remote-controlled flight is one way to carry it out. Figure 2 shows a typical fixed-wing maneuvering process. The International Federation of Aeronautics (FAI) is the main maker of acrobatics rules and specifies a number of basic maneuvers for acrobatic events, such as the Cuban eight [43]; this article focuses mainly on realizing these basic maneuvers on UAVs. Manned pilots are restricted by physiological limits, and remote-controlled flight, while safer and more flexible, suffers from communication delays and visual-range effects. Autonomous drones remove both the visual-range and physiological limitations and thus have the potential to perform acrobatics.

Acrobatic Maneuver Data Collection

Maneuver attitude and other information can be obtained by collecting flight data, and a maneuver places high demands on data accuracy and frequency. With improvements in sensor accuracy and miniaturization, a large amount of manual flight experience can be accurately recorded as data, enabling UAV design, modeling, and flight testing. In actual flight, accelerometers, magnetometers, and other sensors can record the aircraft's position, attitude, control inputs, and other data [43]. Flight simulation is another way to address the flight-maneuver problem, and we built a flight simulation system for collecting flight data. As shown in Figure 3, the system combines the open-source flight controller PX4 with the flight simulation software X-Plane: the remote controller sends control commands to the flight controller, whose internal program processes the control signal and forwards it to the X-Plane simulator; this is a typical hardware-in-the-loop simulation system. For the different maneuvers, we collected the position, attitude, and other information of the aircraft through expert demonstration.
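As a concrete companion to the quaternion kinematics above, the sketch below implements the body-to-inertial rotation matrix R(q) and the propagation q̇ = ½ q ⊗ [0, ω] in NumPy. It is a minimal illustration using the scalar-first Hamilton convention, not the paper's code, and the numeric values are placeholders.

```python
# Quaternion attitude kinematics (scalar-first convention), minimal sketch.
import numpy as np

def rotation_matrix(q):
    """Body-to-inertial rotation matrix from a unit quaternion q = [q0,q1,q2,q3]."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])

def quat_derivative(q, omega):
    """q_dot = 0.5 * q (x) [0, omega] for body angular rates omega = [wx, wy, wz]."""
    wx, wy, wz = omega
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    return 0.5 * Omega @ q

# Translational kinematics p_i_dot = R(q) @ v_b for level flight at u = 20 m/s:
q_level = np.array([1.0, 0.0, 0.0, 0.0])
v_b = np.array([20.0, 0.0, 0.0])
print(rotation_matrix(q_level) @ v_b)             # -> [20. 0. 0.]
print(quat_derivative(q_level, [0.5, 0.0, 0.0]))  # pure roll rate
```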
Model Parameter Identification

As mentioned in Section 2, the global fixed-wing model contains many unknown coefficients, mainly in the aerodynamic forces and moments, where ρ is the air density; S_ref, b_ref, c_ref are the reference wing area, reference wing span, and mean aerodynamic chord, respectively; C_x, C_y, C_z, C_l, C_m, C_n are the aerodynamic force and moment coefficients; and δ_e, δ_a, δ_r are the deflections of the elevator, aileron, and rudder, respectively. The angle of attack and the sideslip angle are computed as α = arctan(w/u) and β = arcsin(v/||v_b||). According to [44], the aerodynamic coefficients can be expressed by the global aerodynamic model in Equations (7) and (8), where C_x0, C_xα, C_xα², C_y0, C_yβ, C_yp, C_yr, C_yδa, C_yδr, C_z0, C_zα, C_zδe, and C_zα² are the coefficients related to the aerodynamic forces, and C_l0, C_lβ, C_lp, C_lr, C_lδa, C_lδr, C_m0, C_mα, C_mq, C_mδe, C_mα², C_n0, C_nβ, C_np, C_nr, C_nδa, and C_nδr are the coefficients related to the aerodynamic moments. A least-squares method was used to estimate the unknown coefficients in Equation (1); the optimization objective minimizes the error between the logged flight data and the model prediction, subject to the model dynamics. It is worth noting that this problem can be solved with nonlinear least-squares methods such as the Levenberg-Marquardt algorithm; however, divergence or convergence to a suboptimal solution may occur. Moreover, to diminish the side effects of over-fitting and of errors accumulated during integration, we chose 10 s of flight data for the identification by trial and error. The results of the system identification are listed in Section 5.

Optimal Acrobatic Maneuver Design and Generation

In this section, we propose a maneuver optimization algorithm based on teaching data and an accurate model. We first introduce the two basic concepts used in maneuver design: key-frames and motion primitives. Then the different kinds of key-frames are listed and the full optimization problem for a maneuver is formulated. Finally, we analyse the solution of the proposed optimization problem in detail.

Key-Frames and Motion Primitives

The concept of key-frames is often used in computer animation and simultaneous localization and mapping (SLAM) to denote frames that are decisive over a period of time. As mentioned in Section 1, the concept is also used to represent necessary waypoints in motion planning, and we introduce it here for maneuvers. As can be seen in Figure 4, the same maneuver exhibits the same characteristic position and attitude changes. Key-frames indicate the positions and attitudes that play a key role in a maneuver: a momentary state is a short-term key-frame, and a continuous state is a long-term key-frame. Motion primitives describe the execution of complex motions in terms of action units. A simple maneuver can be regarded as a single motion primitive without segmentation. A complex maneuver, by contrast, contains a variety of pose-change segments; it is difficult to optimize as a whole, and different optimization goals apply to different segments, so it must be divided into multiple motion primitives.
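Looking back at the identification step above: because the global aerodynamic model of Equations (7) and (8) is linear in its unknown coefficients, a simplified version of the fit reduces to ordinary least squares on logged regressors. The sketch below fits the pitching-moment coefficients from synthetic data; the regressor set, the q·c/2V nondimensionalization, and all numbers are assumptions for illustration, whereas the paper's actual procedure is a nonlinear least-squares fit (e.g., Levenberg-Marquardt) over the integrated dynamics.

```python
# Simplified aerodynamic coefficient fit via ordinary least squares (sketch).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
alpha = rng.uniform(-0.2, 0.4, n)        # angle of attack [rad]
q_rate = rng.uniform(-1.0, 1.0, n)       # pitch rate [rad/s]
delta_e = rng.uniform(-0.3, 0.3, n)      # elevator deflection [rad]
c_ref, V_a = 0.30, 25.0                  # mean chord [m], airspeed [m/s]

# Pitching-moment regressors (q*c/2V is a common nondimensionalization):
# C_m = C_m0 + C_m_alpha*alpha + C_m_q*(q*c/2V) + C_m_de*delta_e + C_m_a2*alpha^2
A = np.column_stack([np.ones(n), alpha, q_rate * c_ref / (2 * V_a),
                     delta_e, alpha**2])
theta_true = np.array([0.02, -0.8, -12.0, -1.1, 0.5])   # synthetic "truth"
C_m_meas = A @ theta_true + 0.01 * rng.standard_normal(n)  # add sensor noise

theta_hat, *_ = np.linalg.lstsq(A, C_m_meas, rcond=None)
print("estimated [C_m0, C_m_alpha, C_m_q, C_m_de, C_m_alpha2]:", theta_hat)
```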
Maneuver Optimization

In a maneuver or maneuver primitive, the drone is required to complete a certain position and attitude change in the shortest time; inspired by [24,31-35], we formulate this problem as a key-frame-based time optimization problem. A schematic diagram of the problem is shown in Figure 5.

First, we set the state vector to x_dyn = [p, q, v, ω] and the input to u = [δ_e, δ_a, δ_r, F_T]^T, which must satisfy the kinematic and dynamic model of Equation (1). The states must satisfy the start and end constraints, each of which is either given or left free. The control input and some states must respect the upper and lower limits of UAV performance, such as u_min ≤ u_k ≤ u_max and ω_min ≤ ω_k ≤ ω_max. We selected the direct multiple shooting method to solve the optimization problem. Suppose the total time is t_N; discretize it into N segments with dt = t_N/N and index k. The states are discretized accordingly, and to reduce the integration error, 4th-order Runge-Kutta is used for the numerical integration:

  x_dyn,k+1 = x_dyn,k + (dt/6)(k1 + 2 k2 + 2 k3 + k4),

where k1, k2, k3, k4 are integral terms composed of x_dyn,k, u_k, and dt. Suppose the number of key-frames is M, indexed by i. An M-dimensional process state variable λ and an M-dimensional process change variable µ are introduced to record the completion of the key-frames, subject to the constraints illustrated in Figure 5: λ_i stores the completion state of event i, starting at 1 and ending at 0, while µ_i is a process change variable that records the occurrence of an instantaneous event. Through the constraint that µ_i multiplied by a state-dependent residual equals 0, it is required that while the event has not occurred the residual is nonzero and µ_i is 0; when the event occurs the residual is 0, µ_i becomes 1, and the corresponding λ_i switches from 1 to 0 permanently.
Under this framework, λ_i can be used to represent the time-period scale, recording the requirements that must be met between two instantaneous events; this corresponds to the sphere in Figure 5. Because of the discretization, slack variables must be added. The different types of key-frames are introduced below.

(1) Positional key-frame (KF-P). A short-term position constraint requires the trajectory to pass within a tolerance of the key-frame position at some step, where i ∈ [1, M], k ∈ [1, N + 1]; in an incomplete position constraint, only part of the position variables are constrained, for example the flight altitude. During flight, some states are constrained over a long period, such as altitude holding. The process state variables λ_i divide t_N into M + 1 segments; taking altitude holding as an example, a long-term position constraint can be set over the corresponding segment.

(2) Attitude key-frame (KF-A). Short-term attitude constraints are set analogously; if the requirements on the attitude are not strict, an incomplete attitude constraint (Appendix A) can be used, for example constraining only a single angle. In the calculation, the pitch angle generally lies in [−π/2, π/2] and the roll and yaw angles in [−π, π]. Such roll- and yaw-angle key-frames are not in one-to-one correspondence with the attitude, and the calculation can produce singularities; the constraint can be changed to an equivalent singularity-free form, at the cost of a certain amount of extra computation.

(3) Compound key-frame (KF-C). In some special scenarios there are simultaneous requirements on the position and attitude of the drone. We propose a compound key-frame: if the same µ_i is used, the position and attitude can be constrained at the same time. In addition to these constraints, further ordering constraints must be set if the order of the key-frames is to be enforced.

Based on the above, the optimization variables are collected as x_opt = [t_N, x_0, . . . , x_N]. The most basic goal of the designed maneuver is to minimize the time, i.e. to complete the maneuver as quickly as possible. To improve the execution quality we add a minimum-energy term, and each motion primitive is tuned according to its characteristics. Algorithm 1 is summarized as follows:

Algorithm 1. Fixed-wing UAV optimal maneuver generation.
1: Input: L pieces of maneuver trajectory obtained by demonstration
2: Output: optimal trajectory of a single maneuver ξ (divided into primitives ξ_j)
3: for ∀l ∈ [1, L] do
4:   split the teaching trajectory into multiple maneuvers ξ
5:   extract the key-frames p_i, q_i of each maneuver
6:   decompose ξ into motion primitives ξ_j according to certain principles
7: end for
8: for ∀j ∈ [1, J] do
9:   if ξ_j exists, or a similar ξ_j exists
10:    use the related ξ_j directly
11:  else
12:    construct ξ_j as a key-frame-based optimization problem
13: end for
24: process the ξ_j and concatenate the ξ_j into ξ
25: return the primitives ξ_j and the concatenated maneuver data ξ

Experiments and Discussion

In this section, the identification results of the UAV model are obtained from the flight data, and three types of maneuvers are studied. As described in Section 4, the corresponding key-frames and motion primitives can be extracted for a specific maneuver. We therefore construct different minimum-time maneuver problems by adjusting the parameters of the boundary and the key-frames. This algorithm helps to find optimal and physically realizable flight maneuvers.
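To make the multiple-shooting and key-frame bookkeeping of Section 4 concrete, the sketch below assembles a small CasADi/Ipopt problem with RK4 shooting defects and one positional key-frame enforced through the λ/µ process variables described above. The double-integrator dynamics, tolerances, and horizon are hypothetical stand-ins for the full quaternion model of Equation (1), and the time is held fixed rather than minimized, so this is a structural sketch rather than the paper's actual solver setup.

```python
# Direct multiple shooting with one positional key-frame (CasADi sketch).
import casadi as ca

N, dt = 50, 0.1                      # shooting intervals and a fixed step
opti = ca.Opti()
p = opti.variable(3, N + 1)          # position trajectory
v = opti.variable(3, N + 1)          # velocity trajectory
a = opti.variable(3, N)              # acceleration input (stand-in for u)

def f(p_k, v_k, a_k):
    """Double-integrator stand-in for the quaternion dynamics of Equation (1)."""
    return v_k, a_k

for k in range(N):                   # RK4 defects: x_{k+1}=x_k+dt/6*(k1+2k2+2k3+k4)
    k1p, k1v = f(p[:, k], v[:, k], a[:, k])
    k2p, k2v = f(p[:, k] + dt/2*k1p, v[:, k] + dt/2*k1v, a[:, k])
    k3p, k3v = f(p[:, k] + dt/2*k2p, v[:, k] + dt/2*k2v, a[:, k])
    k4p, k4v = f(p[:, k] + dt*k3p, v[:, k] + dt*k3v, a[:, k])
    opti.subject_to(p[:, k+1] == p[:, k] + dt/6*(k1p + 2*k2p + 2*k3p + k4p))
    opti.subject_to(v[:, k+1] == v[:, k] + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

# One positional key-frame: lam flips from 1 to 0 once the frame is reached.
p_kf, d_tol_p = ca.DM([10.0, 0.0, -5.0]), 0.4
lam = opti.variable(N + 1)           # process state variable (lambda)
mu = opti.variable(N)                # process change variable (mu)
opti.subject_to(lam[0] == 1)
opti.subject_to(lam[N] == 0)
for k in range(N):
    opti.subject_to(lam[k+1] == lam[k] - mu[k])
    opti.subject_to(opti.bounded(0, mu[k], 1))
    # Complementarity gate: mu_k may be positive only inside the tolerance
    # ball. An attitude key-frame could analogously gate mu on a
    # singularity-free quaternion residual, e.g. 1 - (q_k^T q_kf)^2 <= tol
    # (one common choice, not necessarily the paper's).
    opti.subject_to(mu[k] * (ca.sumsqr(p[:, k] - p_kf) - d_tol_p**2) <= 0)

opti.subject_to(p[:, 0] == ca.DM([0.0, 0.0, 0.0]))
opti.subject_to(v[:, 0] == ca.DM([20.0, 0.0, 0.0]))
opti.minimize(ca.sumsqr(a))          # energy only; the paper also minimizes t_N
opti.solver("ipopt")
sol = opti.solve()                   # a warm start helps in practice (Section 5)
print(sol.value(lam))
```

Treating t_N as a decision variable and adding the attitude/compound key-frames and input bounds turns this skeleton into the minimum-time problem the section formulates.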
Model Identification Results

The main parameters of the UAV are: mass m = 3.24 kg, air density ρ = 1.225 kg·m⁻³, reference area S_ref = 0.56 m², wing span b_ref = 1.83 m, mean chord length c_ref = 0.30 m, moments of inertia J_xx = 0.22 kg·m², J_yy = 0.31 kg·m², J_zz = 0.48 kg·m², J_xy = J_xz = J_yz = 0 kg·m², and the gravitational acceleration was taken as g = 9.8 m·s⁻². Following Section 3, the structure of the aerodynamic coefficients is defined in Equations (7) and (8). The unknown parameters were estimated from the flight data using the least-squares approach. During the flight demonstration, the pilot performed several types of maneuvers, and both the control-surface deflections and the system outputs were recorded in the time domain. Furthermore, to satisfy the kinematic and dynamic constraints, the upper and lower limits of the control inputs and system states are listed in Table 1.

Optimization Simulation Setup

In this section, we investigate several types of maneuvers in separate simulation experiments. First, we evaluate the loop maneuver, which contains a single primitive; only positional key-frames are employed for this motion. Second, the classical Immelmann turn is considered, in which two primitives, containing positional and attitude key-frames respectively, are concatenated. Third, the half Cuban eight, which is similar to the former, is evaluated using the proposed compound key-frames; furthermore, concatenating two half Cuban eights yields a whole Cuban eight, a sophisticated motion containing multiple positional and compound key-frames. As shown in Figure 1, maneuver optimization is an important part of the framework; see Algorithm 1 for details. In this paper, all maneuver optimization problems were solved with CasADi [45] using the Ipopt [46] solver. The initial flight status and the optimization parameters are listed before each set of results, and for ease of presentation the origin is placed at the initial position before the aerobatic flight.

Loop Maneuver

The loop is a maneuver performed mainly in the vertical plane. At the beginning of the motion, the UAV holds trim flight and starts to pitch up; after the pitch attitude completes a 360-degree turn, the UAV returns to the starting position and resumes the trim condition. It is worth mentioning that, even for a well-trained human pilot, the geometric size and shape of different demonstrated trajectories are not completely consistent; it is therefore almost impossible for a human pilot to fly an optimal maneuver, yet the non-optimal trajectories still share similar features. Through sufficient trial and error, we found that the dominant features of the loop are the positions of the key-frames; the number and values of the positional key-frames are essential for obtaining a reasonable circular trajectory. The positional key-frames and the initial flight conditions are listed in Table 2. For the optimization, we set the number of interval points N = 210 and the positional tolerance d_tol_p = 0.4 m; the parameters of Equation (29) are set to σ = 1, τ = 0.1, and the simulation results are illustrated in Figure 6. As shown in Figure 6, the drone passed through all the positional key-frames in a short time (t_N = 3.22 s).
Once the drone meets the tolerance of a key-frame, the corresponding λ_i changes from 1 to 0. Furthermore, the UAV returns to its initial position after finishing the whole maneuver, which is difficult for a human pilot. Even though only one primitive is considered in this scenario, the half-loop maneuver can be extracted from the results naturally.

The Immelmann Maneuver

The Immelmann maneuver is also known as the upside-down half-roll. During the maneuver, the UAV first performs a half loop from trim flight. As soon as the aircraft reaches the top of the circular trajectory, it spins around the x-axis and executes a 180° roll. Finally, the UAV resumes trim flight heading in the direction opposite to the initial condition. For a better description, we decompose the Immelmann into two concatenated primitives: a half loop and a 180° roll. For the first primitive, there are two ways to obtain its near-optimal form: extracting it from the loop maneuver directly, or modeling the motion as a two-point boundary value problem (BVP). Modeling the optimization problem of the 180° roll differs significantly from the former cases: rolling at high angular rates not only produces a large displacement in the forward direction, but also leads to deviations in both height and heading angle. We therefore first set a short-term attitude key-frame (KF-A) containing a 90° roll to ensure completion of the motion; then, in addition to the time term, the displacement in the forward direction is penalized in the cost. Meanwhile, the final states (position, attitude, velocities, and angular rates) of the half-loop maneuver were set as the initial condition of the rolling primitive, and y-axis and z-axis constraints were added to the final states of the rolling primitive. The initial condition of the second primitive is listed in Table 3. We set N = 100 and d_tol_q = 0.04; the results were obtained by calculation, and the concatenated primitives for the whole Immelmann maneuver are shown in Figure 7.
(Key-frame: short-term angle key-frame; value: θ = −π/2.)

As shown in Figure 7, the entire Immelmann maneuver takes only 2.6155 s, and the state quantities are well connected across the primitives. The rolling primitive alone takes only 0.9631 s: the drone smoothly passes the 90° roll-angle key-frame, flies 26.1 m forward, and finally levels out horizontally. It is worth noting that concatenated primitives may be intrinsically sub-optimal; however, the sub-optimal solutions are sufficient for practical applications and capable of generalizing to maneuvers that were never demonstrated.

Half Cuban Eight and Cuban Eight Maneuver

The Cuban eight is a sophisticated maneuver consisting of two 3/4 loops followed by 180° rolls; viewed from the ground, the UAV's trajectory is a vertical figure of eight. Owing to the symmetry, the left/right half of the motion, called the half Cuban eight, can be evaluated first. To facilitate the calculation, the half Cuban eight was decomposed into a half loop (the same as the Immelmann turn's) and a special rolling primitive. For the rolling primitive, we set a compound key-frame (KF-C) that requires the UAV to pass through the center of the Cuban eight with a 90° roll angle. Because of rotational inertia, the UAV continues to rotate around the x-axis, so angular limits were added to avoid large angle deviations. The initial flight conditions are listed in Table 4, and the two key-frames are formulated accordingly. In the rolling primitive, we set N = 100, d_tol_p = 0.1 m, and d_tol_q = 0.04, with the coefficients in Equation (29) set to σ = 1, τ = 0.1 for both primitives; connecting the two primitives yields the half Cuban eight, with results shown in Figure 8. The optimization results for the half Cuban eight are illustrated in Figure 9: the rolling primitive takes 1.4118 s and the half Cuban eight 3.0642 s. The UAV crossed the center point accurately with a 90° roll angle, and the roll angle stayed below 5° in the subsequent primitive. Nevertheless, the velocity at the end of the half Cuban eight is much larger than at the start point, which would hinder the completion of the concatenated loop. We therefore limited the drone's velocity at the end of the first rolling primitive to [20, 0, 0], although this increases the total maneuver time somewhat. Even though two identical half Cuban eights cannot be joined directly, there are still many similarities between them; modeling a mirrored problem from the original one and exploiting the data symmetry helps to obtain the optimization results. As shown in Figure 9, with the limit on the terminal velocity, the duration of the rolling primitive increased by 0.3898 s, with the engine thrust held at 0 N. The sub-optimal Cuban eight maneuver is thus completed; it takes 6.4856 s in total, and the primitives are well connected.

Benchmark Comparisons

To further illustrate the merits of our approach, a comparison with three other methods was conducted. The first is the set of maneuvering trajectories flown by the human pilot. The second is the Gaussian pseudo-spectral method proposed in Morales's work [24]. In addition, the 3-DOF model introduced in [23], formulated in Appendix B, was also used within our approach.
The simulation results are depicted in Figure 10. It can be seen from Figures 10 and 11 that the trajectories obtained from manual flights are inferior to the other methods. Although the performance of Morales's method is slightly better than manual flight, it cannot handle the periodic constraints, and its flight time is significantly longer than those of our proposed method and the 3-DOF model. The method proposed in this paper is based on the global model and requires motion segmentation, which increases the computational cost; it is worth noting, however, that our approach can flexibly connect the segments, producing a variety of maneuvers. For the point-mass model, since the model is simplified and the trajectories need not be segmented, the optimized maneuver time is slightly shorter than with our method. The optimization results based on the 3-DOF model can therefore be used as an initial guess for our approach or for benchmark comparison; however, because the 3-DOF model ignores the kinodynamic constraints, it cannot be used directly for maneuver planning. To analyse the computational effort more thoroughly, we recorded the running times of the methods; all experiments were executed on an Intel Core i7-4710MQ CPU (2.50 GHz) with 8 GB RAM. Table 5 shows that minimum-time maneuver optimization is time-consuming. Our work decomposes the optimization into the calculation of the 3-DOF model (initial guess) and the re-optimization of the global model; compared with Morales's work, the optimized solution is obtained faster.

Analysis of the Maneuver Optimization Algorithm

It is worth noting that the optimization of maneuvers is quite time-consuming owing to its inherently non-convex nature. Several measures can accelerate the calculation and avoid infeasibility in the Ipopt solver.

(1) Global fixed-wing model. Our optimization algorithm requires an accurate model in order to obtain the extreme motion of the UAV. Although simplifying the model eases the calculation, it makes it difficult to capture the most realistic motion. In our maneuvering experiments, the performance of the drone is pushed to its limits, consistent with minimum-time motion.

(2) Key-frame and primitive design. To the best of the authors' knowledge, a unified approach to key-frame extraction and motion decomposition for arbitrary maneuvers is not feasible. Choosing the number of key-frames and primitives requires balancing computational complexity against geometric accuracy; most maneuvers can be decomposed into several longitudinal and lateral primitives.

(3) Initial guess. The initial guess is essential for the nonlinear programming problem. In this paper, a reasonable initial guess is obtained either from the optimization results of the point-mass model (a double integrator driven by a bounded acceleration input a, with ||a|| ≤ a_max) or simply by interpolating the demonstration data.

(4) Optimization constraints. Small slack variables can be added to the nonlinear constraints to accelerate the calculation and avoid infeasible solutions; for instance, the complementary constraint can be relaxed with a small tolerance, e.g. replacing the exact product-equals-zero condition by requiring the product of µ_i and the key-frame residual to be no larger than a small ε > 0. In addition, generous slack variables can be added to the nonlinear constraints to obtain an initial solution, which can then serve as the initial guess in the next optimization iteration.
Conclusions

In this paper, a hybrid-driven flight maneuver optimization framework was proposed. From demonstrated maneuvers performed by an experienced pilot, the parameters of the UAV model and the features of the maneuvers were extracted. The ideas of key-frames and maneuver primitives were then introduced, allowing different maneuvers to be designed and concatenated flexibly. On this basis, maneuver planning was formulated as a time-optimal problem, and the feasibility of the framework was fully verified by designing positional, attitude, and compound key-frames for the loop, Immelmann, half Cuban eight, and Cuban eight maneuvers. The comparisons show that the designed maneuvers are faster than those of professional pilots, supporting the optimality of the solutions and the effectiveness of the proposed approach. Future work will concentrate on realizing our algorithm experimentally on a UAV platform.

Acknowledgments: Thanks to the research environment provided by the Institute of Unmanned Systems, National University of Defense Technology, and to the Nanjing Institute of Telecommunications Technology, National University of Defense Technology, for its administrative support.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix B

Aircraft point-mass model: κ is the aerodynamic roll angle, and D, C, L are the components of the aerodynamic forces in the aerodynamic reference system, calculated as

  [D, C, L]^T = -(1/2) ρ V_a² S_ref [C_x, C_y, C_z]^T.  (A5)
9,044
2021-09-23T00:00:00.000
[ "Engineering", "Computer Science" ]
RMEL3, a novel BRAFV600E-associated long noncoding RNA, is required for MAPK and PI3K signaling in melanoma

Previous work identified RMEL3 as a lncRNA with enriched expression in melanoma. Analysis of The Cancer Genome Atlas (TCGA) data confirmed the enriched expression of RMEL3 in melanoma and demonstrated its association with the presence of BRAFV600E. RMEL3 siRNA-mediated silencing markedly reduced (95%) colony formation in different BRAFV600E melanoma cell lines. Multiple genes of the MAPK and PI3K pathways found to be correlated with RMEL3 in TCGA samples were experimentally confirmed. RMEL3 knockdown led to downregulation of activators or effectors of these pathways, including FGF2, FGF3, DUSP6, ITGB3 and GNG2. RMEL3 knockdown induces a gain in protein levels of the tumor suppressor PTEN and of the G1/S cyclin-Cdk inhibitors p21 and p27, as well as a decrease of pAKT (T308), BRAF, pRB (S807, S811) and cyclin B1. Consistently, knockdown resulted in an accumulation of cells in G1 phase and in subG0/G1 in an asynchronously growing population. Thus, TCGA data and functional experiments demonstrate that RMEL3 is required for MAPK and PI3K signaling, and its knockdown decreases BRAFV600E melanoma cell survival and proliferation.

INTRODUCTION

Melanoma is the most aggressive form of skin cancer. Targeted therapies against BRAF V600 mutations, which are present in ~50% of metastatic melanomas, achieve impressive initial clinical responses and benefit, but the development of acquired resistance to these agents is almost universal [1]. The identification of additional melanoma oncogenic mechanisms initiated by oncogenic BRAF will facilitate the development of more durably effective therapeutic approaches. Among different molecular candidates, there is growing evidence that long noncoding RNAs (lncRNAs) play a significant role in this disease [2]. A diversity of lncRNAs has been described to promote cell proliferation, migration and metastasis in melanoma cells [3-5]. lncRNAs commonly exhibit context-dependent activity and cell-type-specific expression [6], reinforcing their possible application for therapeutic targeting. Previous work from our laboratory identified RMEL3 (ENSG00000250961) as a potential lncRNA with extremely enriched and specific expression in melanoma [7]. Analysis of melanoma cells also suggested a positive correlation between RMEL3 expression and the presence of the BRAF V600E mutation [7]. In the present study, we have investigated RMEL3 interaction networks to elucidate its significance in this disease. This study supports that RMEL3 knockdown inhibits the MAPK and PI3K pathways in melanoma.

RMEL3 expression is enriched in melanoma and varies across disease progression

We analyzed the publicly available melanoma TCGA data to identify significant clinical and molecular associations of RMEL3 expression. Analysis of RNA expression data from 472 TCGA melanomas, 16 normal tissues (from the Illumina Body Map Project) and 2 melanocyte samples (GSE38495) [8] confirmed significantly increased expression of RMEL3 in the tumors (Figure 1A). RMEL3 expression is also significantly greater in melanoma than in a diversity of other tumors (Figure 1B). In clinical samples representing melanoma progression [primary tumors (n=102), subcutaneous tumors (regional cutaneous and in-transit metastasis, n=74), regional lymph node (n=221) and distant metastasis (n=68)], RMEL3 expression was increased in subcutaneous tumors compared to primary tumors (Figure 1C).
RMEL3 expression was also significantly increased in melanomas with a BRAF V600E mutation compared to those with wild-type BRAF or triple-wild-type BRAF/RAS/NF1 [9] (Figure 1D), an association also observed in a panel of human melanoma cell lines (Figure 1E).

RMEL3 knockdown decreases clonogenic capacity

RMEL3 knockdown in BRAF V600E melanoma cells, such as the A375-SM cell line, which has high RMEL3 expression (Figure 2A), markedly reduced colony formation (Figure 2B and 2C). BRAF V600E RMEL3-low-expressing cells (Figure 2A) were also affected (Figure 2B and 2C). RMEL3 knockdown in a BRAF wild-type cell line also reduced colony counts, although less dramatically. In contrast, the SKOV3 ovarian cancer cell line, which has no RMEL3 expression, was not affected, demonstrating that the observed effects were not due to overall siRNA cytotoxicity or non-specific targeting (Figure 2B and 2C).

RMEL3 expression alters the melanoma cell expression profile

To identify molecular features associated with RMEL3, two groups were defined according to RMEL3 expression levels in the total TCGA set: the RMEL3 Low group (n=105), comprising patients with RMEL3 expression below the 25th percentile, and the RMEL3 High group (n=117), comprising patients with RMEL3 expression above the 75th percentile (Figure 3A). RNA-seq data were used to identify differentially expressed genes between the groups (log2 fold change <-2 or >2, adj. p-value <0.00001, n=260 genes; Figure 3B and 3C; Supplementary Table S1). Microarray analysis was performed following RMEL3 knockdown in the human melanoma cell line A375-SM (Figure 3D) to functionally validate genes regulated by RMEL3 (Supplementary Table S2). RMEL3-regulated genes identified by gene knockdown (log2 fold change <-0.5 or >0.5, p-value <0.01, n=2942; Supplementary Table S2) were compared to the melanoma TCGA data (log2 fold change <-0.5 or >0.5, adj. p-value <0.05, n=3445; Supplementary Table S3) to provide a high-confidence list of RMEL3-correlated genes (Supplementary Table S4). A total of 177 genes positively differentially expressed in RMEL3 High tumors in the TCGA exhibited decreased expression following RMEL3 knockdown in A375-SM cells, and a set of 139 genes negatively differentially expressed in the RMEL3 High group increased expression after RMEL3 knockdown. Pathway enrichment analysis of the validated genes implicates RMEL3 in protein kinase A signaling, molecular mechanisms of cancer, FGF signaling, regulation of the epithelial-mesenchymal transition, and inhibition of matrix metalloproteases, among others (Figure 3E, Supplementary Table S5). These pathways compose biological networks, for example cell-to-cell signaling and interaction, whose central constituent is the MAPK pathway (Figure 3F). Indeed, several validated genes are constituents of the PI3K-Akt, MAPK and Ras pathways (Figure 3G).

RMEL3 influences melanoma critical proteins to promote cell cycle progression and survival

The effects of RMEL3 knockdown on protein networks were assessed by reverse phase protein array (RPPA) analysis. RMEL3 knockdown in A375-SM melanoma cells altered a total of 91 proteins (p<0.05; Supplementary Table S6). We observed a decrease of the MAPK and PI3K-Akt components BRAF, Akt and pAkt (T308), and increased levels of PTEN (Figure 4A). RMEL3 knockdown reduced cell cycle regulators such as pRB (S807, S811) and cyclin-B1, and increased p27 (Figure 4B).
Pro-apoptotic proteins such as caspase-8 and p38 were increased, and anti-apoptotic proteins such as Bcl2 were reduced (Figure 4C). Consistent with the impact of RMEL3 knockdown on clonogenic ability, we showed by western blot an increase of the p21 and p27 G1/S cyclin-Cdk inhibitors and a decrease of cyclin B1 (Figure 4D). RMEL3 knockdown caused an accumulation of cells in the G1 cell cycle phase and an increase in cell death rates (subG0/G1) (Figure 4E).

DISCUSSION

In this work we confirmed previous findings of our group [7] that RMEL3 is highly restricted to melanoma (Figure 1A and 1B). A close correlation between RMEL3 and BRAF V600E (Figure 1D and 1E) was observed, suggesting that RMEL3 may be involved in cell proliferation and/or survival. Consistently, RMEL3 knockdown potently blocked BRAF V600E melanoma cell growth and survival (Figure 2A-2C). RMEL3-validated genes are involved in pro-tumorigenic pathways, such as protein kinase A signaling, FGF signaling and regulation of the epithelial-mesenchymal transition (Figure 3E). These results support a possible involvement of RMEL3 in the in-transit metastasis process, evidenced by the increase of RMEL3 expression in subcutaneous tumors (Figure 1C). Importantly, several validated genes are components or transcriptional targets of the Ras-MAPK and PI3K-Akt pathways (Figure 3G), which are commonly active in melanoma [10]. Shared molecular signatures between RMEL3 knockdown and BRAF suppression also highlight the association of RMEL3 with BRAF: common alterations include upregulation of the FOXD3 transcription factor [11]; WNT5A [12]; JUN and STAT3 [13]; fibronectin [14]; and other molecules involved in energy metabolism [15]. We also demonstrated, at the protein level, that RMEL3 knockdown decreased BRAF levels and increased the concentration of the tumor suppressor PTEN (Figure 4A), which commonly blocks BRAF V600E-induced malignancy [16,17]. As expected, we detected reduced levels of pAkt (T308) and Akt (Figure 4A). This strong correlation with PI3K genes (Figure 3E and Figure 4A) would provide a powerful advantage to melanoma progression, since parallel activation of PI3K may offset the negative feedback induced by ERK [18,19] in BRAF V600E cells [20]. RMEL3 knockdown also altered protein levels of cell cycle and apoptosis regulators: for instance, decreased protein levels of phosphorylated RB and cyclin-B1, as well as increased levels of p21 and p27, the major effectors of G1 cell cycle arrest, were detected (Figure 4B and 4D). These results are consistent with the drastic reduction of clonogenic ability (Figure 2B-2C), the G1 arrest, and the increased cell death rates (subG0/G1) (Figure 4E) caused by RMEL3 knockdown. Consistent with the cyclin-B1 decrease, the FOXM1 transcription factor, which peaks in G2/M and activates cyclin-B1 expression [21], is also intensely downregulated (Figure 4B). On the other hand, the increase of cyclin-E1 after knockdown appears contradictory; however, when targeted to the cytoplasm, cyclin-E1 promotes G1 arrest and senescence [22], which could be further reinforced by the increased PAI-1 protein concentration (Figure 4B), a mediator of Ras senescence [23].
Interestingly, several changes tend to shift the balance towards apoptosis, including a decrease of anti-apoptotic Bcl2 [24] at both the mRNA (Supplementary Table S2) and protein levels (Figure 4C); an intense downregulation of transglutaminase protein, which can inhibit bax-induced apoptosis [25]; increased pro-apoptotic p38 [26] and caspase-8 [27] protein levels; and a decrease of YAP (pS127), which is phosphorylated by Akt to reduce p73-mediated induction of Bax expression [28]. Altogether, these protein changes may reflect inactivation of the critical MAPK and PI3K signaling pathways after RMEL3 knockdown. In conclusion, this work provides strong evidence that RMEL3, initially implicated by its specificity to melanoma, can affect malignancy through MAPK and PI3K stimulation. Further characterization is ongoing to determine the mechanisms of RMEL3 function and its potential value as a therapeutic target.

MATERIALS AND METHODS

To compare expression levels, we used the transcripts per million (TPM) [31] results of RSEM. Melanocyte sequencing read data (GSE38495) were obtained from [8]. For differential expression analysis between the RMEL3 High and Low expression groups, the edgeR package [32] was used. In all differential analyses, p-values were adjusted for multiple hypothesis testing using a false discovery rate (FDR) threshold of 0.05. Gene ontology was assessed with DAVID [33] and pathway enrichment with KEGG [34].

Melanoma cell lines, RNA extraction and cDNA synthesis

Human melanoma cell lines were grown in RPMI medium supplemented with 5% fetal bovine serum (Life Technologies) and incubated at 37°C and 5% CO2. For RMEL3 expression analysis, melanoma cells were seeded in 10 cm plates overnight, followed by cell lysis and RNA purification using the mirVana miRNA Isolation Kit (Ambion AM1560) according to the manufacturer's instructions. Isolated RNA was treated with DNase I (DNA-free kit, Ambion), quantified by spectrophotometer (NanoDrop ND-1000 UV/Vis Spectrophotometer), and 1 μg of the purified RNA was converted into cDNA using the High-Capacity cDNA Reverse Transcription kit (Applied Biosystems, CA).

Expression analysis by qRT-PCR

Equal amounts of each cDNA were used to measure RMEL3 expression by qRT-PCR with specific primers (5′>3′ sense ATGTGCTCCAAGAAAACCAGAG and antisense CTTTGTCACAGGAATACCCAAC) and SYBR Green PCR Power Mix 2x as the detection agent (Applied Biosystems, CA) in a Mastercycler ep realplex real-time PCR system (Eppendorf). Cycle threshold (Ct) values were converted to relative expression according to the 2^(−ΔΔCt) method, using TBP (TATA-box binding protein) as the endogenous control.

siRNA transfection

Cells seeded in 6-well plates the day before were transfected with different concentrations of siRNA using two transfection reagents, Dharmafect 1 (Dharmacon) or X-tremeGENE (Roche Applied Science). The transfection parameters were set for each cell line and checked by real-time PCR; before proceeding to the subsequent experiments, the optimal condition was defined as that achieving greater than 90% gene knockdown. After the appropriate treatments and incubations, cells were harvested for qPCR, western blotting or RPPA, or used for clonogenicity analysis. The siRNA Stealth Universal Negative Control (Invitrogen) was used as a control; for RMEL3, a Stealth siRNA system (Invitrogen) was used. RMEL3 siRNA sequence: CCACUGCAGGGUUUCAGUCACAUGA.
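As a small companion to the qRT-PCR analysis above, the sketch below computes relative expression by the 2^(−ΔΔCt) method with TBP as the endogenous control; the Ct values are hypothetical placeholders, with the siRNA negative control serving as the calibrator sample.

```python
# Minimal 2^(-ddCt) relative-expression calculation (sketch; Ct values are
# hypothetical, TBP is the endogenous control, negative control = calibrator).
def relative_expression(ct_target, ct_control, ct_target_cal, ct_control_cal):
    """Return fold change of the target gene versus the calibrator sample."""
    d_ct_sample = ct_target - ct_control            # normalize to TBP
    d_ct_calibrator = ct_target_cal - ct_control_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# RMEL3 after knockdown (hypothetical Ct values averaged over replicates):
fold = relative_expression(ct_target=31.5, ct_control=24.0,
                           ct_target_cal=27.0, ct_control_cal=24.1)
print(f"RMEL3 relative expression: {fold:.3f}")     # ~0.04, i.e. >90% knockdown
```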
Gene expression profiling - microarray

Total RNA extracted as described above was subjected to whole-genome gene expression profiling using Sentrix HumanHT-12 v4 BeadChip arrays (Illumina). RNA amplification (TotalPrep RNA Amplification Kit, Life Technologies), array hybridization and data acquisition were performed at the University of Texas Health Science Center at Houston Microarray Core laboratory. The background was subtracted and the arrays were quantile normalized. Differential expression analyses were performed between controls (cell lines transfected with siRNA Negative Control) and cells transfected with siRNA targeting RMEL3, using the GenomeStudio software (Illumina, Inc.). These data were uploaded to the NCBI GEO platform as GSE72675.

Reverse phase protein array

Cells were seeded in six-well plates overnight and transfected in triplicate with control siRNA and RMEL3 siRNA. Protein lysates extracted from melanoma cells 72 hours after transfection were quantified, normalized, denatured and submitted for RPPA analysis. Normalization and quality control were applied to the RPPA data before further analyses (a detailed description of the RPPA method and data normalization is available at http://www.mdanderson.org/education-and-research/resources-for-professionals/scientific-resources/core-facilities-and-services/functional-proteomicsrppa-core/index.html).

Western blots

The same protein lysates used for RPPA were also used for western blot analysis. Briefly, 30 μg of protein was fractionated by 10% SDS-PAGE and transferred to a nitrocellulose membrane. The membranes were incubated with blocking solution (5% milk in TBS-T) for 1 hour at room temperature and then blotted overnight at 4°C with the relevant antibody (anti-p21, anti-p27 and anti-CCNB1 from Cell Signaling) diluted 1:1000 in a 1% BSA solution. HRP-conjugated secondary antibody was detected using the Enhanced Chemiluminescence Kit (GE Healthcare).

Clonogenic assay

Cells transfected with control siRNA and RMEL3 siRNA were transferred into 6-well plates (330 cells/well) and cultured for 14 days at 37°C in a 5% CO2 atmosphere. Subsequently, the cells were fixed with 4% paraformaldehyde and stained with 0.2% crystal violet for colony visualization.

Cell cycle

For cell cycle analysis, the percentage of cells in each phase of the cycle was determined by propidium iodide staining. Cells kept for 72 hours after siRNA transfection in 24-well plates were trypsinized, washed twice in PBS and fixed in ethanol overnight. Two hours before analysis, the cells were washed twice in PBS and then incubated with propidium iodide solution (EMD Chemical) for 20 minutes at 37°C. Cells were then analyzed on a FACSCanto II flow cytometer (BD Biosciences) using BD FACSDiva software.
The Chemorepulsive Activity of the Axonal Guidance Signal Semaphorin D Requires Dimerization* The axonal guidance signal semaphorin D is a member of a large family of proteins characterized by the presence of a highly conserved semaphorin domain of about 500 amino acids. The vertebrate semaphorins can be divided into four different classes that contain both secreted and membrane-bound proteins. Here we show that class III (SemD) and class IV semaphorins (SemB) form homodimers linked by intermolecular disulfide bridges. In addition to the 95-kDa form of SemD (SemD(95k)), proteolytic processing of SemD creates a 65-kDa isoform (SemD(65k)) that lacks the 33-kDa carboxyl-terminal domain. Although SemD(95k) formed dimers, the removal of the carboxyl-terminal domain resulted in the dissociation of SemD homodimers to monomeric SemD(65k). Mutation of cysteine 723, one of four conserved cysteine residues in the 33-kDa fragment, revealed its requirement both for the dimerization of SemD and its chemorepulsive activity. We suggest that dimerization is a general feature of semaphorins which depends on class-specific sequences and is important for their function. The semaphorins are a large family of secreted and membrane-bound proteins that are involved in axonal navigation (1). To date, sequences of 15 vertebrate semaphorins have been published, and these can be divided into four classes (2-11) based on the similarity of their semaphorin domains and the presence of distinct sequence motifs in their COOH-terminal domains such as Ig homologies (classes III and IV), thrombospondin repeats (class V), and transmembrane segments (classes IV, V, and VI). The best studied vertebrate semaphorins are the murine SemD 1 and its chick ortholog collapsin-1 (3, 6, 12-15). When added to cultures of dorsal root ganglia, both induce a rapid and reversible collapse of sensory growth cones (3, 12, 15).
Gradients of SemD originating from aggregates of cells transfected with an expression vector repel sensory and sympathetic axons in collagen gel co-cultures, demonstrating that semaphorins have the ability to exclude axons from regions expressing these proteins (6, 13, 15). The secreted class III semaphorins are synthesized as pro-proteins that are processed proteolytically to 95- or 65-kDa isoforms (designated 95k and 65k, respectively) at several conserved dibasic cleavage sites (15). Semaphorins SemA, SemD, and SemE act as repellents for specific populations of axons, and the potency of this repulsion is regulated by proteolysis (15). Cleavage of pro-SemD at a COOH-terminal processing site generates the 95k isoform (SemD(95k)) and is required to activate its repulsive activity. Further cleavage of SemA, SemD, or SemE to a 65k form reduces their repulsive activities by at least 1 order of magnitude (15). Semaphorins display specific and highly dynamic expression patterns in the developing nervous system as well as in nonneural tissues (2-6, 8, 9, 13, 14, 16-18). In vitro, specific subsets of spinal sensory afferents display a differential responsiveness to SemD which is regulated developmentally (5, 13, 14). It therefore has been proposed that SemD patterns spinal sensory innervation by dividing the spinal cord into dorso-ventrally organized subregions, one of which is accessible only to the prospective proprioceptive fibers and excludes thermo- and nociceptive axons. The phenotype of mice homozygous for an inactivated semD gene supports this hypothesis (19). In addition, it reveals functions of semD in the differentiation of other tissues such as heart and skeleton. Although the biological effects of semaphorins have been studied in some detail, the structural requirements for their function have not been analyzed to a similar extent. Here we show that the class III and IV semaphorins form homodimers linked by intermolecular disulfide bridges. Proteolytic cleavage of dimeric SemD(95k) results in its dissociation to monomeric SemD(65k). Mutation of a single cysteine residue in SemD both prevents dimerization and abolishes its repulsive activity. We propose that dimerization is an essential step in the maturation of the chemorepulsive guidance signal SemD and may have a similar functional importance for other classes of semaphorins. EXPERIMENTAL PROCEDURES Expression Vectors and Transfection-To express recombinant semaphorins, cDNAs were cloned into the pBK-CMV expression vector (Stratagene). An epitope tag (Flag: DYKDDDDK) was introduced between the signal peptide and semaphorin domain of SemD or fused to the carboxyl terminus of SemB by polymerase chain reaction as described previously (15). Cysteine-to-alanine mutations (FlagSemDP1bC1: C567A and FlagSemDP1bC2: C723A) were introduced into the FlagSemDP1b sequence using oligonucleotides including the intended mutation (15) and verified by DNA sequencing. Amino acids Ser 580 -Val 772 were deleted to generate FlagSemDΔCTD. To replace the COOH-terminal domain of SemD with the hinge-CH2-CH3 (Fc) region from the human IgG1 immunoglobulin, FlagSemDFc and FlagSemDP1bFc were constructed by replacing the sequences corresponding to amino acids Ser 580 -Val 772 with oligonucleotides (5′-GGAACAGGTAAGTGGATCC-3′) containing a splice donor. The resulting semD sequence was cloned into pBK-Fc (generously provided by S. Heller), which contains a genomic BamHI-NotI fragment encoding a splice acceptor and the Fc region (20).
In SemBΔTCFlag, amino acids His 682 -Ala 760 were replaced by DYKDDDDKRS using a polymerase chain reaction-based strategy. Western Blot Analysis-Human embryonic kidney 293 (HEK 293) cells (ATCC CRL 1573) were transfected by calcium phosphate coprecipitation (21) and the transfected cells grown in serum-free medium. Conditioned medium from transfected cells was collected 3 days after transfection and concentrated using Centriplus-30 concentrators (Amicon). Transfected cells were lysed in 400 μl of 2% (w/v) cholate (Sigma) in phosphate-buffered saline. Samples of recombinant proteins were separated by SDS-PAGE, transferred to nitrocellulose, and Western blots probed with the monoclonal anti-Flag antibody M2 (Kodak) or a polyclonal anti-COOH-terminal domain antiserum (anti-CTD) (15) using horseradish peroxidase-coupled anti-mouse or anti-rabbit IgG (Dianova) as secondary antibody in combination with the ECL detection system (Amersham Pharmacia Biotech) (15). Cross-linking with Sulfhydryl-oxidizing Agents-For cross-linking studies, recombinant proteins were expressed in HEK 293 cells, and conditioned media were concentrated using Centriplus-10 concentrators (Amicon). Concentrated media were supplemented with leupeptin (Sigma) to a final concentration of 10 μg/ml. Free thiols were acetylated by incubation of samples with iodoacetamide (Sigma) at a final concentration of 100 mM for 15 min at room temperature. Proteins were concentrated by methanol/chloroform precipitation (22), and the protein pellet was dissolved in 10 mM Tris/HCl, pH 7.5, supplemented with leupeptin. Dithiothreitol was added to a final concentration of 100 mM, and samples were incubated for 15 min at room temperature. The protein solution was precipitated as described above and the pellet dissolved in 10 mM Tris/HCl, pH 7.5. Aliquots of this solution were incubated with increasing concentrations of freshly dissolved (o-phenanthroline)₂-Cu²⁺ for 30 min at room temperature. Cross-linking was terminated by the addition of 1 mM EDTA. Samples were taken after ultrafiltration, acetylation, reduction with dithiothreitol, and cross-linking; they were then dissolved in non-reducing sample buffer and analyzed by SDS-PAGE and Western blotting as described (15). Gel Filtration Chromatography-Gel filtration chromatography was performed on an Amersham Pharmacia Biotech XK 16/70 column with a Sephadex G-200 gel matrix (Sigma). Size standards (MW-GF-100, Sigma) were reconstituted in 50 mM Tris/HCl, pH 7.5, containing 50 mM NaCl as specified by the manufacturer. Recombinant proteins were expressed in HEK 293 cells, and concentrated conditioned media were loaded onto the column. Size standards and recombinant proteins were eluted at a constant flow rate of 0.2 ml/min, and 2.5-ml fractions were collected. Fractions were precipitated with trichloroacetic acid and analyzed by SDS-PAGE and Western blotting. Co-culture Assay-HEK 293 cells were transfected by calcium phosphate co-precipitation (21). Cell aggregates were formed essentially as described previously (6). Briefly, cells were treated with trypsin 5 h after transfection, washed with Eagle's minimum essential medium and 10% fetal calf serum (Life Technologies, Inc.), and resuspended in 0.4 ml of medium. Aggregates of HEK 293 cells were formed overnight in a hanging drop culture by placing drops of the cell suspension (20 μl) onto the lids of 35-mm dishes. Clusters of cells were harvested into medium and trimmed with tungsten needles for use in explant culture.
Sympathetic ganglia from 9-day chick embryos were dissected into primary culture medium (23) and co-cultured with aggregates in a 10:1 mixture of collagen (Boehringer Mannheim) and Matrigel (Collaborative Research) in the presence of 50 ng/ml murine nerve growth factor (generously provided by H. Rohrer). After polymerization of the matrix for 60 min at 37°C, primary culture medium was added, and the culture was incubated at 37°C. Relative repulsive activities were determined as described previously (15). RESULTS SemB and SemD Form Homodimers-Semaphorins are characterized by the presence of several conserved cysteine residues in both the semaphorin domain and the COOH-terminal domain which may form intra- or intermolecular disulfide bridges. To analyze this possibility we investigated the biochemical properties of two semaphorins. Recombinant SemB and SemD (Fig. 1) were expressed in HEK 293 cells, and cell lysates or concentrated conditioned media were analyzed by SDS-PAGE under reducing and non-reducing conditions followed by Western blotting. We have shown previously that SemD is processed proteolytically at several dibasic sequences (15). FlagSemD is secreted as a 65-kDa protein from transfected HEK 293 cells, whereas FlagSemDP1b contains a mutation in one of the cleavage sites (PCS1) and consequently displayed a molecular mass of 95 kDa (Fig. 2, A and B). Under reducing conditions SemBFlag, FlagSemD, and FlagSemDP1b were detected at the expected molecular masses of 94, 65, and 95 kDa, respectively (Fig. 2A, lanes 2 and 3; Fig. 2B, lanes 1 and 3). The apparent molecular mass of FlagSemD changed only slightly under non-reducing conditions (Fig. 2B, lane 2), but SemBFlag and FlagSemDP1b migrated at a molecular mass of about 190 kDa (Fig. 2A, lanes 5 and 6). This result indicates that both SemB and SemD form homodimers that are linked by one or several disulfide bonds. Because the 65-kDa isoform of SemD behaved as a monomer under non-reducing conditions, it appeared likely that the cysteine(s) responsible for dimerization of SemD reside within the COOH-terminal domain. This is supported by the observation that the COOH-terminal domain remained dimerized after proteolytic processing (Fig. 2C). Efficient dimerization of SemB depended on the presence of the transmembrane segment and/or the cytoplasmic domain, as the majority of SemBΔTCFlag migrated as a monomer of 94 kDa after deletion of its carboxyl-terminal end (amino acids His 682 -Ala 760; Fig. 2, lanes 1 and 4). Dimerization of SemD Requires Cysteine 723-The different behavior of FlagSemD and FlagSemDP1b upon non-reducing SDS-PAGE indicates that the COOH-terminal domain of SemD is important for dimerization. To identify the residues that form intermolecular disulfide bonds, we substituted with alanine four cysteine residues in the 33-kDa SemD COOH-terminal domain fragment that are conserved among all class III semaphorins. Three of these residues lie in the COOH-terminal domain and one at the carboxyl terminus of the semaphorin domain. Mutation of cysteines 598 and 650 had no effect on dimerization (data not shown), and their location in the Ig homology suggests that they might form an intramolecular disulfide bond similar to other peptide sequences of this type (24). Also, mutation of Cys 567 did not change the molecular mass of SemD determined by SDS-PAGE under non-reducing conditions (Fig. 3, compare lanes 3 and 6). In contrast, mutation of Cys 723 prevented the formation of dimers completely (Fig. 3, lanes 4 and 7).
To substantiate these observations further, cross-linking studies were performed with a sulfhydryl oxidizing agent. Samples of recombinant semaphorins were reacted with iodoacetamide directly after harvesting the conditioned media to acetylate free thiols and avoid unspecific disulfide formation. As expected, FlagSemD, FlagSemDΔCTD, and FlagSemDP1bC2 migrated as monomeric proteins with apparent molecular masses of 65, 65, and 95 kDa, respectively, whereas FlagSemDP1b formed dimers of 190 kDa (Fig. 4, lanes 1 and 2). Incubation of acetylated dimeric FlagSemDP1b with dithiothreitol resulted in the reduction of disulfide bonds as indicated by a shift in the molecular mass of FlagSemDP1b to 95 kDa (Fig. 4, lane 3). Cross-linking of the reduced proteins by incubation with increasing concentrations of the sulfhydryl oxidizing agent (o-phenanthroline)₂-Cu²⁺ resulted in the re-formation of the 190-kDa complex only for FlagSemDP1b but not the other proteins analyzed (Fig. 4, lanes 5 and 6). Thus Cys 723 is responsible for the formation of an intermolecular disulfide bond, and its mutation blocks the dimerization of SemD completely. The 65- and 95-kDa Forms of SemD Differ in Their Ability to Dimerize-The size of the different recombinant proteins was determined by gel filtration chromatography, to verify the molecular mass observed by SDS-PAGE under non-reducing conditions (Fig. 5). The fractions were analyzed by SDS-PAGE and Western blotting using an anti-Flag antibody, and the protein concentration of the peak fractions was measured (Fig. 5A and data not shown). FlagSemD was processed to the 65-kDa form and migrated with an apparent molecular mass of 70 kDa, consistent with the expected size for a SemD monomer. The same fractions were analyzed for the presence of the COOH-terminal domain by using the anti-CTD antiserum raised against a fragment of the SemD COOH-terminal domain. In Western blots this antibody revealed a protein that migrated at a molecular mass of approximately 50 kDa, which probably corresponds to a dimer of the 33-kDa COOH-terminal domain (Fig. 5B). SDS-PAGE under non-reducing conditions confirmed that the COOH-terminal domain existed as a disulfide bond-linked dimer (data not shown). When the semaphorin domain was expressed without the COOH-terminal domain (FlagSemDΔCTD) it migrated at a size similar to that of FlagSemD. The molecular mass of native FlagSemDP1b as determined by gel filtration chromatography was 190 kDa, consistent with the expected size for homodimeric SemD(95k). Little immunoreactivity was detected at higher molecular masses (data not shown), indicating that no additional oligomeric complexes were formed and that the SemD(95k) homodimer represents the majority of recombinant protein secreted by HEK 293 cells. Introduction of the C723A mutation into FlagSemDP1b (FlagSemDP1bC2) prevented dimerization as indicated by a reduction of the molecular mass to approximately 100 kDa. These experiments confirm that Cys 723 in the COOH-terminal domain is essential for dimerization of SemD and that processing at PCS1 results not only in the removal of the COOH-terminal domain but also in the conversion of SemD to a monomeric form. These results also revealed that SemD(65k) and the dimerized COOH-terminal domain do not appear to associate, as they run at a molecular mass of 70 and 50 kDa, respectively, which is inconsistent with the formation of aggregates including both fragments.
Cys 723 Is Essential for the Repulsive Activity of SemD-To analyze the functional consequences of SemD dimerization, the repulsive activity of SemDP1bC1 (C567A) and SemDP1bC2 (C723A) was determined in a co-culture assay. Mutation of Cys 567 resulted in a 7-fold reduction of repulsion compared with FlagSemD (Fig. 6A). In contrast, mutation of Cys 723 almost completely abolished the repulsive activity of SemD; FlagSemDP1bC2 was almost 100-fold less active than FlagSemD (Fig. 6A). Similar results were obtained with constructs that did not contain a mutation in PCS1 (data not shown). Thus, Cys 723 appears to be essential for the repulsive activity of SemD. The COOH-terminal domain of class III semaphorins might function exclusively by promoting the formation of SemD dimers, or it may have additional functions. To distinguish between these possibilities, the COOH-terminal domain was replaced by the constant part of the human γ1-immunoglobulin, and the activity of the resulting hybrid protein was analyzed in a co-culture assay. These chimeric proteins displayed almost no detectable repulsive activity irrespective of processing at PCS1 (Fig. 6C). Similarly, FlagSemDΔCTD did not show any repulsive effects despite a significantly higher expression compared with that of the other tested SemD variants (Fig. 6, B and C; note that five times less medium was loaded in lane 4). Western blot analysis confirmed that with the exception of FlagSemDΔCTD all three proteins were expressed at comparable levels (Fig. 6B). DISCUSSION The function of semaphorins as chemorepulsive axonal guidance signals has been analyzed in some detail (1, 3, 5-7, 12-15, 19, 25). However, the structural requirements for the activities of these proteins are still poorly understood. Here, we present evidence that two distinct semaphorins, the membrane protein SemB and the secreted SemD, form homodimers linked by intermolecular disulfide bonds when expressed in HEK 293 cells. Upon SDS-PAGE under non-reducing conditions, SemB and the 95-kDa form of SemD migrate with a molecular mass consistent with that of a dimer. A similar observation has been made for CD100, another class IV semaphorin (11, 26). Gel filtration chromatography of SemD(95k) also indicates a size of 190 kDa. Although semaphorins SemD and SemB both form dimers, the specific residues involved appear to be located at different positions. Dimerization of SemD depends on its COOH-terminal domain and is abolished by mutagenesis of Cys 723. Cysteine residues are found at a similar position in other class III semaphorins, and it is likely that these will also form dimers. In contrast, SemB does not contain a cysteine residue at a corresponding position. Deletion of the putative membrane-spanning segment and cytoplasmic domain reduced the amount of SemB dimerization dramatically. Therefore, intermolecular interactions dependent on these sequences might precede the formation of disulfide bonds. Thus, although dimerization may be characteristic for semaphorins, the sequences responsible for it could be class-specific. Mutational analysis of SemD revealed an important role of Cys 723 not only in dimerization but also in its chemorepulsive activity. When analyzed in a co-culture assay, mutation C723A (FlagSemDP1bC2) almost completely abolished the repulsion of sympathetic axons. The inactivity of this mutant is therefore likely caused by its inability to form dimers, as the loss of both activities coincides.
Thus, dimerization appears to be a prerequisite for SemD to display its repulsive activity. The function of Cys 567 is less clear. Its mutation also resulted in a reduction in repulsive activity, and this residue may therefore be important for proper folding of SemD. Previously, we have reported that proteolytic processing of SemD at PCS1 to the 65-kDa form reduces its repulsive activity (15). Here we show that cleavage not only removes the COOH-terminal domain but also results in dissociation of SemD dimers. Determination of the molecular mass of SemD by gel filtration chromatography showed that SemD(65k) behaves as a monomer and does not remain associated with the dimerized COOH-terminal domain. Thus, dissociation of SemD homodimers might explain the reduced activity of SemD(65k) compared with SemD(95k). However, monomeric SemD(65k) is still significantly more active than FlagSemDΔCTD, which is equivalent in sequence to the 65-kDa fragment of FlagSemD. The COOH-terminal domain may be required as a co-factor in addition to the semaphorin domain to activate putative SemD receptors or, alternatively, to promote the adoption of an active conformation. Artificial dimerization of the semaphorin domain by replacing its COOH-terminal domain with the constant part of human IgG1 did not result in active SemD and thus cannot substitute for the effects mediated by the COOH-terminal domain. In this respect semaphorins display a behavior different from that of another group of axonal guidance molecules, the ephrins, which are activated by fusion to an Fc fragment (27, 28). In contrast, replacement of the SemD COOH-terminal domain by that of other class III semaphorins allows the synthesis of at least partially active SemD.² Our results show that dimerization mediated by the COOH-terminal domain is necessary but not sufficient for the formation of active SemD (15). In addition, processing of SemD at the carboxyl-terminal PCS3 and PCS4 is essential for producing active SemD (15). Because FlagSemDP1bC2 is as inactive as unprocessed SemD, it appears possible that dimerization may be required for correct processing of pro-SemD to its active form. For example, dimerization could be required for recognition of PCS3/4 by the processing enzyme(s). In summary, dimerization of SemD appears to be an essential part of its maturation process. Formation of dimeric molecules is not restricted to the class III semaphorins but can be found in at least one other class of these proteins. Although it may be a general characteristic of this family, different classes of semaphorins probably depend on different molecular mechanisms to accomplish it.
3D printed beam splitter for polar neutral molecules We describe a macroscopic beam splitter for polar neutral molecules. A complex electrode structure is required for the beam splitter which would be very difficult to produce with traditional manufacturing methods. Instead, we make use of a nascent manufacturing technique: 3D printing of a plastic piece, followed by electroplating. This fabrication method opens a plethora of avenues for research, since 3D printing imposes practically no limitations on possible shapes, and the plating produces chemically robust, conductive construction elements with an almost free choice of surface material; it has the added advantage of dramatically reduced production cost and time. Our beam splitter is an electrostatic hexapole guide that smoothly transforms into two bent quadrupoles. We demonstrate the correct functioning of this device by separating a supersonic molecular beam of ND3 into two correlated fractions. It is shown that this device can be used to implement experiments with differential detection wherein one of the fractions serves as a probe and the other as a reference. Reverse operation would allow the merging of two beams of neutral polar molecules. INTRODUCTION Since the first demonstration of the deceleration and trapping of polar neutral molecules, [1,2] a large number of experiments have been performed that control the forward velocity and velocity distribution of neutral molecules in a similar manner as with charged particles. [3] Stark and Zeeman deceleration make use of electric or magnetic field gradients along the direction of propagation of a molecular beam to control the beam velocity. [1,[4][5][6][7][8][9] Electrostatic quadrupole and hexapole guides have been used in molecular beam experiments for several decades, primarily to filter specific rotational states from within a broader distribution and obtain oriented samples. [10,11] Recently, such guides have also been used to velocity-filter a molecular ensemble in a curved guide, [12][13][14][15] to decelerate a molecular beam in a rotating spiral-guide, [16] and to merge two molecular beams for low-energy scattering studies. [17][18][19] In the present paper we use electrostatic guides to split a molecular beam into two fractions, thus acting only on transverse velocities without changing the longitudinal ones. In a previous experiment, a microscopic beam splitter on a chip has been demonstrated. [20,21] The dimensions of the present, macroscopic, beam splitter match those of most molecular beam experiments and can be used for new types of differential measurement experiments. For example, one molecular beam can be used as a reference beam while the other is manipulated according to the experimental requirements. This can lead to a substantial reduction of measurement times because it avoids the necessity of long data averaging, often needed because the mechanical properties of pulsed supersonic valves lead to pulse-to-pulse variations. In a differential measurement, the effect of these variations is reduced because fluctuations in the parent beam affect probe and reference beams in the same way. Our device will also be used to superpose two beams by operating it in the reverse configuration to yield precisely the arrangement required for merged-beam studies, [17][18][19] which to date have not been possible with two beams of polar molecules due to the difficulty of transversely injecting a beam into an electrostatic guide without the molecules being deflected.
In a similar manner the present arrangement can be used, for example, to load an electrostatic storage ring for polar molecules, [22,23] which to date has been possible only by switching the ring off momentarily, and this imposes onerous physical restrictions on the temporal profile of the packet stored inside the ring. Guides for polar molecules like ND 3 require strong inhomogeneous electric fields which, through the Stark effect, generate a transverse confining force. In the case of macroscopic guides, the fields are produced by kilovolt electric potential differences between cylindrical electrodes a few mm apart. Stringent quality requirements apply to the manufacturing of these electrodes: a machining precision of better than 50 µm is essential, and the surface finish is critical; impurities, scratches, and similar defects on the electrode surfaces amplify the risk of electric arcing. [24] For simple components these requirements can be met by traditional precision machining, but the bent and split shape of the electrodes required here to transform the hexapole guide into quadrupole guides is very challenging to make by traditional manufacturing techniques. [26,27] To produce a device that fulfils the mechanical, geometric, and electrical criteria, we apply a modern production method, namely the stereolithographic 3D printing of the entire electrode structure as a single plastic piece which is then selectively electroplated with a metal layer ≈10 µm thick. Selective electroplating of the piece allows two electrically independent electrodes to be produced in the correct geometry (see Fig. 1C). The metal plating not only allows an almost free choice of surface material, including some that would be very hard to machine, it also produces a surface devoid of scratches, recesses or abrasions. The fabrication method introduced here enables many new avenues for research, beyond gas-phase dynamics studies, since 3D printing imposes practically no limitations on possible shapes, and the metal-plating produces chemically robust, conductive construction elements. It has the added advantage of dramatically reduced production cost and time: all components used in the present study were printed within less than 48 hours (see video in the supplementary material [25]); electroplating is completed in one day, and the bottleneck of the entire process was the shipping to and from the plating company. This dramatic acceleration in comparison with traditional manufacturing, which for pieces such as these can require several months to produce results of lower quality, allows for a very fast turnover and more flexibility in the development and testing of new components. Furthermore, the entirely digital workflow has the advantage that an inherently exact replica of a complete experimental setup can be produced in any laboratory, simply by transferring a small file and making use of local production infrastructure. Several technologies are available for 3D printing, and the reader is referred to the literature for a full discussion of all the different flavors (Ref. 27 and references therein). We here make use of a technology called stereolithography (SLA). [28] In SLA a three-dimensional structure is produced in a layer-by-layer process where each layer is painted by a moving laser that photo-polymerizes a precursor inside a resin-filled bath. The next layer is then added after moving the piece away from the laser by a defined distance and ensuring new resin covers the previous layer.
This process is repeated until the piece is completed. Our preference for SLA over alternative technologies is rationalized by the following considerations. The most commonly known method is fused deposition modelling, where a plastic wire is melted from a fine nozzle that draws the 3D component on a layer-by-layer basis. Extruded molten plastic solidifies and fuses with previous layers upon cooling. Common materials used here generally can be metal plated, but the resolution is inferior to that of SLA and limited by minimum required nozzle diameters and the inherent imprecisions imposed by the melting and fusing of plastic in an atmosphere. Selective metal sintering is a technology that produces metallic pieces by selectively melting very fine metal powder in the focus of a laser. [29] This technology would avoid the requirement of the electroplating, but it is not able to include insulating areas and produces relatively rough surfaces. Mechanical polishing would eliminate one of the principal advantages of our approach. Several other techniques exist that can provide resolution superior to that of stereolithography, but they use materials that cannot be metal coated and are thus not applicable in our experiment, or that are porous and therefore not vacuum-compatible. (Fig. 1d/e caption: A laser beam crosses the molecular beams behind the guide structure to ionise molecules emerging from guides Q1 and Q2. Detectors D1 and D2 record signals from each of the guides individually. Guides can be blocked by retractable beam flags BF1 or BF2. e: Complete guide assembly.) EXPERIMENTAL The beam splitter itself is shown in figure 1a, and the combination of guides used in the present experiment is displayed in figure 1e (standard stereolithography (STL) files of the beam splitter and guides are provided as supplementary material to this paper [30]). The top view in figure 1d shows the experimental arrangement, housed inside a high-vacuum chamber: a pulsed (General Valve Series 9) supersonic beam of ND 3 (5% seeded in Ne, stagnation pressure 2 bar, 20 Hz) is skimmed (diameter 3 mm) and experiences free flight of 235 mm before being fed into the first straight hexapole segment on the right of the figure. Molecules fly through the hexapole and reach the region where the hexapole splits into two quadrupoles. Two separate beams are formed and are detected individually, using two separate channeltron detectors located behind each of the quadrupole guides. A single focused laser beam crosses both molecular beams and ionizes the ND 3 by rotationally resolved resonance-enhanced multiphoton ionization (REMPI). The base pressure obtained in the guide chamber was 8 × 10⁻⁸ mbar, and rose to 5 × 10⁻⁶ mbar during operation, sufficient for the present experiments. While the material used here does not allow for baking, as the structural properties break down above ≈80 °C, a new resin has become available since these experiments were performed, producing pieces that can be heated to over 250 °C. [31] Preliminary tests with this material using plastic components without metal plating permit a base pressure of 6 × 10⁻¹⁰ mbar after baking at around 200 °C for two days. [32] Different REMPI transitions of ND 3 have been extensively studied in the past. [33][34][35][36] The [2+1] scheme chosen here, namely through selected B(ν 2 ) vibronic levels, was used in previous studies in the context of Stark deceleration and velocity filtering of ammonia.
[14,15] In this scheme, two photons around 315 nm excite a transition from the X ground state to a selected B(ν 2 , J) level which is then ionized by the third photon. To characterize the guided sample in our experiment we make use of selection rules governing the B←X transition (as sketched at the top of figure 3): the two components of the inversion doublet in ammonia have differing parity, and the parity of sequential vibrational levels of the ν 2 umbrella mode in the B state alternates. Consequently, transitions from each of the inversion components can be excited solely to either even or odd vibrational levels. By making use of this technique, we are able to differentiate between ND 3 molecules which emerge from the beam splitter and those which are un-guided and originate directly from the supersonic expansion. A top view of the electric field distribution in the main guide element is shown in figure 1b. The single electric field minimum in the hexapole is converted into two field minima in the region of the two quadrupoles, and the ND 3 molecules are kept on the axes by the edge fields from the guides. The entire guide structure in figure 1a was printed (Formlabs Form 2) as a single poly-methyl methacrylate (PMMA) piece. A principal advantage of the 3D printing is that it allows the entire structure to be produced as a single plastic piece. This plastic piece is selectively electroplated to produce two separate and electrically isolated electrodes that, during operation, are kept at opposite polarity. Of the four support structures visible in figures 1a and 1c to the right and to the left, two (one on either side) support one set of electrodes while the remaining two support the other set of electrodes. The cone-shaped supports in the middle are each connected to a single electrode. This enables the separate application of voltages to each of the electrode groups. Printing the structure as a single piece ensures the relative electrode positions are produced at a precision defined by the resolution of the printer. The 3D printer used here polymerises a proprietary methyl methacrylate resin and has a specified vertical resolution of 25 µm. With regard to the horizontal resolution, the supplier specifies the laser spot size as 140 µm, but the printing precision at feature sizes of >200 µm is substantially below 10 µm because the spot size is stable and taken into account during the printing process. [37] Electric conductivity is obtained from the PMMA piece by selectively coating certain regions with a layer of nickel a few tens of µm thick using a combination of chemical and electrolytic procedures (performed by Galvotec GmbH, Switzerland [38]), thus producing very high quality surfaces. Since galvanization is a solution-based process, any kind of shadowing is strongly reduced and an even application of the metallic layer is ensured. Local electric field enhancements at edges or protrusions can lead to extraction of electrons from the bulk metal and thus to arcing. [24] In order to test the high-voltage compatibility of the printed parts, we constructed identical test electrodes once from stainless steel and once through printing and galvanisation. The geometry of the electrodes was such as to mimic a short section of the guides: the closest approach was through rounded, 4 mm diameter edges at a minimum distance of 2 mm. The electrodes were tested with the same power supplies and in the same high vacuum chamber.
Both electrode setups were conditioned up to ±20 kV (≈200 kV cm⁻¹), reaching the limit of our power supplies. It is important to note that the conditioning of the printed piece required more time and smaller voltage increments than that of the stainless steel piece. Both electrodes exhibited intermittent arcing during conditioning, but this left no visible trace on either of the electrodes upon subsequent inspection. Figure 2 shows electron micrographs of 3D printed, metal-coated pieces (panels a-d) and of polished stainless steel pieces for comparison (panels e and f). Panels a and b show that the electroplated piece is not perfectly flat and reveals minor defects; the major part of the surface is, however, devoid of any defects, and in particular edges and protrusions are absent. Defects are either relatively large rounded bumps in the surface, as shown in the inset in panel a, that do not dramatically alter the electric field distribution, or they resemble the cavity shown in the inset of panel b, where the field distortions lead to protected pockets. These forms of defects are not believed to efficiently promote electrical breakdown. [24] Panels c and d show very smooth surfaces that ensure very good performance of the printed electrodes in strong electric fields. In contrast to this, the metal surface in panels e and f shows scratches at a nm scale, and these scratches can ultimately limit the performance of the electrodes in electric fields. It should be noted that the polishing of these metal pieces was not chosen to produce the highest possible quality but to be representative of the resultant surface quality of complex electrode structures and typical time constraints. Better surface qualities undoubtedly are possible, but they require very tedious and slow procedures, and in some cases are limited in terms of the structures they can be applied to. RESULTS The principal results of our study are collected in figures 3 and 4. Panels 3a and 3b show the REMPI signals recorded for the direct supersonic expansion (black traces) and the guided molecules (red traces). At the electric field strength used experimentally, the two levels of the inversion doublet split into Stark states with different projections of the rotational angular momentum vector J on the electric field axis. Using the quantum number M J to label these states, we can express the energy of each level as

W = ±[(W_inv/2)² + (μ₀ E M_J K/(J(J+1)))²]^(1/2), [3]

where W_inv is the zero-field splitting of the inversion doublet (W_inv = 0.053 cm⁻¹ for ND 3 [39]), μ₀ is the permanent electric dipole moment of ND 3 (1.5 D [40]), and E is the electric field magnitude. The force that the states feel is given by the gradient of the potential energy, and points towards stronger fields for levels with M_J K > 0, and towards weaker electric fields for levels with M_J K < 0. This divides the states into high-field seeking states and low-field seeking states. [3] Levels with M_J K = 0 in ammonia have nearly field-independent energies. Only the upper component of the tunnelling doublet in ammonia, labelled X(1), produces low-field seeking states. In the electrostatic guides presented here, the electric field is zero in the center and increases towards the edge of the guide, as can be seen in figure 1b. Low-field seeking states thus feel a force towards the center of the guide and are radially confined, while high-field seeking states are expelled from the center and subsequently ejected from the guide.
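For readers who want to explore the level structure numerically, the following Python sketch evaluates the Stark-shift formula reconstructed above for ND3. The dipole-moment-to-wavenumber conversion factor is an approximate constant (1 D in a field of 1 kV/cm corresponds to about 0.0168 cm⁻¹), and the specific field values are illustrative only.

```python
import numpy as np

# Stark energy of an ND3 |J, K, MJ> inversion-doublet level, relative to the
# doublet centre, following the formula reconstructed in the text above.
W_INV = 0.053                 # zero-field inversion splitting, cm^-1
MU0_D = 1.5                   # permanent dipole moment, Debye
D_KVCM_TO_CM1 = 0.0168        # 1 Debye * 1 kV/cm expressed in cm^-1 (approximate)

def stark_energy(E_kvcm, J, K, MJ, upper=True):
    """Energy of the upper (+) or lower (-) inversion component in field E."""
    mu_eff = MU0_D * D_KVCM_TO_CM1 * MJ * K / (J * (J + 1))
    shift = np.sqrt((W_INV / 2) ** 2 + (mu_eff * E_kvcm) ** 2)
    return shift if upper else -shift

# The upper component (the low-field seeking X(1) state) rises with field:
for E in (0.0, 50.0, 100.0):
    print(f"E = {E:5.1f} kV/cm -> W = {stark_energy(E, J=1, K=1, MJ=1):.4f} cm^-1")
```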
Both tunnelling components are essentially equally populated in the original supersonic expansion, and the guiding dynamics ensures that only the low-field seeking component is populated after exiting the electric guides. The REMPI spectrum of the guided sample can no longer contain signal from any transitions through the even ν 2 levels, [14] thus producing a spectroscopic fingerprint of the correct functioning of the beam splitter: as explained above and sketched at the top of figure 3, selection rules allow the two-photon excitation from the X(1) state of B(ν 2 =odd) levels only, while only B(ν 2 =even) levels can be excited from the high-field seeking X(0) state. Several transitions are observed in the black spectra in panels 3a and b, mainly from the J=1, K=0 and 1 rotational levels. In contrast, only guided states are visible in the red traces, thus simplifying the spectrum of 3a to essentially transitions from the J=1, K=1 state, and reducing the spectrum in 3b to pure background. To force the ND 3 molecules around the bend in the quadrupole guides, a minimum electric field is required to overcome the centrifugal force. [13] (Fig. 4 caption: a: A comparison of the correlation between the normalized signal intensity recorded on detectors D1 and D2 (red) and uncorrelated signal intensity (black). Each point represents one laser shot. b: Histogram of the ratio between the signals on D1 and D2 for the correlated data (red) and uncorrelated data (black). The correlation coefficient for the correlated data in a is 0.89. The standard deviation of the uncorrelated (correlated) distribution in b is 40% (15%).) In figure 3c the potential difference on the guide electrodes is gradually increased to the maximum of 22 kV (corresponding to ≈110 kV cm⁻¹). Two principal effects lead to the observed increase in signal: 1. the transverse fields in the entire, straight and curved, guide keep the molecular beam collimated and transport it to the detection region, and 2. in the curved quadrupole section the electric fields compensate for the centrifugal force. The molecules from the supersonic expansion have a forward velocity of several 100 m s⁻¹, and a considerable field is required to bend them around the slight curvature in the current experimental setup. The dynamics leading to the observed dependence of the signal on the applied voltage is reminiscent of the transmission curves recorded for guide-based velocity filters. [12][13][14] Higher voltages lead to a deeper trap that allows the guiding of faster molecules. In contrast to the velocity-filter experiments in references 12-14, the present velocity distribution is very narrow, and the transmission curve converges when the entire distribution is successfully guided. The inset in figure 3c shows the signal on detectors D1 and D2 when the beams from Q1 and Q2 are selectively blocked. By selectively inserting the beam flags in one or both beam paths we confirm that the detected molecules indeed originate from the molecular beam. In order to perform differential measurements, a correlation is required between probe and reference signals. In figure 4a the simultaneously measured, normalised signal intensities from 5000 single-pulse measurements on detectors D1 and D2 are plotted against each other (red dots). Both signals individually fluctuate, due to shot-to-shot variations of density and detection laser; however, there is a clear correlation, and all data points lie near the 45-degree diagonal.
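The correlation analysis of figure 4 can be reproduced in outline with synthetic data, as in the Python sketch below. The noise amplitudes are invented for illustration (chosen merely to give qualitatively similar statistics, not the published 0.89/40%/15% values), but the one-shot-offset trick for building the uncorrelated reference data set is exactly the procedure described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
common = rng.normal(1.0, 0.3, 5000)           # shared shot-to-shot fluctuation
d1 = common * rng.normal(1.0, 0.08, 5000)     # detector-specific noise, D1
d2 = common * rng.normal(1.0, 0.08, 5000)     # detector-specific noise, D2

r = np.corrcoef(d1, d2)[0, 1]                 # correlation coefficient
ratio_corr = d1 / d2                          # correlated D1/D2 ratios
ratio_uncorr = d1 / np.roll(d2, 1)            # offset D2 by one shot -> uncorrelated

print(f"r = {r:.2f}")
print(f"relative std, correlated ratio:   {np.std(ratio_corr) / np.mean(ratio_corr):.0%}")
print(f"relative std, uncorrelated ratio: {np.std(ratio_uncorr) / np.mean(ratio_uncorr):.0%}")
```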
To compare to the expected result from a single-shot measurement, the same data were used to produce an uncorrelated data set by offsetting the D2 signal by one laser shot while keeping that from D1 unaltered (black dots in figure 4a). As expected, the signal levels are uncorrelated and fill the full plot range of figure 4a. Histograms of the D1/D2 ratios for both the correlated (red) and uncorrelated (black) data sets are shown in figure 4b. The uncorrelated data have a standard deviation of 40%, while the correlated data have a standard deviation of 15% and a correlation coefficient of r = 0.89. The prime source of the remaining fluctuations in the correlated data is believed to be the detection scheme itself. The efficiency of a [2+1] REMPI scheme depends non-linearly on the laser power. While a single laser was focused in the center between the two emerging beams and used for the detection of molecules from both guides, fluctuations in the beam profile, laser power, and position of the focal point would affect the two signals differently. SUMMARY A streamlined development and manufacturing technique making use of 3D printing technology has been used to produce a new type of electrostatic electrode geometry for use in molecular beam experiments. The 3D printing method itself opens a plethora of new possibilities for scientific studies since 3D printing imposes practically no limitations on the shapes that can be produced, and the metal-plating converts any plastic piece into a chemically robust conductive construction element, with the added advantage of reduced cost and production times. The entirely digital workflow employed here furthermore offers the possibility to produce an inherently exact replica of a complete experimental setup in any laboratory, simply by transferring a small file and employing local production infrastructure. This infrastructure is a moderately priced piece of equipment that requires no specialized personnel and can thus also be operated at places that do not offer sophisticated workshop infrastructure. We demonstrate that our device cleanly splits a molecular beam into two fractions of comparable and correlated densities and thus is capable of reducing shot-to-shot signal fluctuations. These measurements point towards the possibility of using the device for probe-and-reference type experiments, in which one of the beams is used experimentally and the other beam serves as a blank. Rotating the beam splitter by 180 degrees and injecting two separate supersonic expansions will enable the merging of two beams of polar molecules and allow low-temperature reactivity studies between them. Merged beam experiments allow reactions to be studied at collision energies substantially below 1 K. [17,18,41] The current merged beam experiments superpose a magnetically guided beam with either an electrostatically guided or a non-guided secondary beam. Because the electric fields from a guide are present both inside and outside the electrode structures, the analogous injection of a polar beam into an electrostatic guide would be very difficult. In contrast, merging a beam using the inverted beam splitter will be a much less challenging experiment to perform, and the detailed investigation of dipole-dipole interactions in merged molecular beams is now within reach.
Search-and-Attack: Temporally Sparse Adversarial Perturbations on Videos Modern neural networks are known to be vulnerable to adversarial attacks in various domains. Although most attack methods usually densely change the input values, recent works have shown that deep neural networks (DNNs) are also vulnerable to sparse perturbations. Spatially sparse attacks on images or frames of a video have proven effective, but temporally sparse perturbations on videos have been less explored. In this paper, we present a novel framework to generate a temporally sparse adversarial attack, called the Search-and-Attack scheme, on videos. The Search-and-Attack scheme first retrieves the most vulnerable frames and then attacks only those frames. Since identifying the most vulnerable set of frames involves an expensive combinatorial optimization problem, we introduce alternative definitions or surrogate objective functions: Magnitude of the Gradients (MoG) and Frame-wise Robustness Intensity (FRI). Combining them with iterative search schemes, extensive experiments on three public benchmark datasets (UCF, HMDB, and Kinetics) show that the proposed method achieves comparable performance to state-of-the-art dense attack methods. I. INTRODUCTION In recent years, deep neural networks (DNNs) have shown great performance in various tasks such as image classification [1]-[4] and object detection [5]-[8]. Despite the success of modern deep neural networks, DNNs are known to be vulnerable to adversarial attacks, which are carefully crafted perturbations to fool machine learning models. Even though generating optimal adversarial perturbations is an NP-hard problem [9], [10], simple gradient-ascent based attack algorithms [11]-[14] have proven effective at deceiving deep neural networks. To generate small and possibly imperceptible adversarial perturbations, various constraints (e.g., l∞- or l2-norm) are often imposed. While much effort has been devoted to adversarial attacks on images, adversarial attacks have been less studied in the video domain. Recently, several studies [15]-[17] have been conducted to investigate adversarial attacks on videos, but they do not fully leverage the relation of frames. More precisely, those studies have either attacked all frames or regularly sampled frames without considering temporal dependency between frames. This line of approaches is suboptimal for attacking video classification models that utilize the temporal information of videos. Even though most state-of-the-art video classification models use 3D convolution [18]-[20] to capture temporal information, the models cope with the temporal dynamics differently by using different strides along the time direction for their efficiency and flexibility. SlowFast Networks [21] even use two separate 3D convolutional neural networks with two different frame rates. Exploiting both the common traits of video classification models, such as 3D convolution, and architecture-specific vulnerability is key to developing less perceptible (or temporally sparse) and stronger adversarial attacks. (Fig. 1 caption: Overview of our Search-and-Attack pipeline. At the first stage (frame selection), with the target model parameter θ and the input x, our algorithms find the most vulnerable frames with respect to the surrogate losses: MoG (Magnitude of the Gradients) or FRI (Frame-wise Robustness Intensity).
Then, the multi-step frame selection method, either greedy search or section search, is applied to generate the index set I. Here, Greedy Search iteratively updates its input video and Section Search selects each frame from equally divided sections. Finally, at the second stage (perturbation generation), our algorithms create an adversarial perturbation δ_I by FGSM or PGD, where the perturbation exists only on the frames selected by the search stage.) In this paper, we propose a simple yet effective method for temporally sparse adversarial attacks against video classification models. We propose a two-stage pipeline called Search-and-Attack, which finds the most vulnerable frames of a video at the Search stage and then perturbs only those frames at the Attack stage. We formally define temporally sparse attacks and the most vulnerable frame set. Identifying the most vulnerable frames involves a nonlinear mixed-integer program. For efficient optimization, instead of the conventional loss function, we propose surrogate objective functions and find vulnerable frames with respect to the surrogate losses. With the surrogate losses, we explore a single-step method and two iterative methods for frame selection. In the attack stage, we generate a frame-wise perturbation with modified versions of FGSM [12] and PGD [13] that only perturb the frames selected in the search stage. To test the performance of the proposed method, we carry out experiments on the HMDB, UCF, and Kinetics datasets with widely used action-recognition models such as I3D, R3D, R(2+1)D, SlowFast, and IRCSNv2. The experiments show that attacking only a few vulnerable frames is as strong as attacking all frames. To sum up, our main contributions are fourfold. • We formally introduce temporally sparse adversarial attacks for video action recognition models. • We propose frame search algorithms with surrogate objective functions to identify vulnerable frames. • We study model-specific temporal vulnerability of video classification models. • Our experiments on the HMDB, UCF, and Kinetics datasets demonstrate that the proposed methods successfully search vulnerable frames and generate strong temporally sparse adversarial perturbations on videos. A. ACTION RECOGNITION Many studies have investigated various models for action recognition. One approach is CNN+RNN-based models, which train RNN models on a sequence of frame features over time with CNNs [22]. Another common approach is 3D convolutional neural networks that take a set of frames as an input tensor. Some models learn spatial and temporal features concurrently [18], [19], while some models consist of separable 3D convolutions in which temporal and spatial convolutions are implemented separately [20]. In addition, there are two-stream CNNs with RGB images and optical flows. They use the information of short-term motions to supplement the information of object appearance [18]. B. ADVERSARIAL ATTACK Adversarial attacks have been studied through a variety of approaches. The attack methods have mostly been demonstrated in image classification. Gradient-based methods are among the most common methods for adversarial attack; they modify the input images based on the gradients of the cost function [12], [13]. Although these attacks usually densely change the input values, recent works have shown that DNNs are also vulnerable to sparse perturbations.
For example, the Jacobian-based Saliency Map Attack (JSMA) constructs a saliency map, which shows the impact of each pixel on the resulting classification, through forward gradients [23]. Based on the saliency map, the JSMA iteratively attacks the most influential pixel. Moreover, [14] suggests optimization-based adversarial attacks that use the l0-norm distance metric. Recent works have studied sparse and imperceptible perturbations to introduce more effective attack methods; for more details, see e.g., [24]-[26]. C. ADVERSARIAL ATTACK TO VIDEO Regarding the video domain, [15] has explored adversarial attacks on videos. They proposed a method that optimizes a spatio-temporally sparse adversarial perturbation with a CNN+RNN [22]. However, they just sampled frames without considering temporal dependency between frames. On the other hand, [17] proposed a dense adversarial attack on a two-stream action-recognition model, which consists of optical flow and RGB. They extended FGSM and iterative FGSM attacks to the video domain [12], [27]. Unlike those optimization-based methods, attacks using a generative model have also been suggested. For example, [16] generated black-box adversarial perturbations to a real-time action-recognition model using Generative Adversarial Networks (GANs). III. METHODS In this section, we first formally introduce temporally sparse adversarial perturbations. Then, we propose our Search-and-Attack methods, which are efficient two-stage algorithms. At the first stage (frame selection), our algorithms search the most vulnerable frames with respect to surrogate objective functions, and at the second stage (perturbation generation), our algorithms attack the selected frames by FGSM or PGD. A. TEMPORALLY SPARSE ADVERSARIAL ATTACKS Let θ denote the parameters of neural networks and x ∈ R^(T×C×H×W) denote an input video with its ground-truth target label y, where T, C, H, and W are the number of frames, the number of channels, the height, and the width of a frame, respectively. The i-th frame of an input video x is denoted by x^(i). Our goal is to attack neural networks for video classification by temporally sparse perturbations. The goal can be achieved by finding the most vulnerable N frames I ∈ B^T and perturbing only the selected frames. This can be written as follows:

max_{I, δ_I} L(x + δ_I, y; θ) subject to |I| = N, ‖δ^(i)‖_∞ ≤ ε for all i ∈ I, (1)

where L is the loss function and δ^(i) ∈ R^(T×C×H×W) denotes the perturbation on the i-th frame of the video, i.e., all elements of δ^(i) are zeros except those corresponding to the i-th frame. In other words, the perturbation on the most vulnerable N frames, δ_I, is defined as the sum of the frame-wise perturbations δ^(i) of the selected frames where i ∈ I. (Algorithm 1: Search-and-Attack Method. Input: video x and its label y, model parameters θ, the number of attacked frames N, surrogate loss J, attack method A. Output: perturbed video x.) It is worth noting that Eq. (1) is a mixed-integer non-linear program (MINLP). Since MINLP is an NP-hard combinatorial problem [28], no polynomial-time algorithm has been found yet to seek the global optimal solution. The feasible set of (1) contains (T choose N) candidate index sets I, and even for small problems, e.g., T = 32, N = 4, this number is huge (35,960). Besides, the continuous variables δ^(i) need to be simultaneously optimized. Hence, in this paper, we instead decompose the problem in (1) into two subproblems: (i) search (frame selection by optimizing I), and (ii) attack (perturbation generation δ_I for the selected frames). Figure 1 illustrates the overall pipeline of our approach.
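The decomposition into search and attack subproblems maps directly onto a short piece of code. The following PyTorch sketch of the single-step variant of Algorithm 1 assumes the surrogate returns one score per frame and that the attack respects the selected index set; both `surrogate` and `attack` are placeholders for the functions sketched after Sections III-B and III-D below, and the (T, C, H, W) input layout follows the notation above rather than any particular model's convention.

```python
import torch

def search_and_attack(model, x, y, N, surrogate, attack, eps):
    """Two-stage Search-and-Attack sketch (single-step frame selection):
    score every frame with a surrogate loss, keep the top N, then
    perturb only those frames. x has shape (T, C, H, W)."""
    scores = surrogate(model, x, y, eps)        # one vulnerability score per frame
    frame_idx = torch.topk(scores, N).indices   # the N most vulnerable frames
    delta = attack(model, x, y, frame_idx, eps) # perturbation, zero off-index
    return x + delta, frame_idx
```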
B. SURROGATE OBJECTIVE FUNCTIONS
At the search stage, we propose an alternative formulation with a surrogate objective function J to perform the frame selection. The optimization problem in (1) is reduced to the following integer program:

I* = argmax_{I : |I| = N} J(x, y; θ, I).   (2)

Alg. 1 shows how the generalized search method works.

1) MAGNITUDE OF THE GRADIENTS (MoG)
We derive the first surrogate objective function from the first-order Taylor expansion of the loss function L:

L(x + δ_I, y; θ) ≈ L(x, y; θ) + Σ_{i∈I} ⟨∇_x L(x, y; θ), δ^{(i)}⟩.   (3)

From the maximum perturbation constraint, i.e., ‖δ^{(i)}‖_∞ ≤ ε in (1), an upper bound on the first-order approximation can be derived:

L(x + δ_I, y; θ) ≤ L(x, y; θ) + ε Σ_{i∈I} ‖∇_{x^{(i)}} L(x, y; θ)‖_1.   (4)

The upper bound is governed by the sum of the ℓ1-norms of the frame-wise gradients, so we name this surrogate loss the sum of Magnitudes of Gradients (MoG). For a fixed data point x, since ε and L(x, y; θ) are constant, we define

J_MoG(x, y; θ, I) = Σ_{i∈I} ‖∇_{x^{(i)}} L(x, y; θ)‖_1,   (5)

which has the same optimal solution as maximizing the bound in Eq. (4).

2) FRAME-WISE VULNERABILITY (FRI)
To propose the second surrogate loss function, we assume that the loss function is locally linear around the data point x. More precisely, we assume that the increase of the loss value caused by δ_I is approximately the sum of the increases induced by the individual δ^{(i)}:

L(x + δ_I) − L(x) ≈ Σ_{i∈I} [L(x + δ^{(i)}) − L(x)].   (6)

To avoid clutter, y and θ are omitted in the equation. Similar to MoG, the surrogate loss function can be further simplified by removing the constant L(x). The second surrogate objective function is then given as

J_FRI(x, y; θ, I) = Σ_{i∈I} L(x + δ^{(i)}, y; θ).   (7)

We name this surrogate loss the sum of FRame-wise vulnerability (FRI). The frame-wise perturbation δ^{(i)} is generated by a simple method; more details are discussed in Section III-D. Note that the optimal solution I* with respect to MoG and FRI (given δ) is simply the top N frames with the largest gradient norm or frame-wise vulnerability. So, a single-step frame selection algorithm can be implemented efficiently by sorting.

C. FRAME SELECTION
As mentioned above, the proposed surrogate loss functions (MoG and FRI) yield single-step frame selection algorithms. Those surrogate losses are derived based on linearity, e.g., Eq. (3) and (6). However, if the number of frames to attack increases, the surrogate loss becomes inaccurate due to the non-linearity of the original loss function L, i.e., the discrepancy between the original loss and the surrogate loss grows, so the frame selection becomes suboptimal. To address this problem, we propose iterative frame selection algorithms. Since δ_I is defined as the sum of the δ^{(i)}, the iterative frame selection estimates only one frame perturbation per step.

1) GREEDY SEARCH
We propose a Greedy Search algorithm for frame selection, which iteratively selects frames while updating the input video x. The greedy search algorithm in Alg. 2 selects one frame at each iteration based on J, where J can be either J_MoG or J_FRI. Then, the selected frame i* is perturbed by δ^{(i*)}, i.e., x_p ← x_p + δ^{(i*)} (line 8 in Alg. 2). The updated video is used in the next iteration to find the subsequent vulnerable frame.

Algorithm 2 Greedy Search for Frame Selection
Input: video x, label y, model parameters θ, the number of frames to attack N, surrogate loss J, attack method A
Output: frame index set I
1: function GreedySearch(x, y, θ, N, J, A)
2:   I ← ∅
3:   x_p ← x
4:   for n = 1 to N do
5:     compute the frame-wise perturbations δ^{(i)} for the current x_p
6:     i* ← argmax_{i∉I} J(x_p + δ^{(i)}, y; θ)
7:     I ← I ∪ {i*}
8:     x_p ← x_p + δ^{(i*)}
9:   end for
10:  return I
11: end function
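The single-step selection of Section III-B can be summarized in a few lines. The sketch below assumes a PyTorch classifier taking a (1, T, C, H, W) video, and — since the exact frame-wise δ^{(i)} is only specified in Section III-D — it uses an FGSM-style per-frame step as a stand-in when scoring FRI; both assumptions are illustrative.

```python
import torch
import torch.nn.functional as F

def mog_scores(model, x, y):
    """J_MoG per frame: l1-norm of the loss gradient restricted to each frame."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    (grad,) = torch.autograd.grad(loss, x)
    return grad.abs().flatten(1).sum(dim=1)          # shape (T,)

def fri_scores(model, x, y, eps):
    """J_FRI per frame: loss after perturbing only that frame.
    Assumes an FGSM-style frame-wise delta^(i) (see Section III-D)."""
    base = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(base.unsqueeze(0)), y.unsqueeze(0))
    (grad,) = torch.autograd.grad(loss, base)
    scores = torch.empty(x.shape[0])
    for i in range(x.shape[0]):
        xi = x.clone()
        xi[i] = (xi[i] + eps * grad[i].sign()).clamp(0, 1)   # delta^(i)
        with torch.no_grad():
            scores[i] = F.cross_entropy(model(xi.unsqueeze(0)), y.unsqueeze(0))
    return scores

def single_step_select(scores, N):
    """Top-N frames by surrogate score: the sorting-based single-step search."""
    return set(torch.topk(scores, N).indices.tolist())
```

The iterative searches of Section III-C reuse these scores: Greedy Search re-evaluates them after each accepted frame, while Section Search restricts the argmax of a single evaluation to each section S_n.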
Although the greedy algorithm is computationally more expensive than single-step frame selection, it generally provides better frame selection. The experimental results for the greedy algorithm are given in Section IV-D.

2) SECTION SEARCH
In the experiments with Greedy Search, we observed that the selected frames tend to be more dispersed than those of the non-iterative method; more details are discussed in Section IV-D2 and Section IV-E. Motivated by this observation, we propose a more efficient iterative method called Section Search, which selects one frame from each of N equally divided sections. The section search algorithm divides the input video into sections {S_n}, n = 1, …, N, of equal length r = T/N, i.e., S_n ← {(n−1)r + j : j = 1, …, r} (line 4 in Alg. 3). Then, for each section S_n, exactly one frame is selected based on J. Since the section search does not update the input, it is computationally more efficient than the greedy search algorithm, while still generally providing better frame selection than single-step selection. Figure 2 illustrates the idea of the section search algorithm.

Algorithm 3 Section Search for Frame Selection
Input: video x, label y, model parameters θ, the number of frames to attack N, surrogate loss J, attack method A
Output: frame index set I
1: function SectionSearch(x, y, θ, N, J, A)
2:   I ← ∅
3:   r ← T/N
4:   S_n ← {(n−1)r + j : j = 1, …, r} for n = 1, …, N
5:   compute the frame-wise perturbations δ^{(i)}
6:   for n = 1 to N do
7:     i* ← argmax_{i∈S_n} J(x + δ^{(i)}, y; θ)
8:     I ← I ∪ {i*}
9:   end for
10:  return I
11: end function

D. PERTURBATION GENERATION
We use the FGSM or PGD attack to generate adversarial perturbations. Unlike previous adversarial attacks on video [15], [17], we apply FGSM and PGD only to the frames I selected at the frame selection stage. The FGSM attack in our framework can be written as

A_FGSM(x, y, θ, I) = ε · m_I ⊙ sign(∇_x L(x, y; θ)),

where ⊙ is the Hadamard (element-wise) product and m_I is the binary mask whose entries are 1 on the frames in I and 0 elsewhere. Similarly, the update step of the PGD attack in our framework for iteration k ∈ {1, …, n} is defined as

δ_k = Π_ε(δ_{k−1} + α · m_I ⊙ sign(∇_x L(x + δ_{k−1}, y; θ))),

where Π_ε denotes the projection onto the set of admissible perturbations {δ : ‖δ‖_∞ ≤ ε}. Starting from δ_0 = 0, the perturbation of the PGD attack is obtained after n updates, i.e., A_PGD(x, y, θ, I) = δ_n(x, I).
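The masked attacks of Section III-D can be sketched as follows. The model signature (a classifier on a (1, T, C, H, W) tensor) is an assumption for illustration; α, ε, and the iteration count n correspond to the PGD parameters reported in the experiments.

```python
import torch
import torch.nn.functional as F

def masked_fgsm(model, x, y, I, eps):
    """FGSM restricted to the frames in I (Hadamard product with the mask m_I)."""
    m = torch.zeros_like(x)
    m[list(I)] = 1.0
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req.unsqueeze(0)), y.unsqueeze(0))
    (grad,) = torch.autograd.grad(loss, x_req)
    return eps * m * grad.sign()

def masked_pgd(model, x, y, I, eps, alpha, n_iter):
    """PGD restricted to the frames in I; clamp implements the projection
    Pi_eps onto the l_inf ball of radius eps."""
    m = torch.zeros_like(x)
    m[list(I)] = 1.0
    delta = torch.zeros_like(x)
    for _ in range(n_iter):
        x_adv = (x + delta).requires_grad_(True)
        loss = F.cross_entropy(model(x_adv.unsqueeze(0)), y.unsqueeze(0))
        (grad,) = torch.autograd.grad(loss, x_adv)
        delta = (delta + alpha * m * grad.sign()).clamp(-eps, eps).detach()
    return delta
```

Because the mask zeroes the gradient step outside I, the returned perturbation is exactly δ_I: nonzero only on the selected frames.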
IV. EXPERIMENTS
In this section, we validate the effectiveness of our attack methods on three video classification datasets: UCF101, HMDB51, and Kinetics400. We first briefly introduce the datasets and provide implementation details. We then present the experimental results of single-step attacks, greedy search, and section search. Lastly, we discuss the observation that base networks in video classification have architecture-specific vulnerability and in which circumstances the proposed methods are recommended.

A. DATASET
The UCF101 dataset [29] is widely used in video action recognition. It consists of 13,320 video clips from 101 human action categories, with a total length of 27 hours and an average duration of 7.2 seconds. All videos were collected from YouTube and have a fixed frame rate of 25 FPS at a resolution of 320 × 240. We split UCF101 according to the official split-1, which divides the dataset into an 8K training set and a 3K testing set.

The HMDB51 dataset [30] is a large collection of realistic videos from various sources, including movies and web videos. It is composed of 6,849 video clips from 51 action categories (such as ''jump'', ''kiss'', and ''laugh''), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits; in each split, each action class has 70 clips for training and 30 clips for testing. We use split-1 for our experiments.

The Kinetics400 dataset [31] is one version of the Kinetics dataset, a large-scale collection of high-quality video URLs comprising about 650,000 video clips across its 400/600/700-class variants. The videos include human-object interactions, such as playing instruments, as well as human-human interactions, such as handshaking and hugging. Each clip is annotated with a single action class and lasts around 10 seconds. We split Kinetics400 according to the official split in [31]: 250-1000 videos per class for training, 50 videos per class for validation, and 100 videos per class for testing.

C. SINGLE-STEP FRAME SELECTION
We compare the surrogate loss functions MoG and FRI introduced in Section III-B. Table 2 shows the performance of various video classification models against the attacks based on MoG or FRI on UCF101; FGSM was used for perturbation generation. Overall, FRI chooses more vulnerable frame sets than MoG, achieving lower classification accuracy. FRI in (7) is the sum of the loss changes induced by 'actually' perturbed frames with δ^{(i)}, which allows a more accurate estimation of frame vulnerability than the norm of the gradients. However, FRI has a relatively higher computational cost than MoG, since FRI needs N inferences to evaluate the set of values L(x + δ^{(i)}, y; θ). In other words, there is a trade-off between MoG and FRI with respect to time complexity and the quality of frame selection (or the success rate of attacks). We additionally provide experimental results on how the attack performance changes with varying N (see Fig. 3). Although the performance of the proposed attack depends on the choice of N, the results are reported with N fixed at 8. Since the differences between the threat models are insignificant for lower-sparsity attacks (N ≤ 8), we fix N = 8 to clearly show the characteristics of each threat model while maintaining 25% sparsity (8 of 32 frames).

D. ITERATIVE FRAME SELECTION
We evaluate the effectiveness of the iterative frame selection methods: greedy search and section search. We also provide experimental results that show model-specific vulnerability and explain why section search can achieve comparable adversarial attacks at a much lower computational cost than greedy search.

1) GREEDY SEARCH
We compare greedy search-based frame selection with the single-step frame selection methods.

TABLE 3. Classification accuracy on UCF101. We compare single-step frame search and greedy search (denoted with the suffix '-G') for frame selection. The adversarial attacks are generated by FGSM after the frame selection. Overall, greedy search-based frame selection achieves a 2.37% higher success rate on average than single-step frame selection.

With the same surrogate objective functions, MoG and FRI, as in single-step frame selection, we applied the greedy search, which chooses one frame at a time and updates the surrogate objective function. The greedy search algorithms with MoG and FRI are denoted by MoG-G and FRI-G, respectively. Table 3 shows the classification accuracy on UCF101 against adversarial examples. Adversarial attacks (FGSM) with greedy search-based frame selection achieve 2.37% higher attack success rates than those with single-step frame selection. For R3D, R(2+1)D, and IRCSNv2, the improvement by greedy search is significant for both surrogate objective functions; in particular, the largest improvement by greedy search is 7% with MoG-G on R(2+1)D.

2) MODEL-SPECIFIC VULNERABILITY
We observe that the performance gain from Greedy Search varies depending on the base network, as shown in Table 3.
Interestingly, Greedy Search overall improves the power of the adversarial attacks, but the improvement for I3D and SlowFast is relatively small. We investigated the different behaviors and found that models have different vulnerabilities at the frame level: for some architectures, single-step methods are sufficient to find a set of vulnerable frames. To analyze this hypothesis, we visualize the ℓ1-norm of each frame's gradient for the baseline models (Fig. 4). In the case of I3D and SlowFast, frames can be categorized into two groups, highly vulnerable frames and others, and the vulnerable group has significantly larger gradients than the non-vulnerable frames. For instance, in Fig. 4, I3D and SlowFast have 8 highly vulnerable frames, and the norms of their gradients are larger than 0.4 and 0.6, respectively. We conjecture that this pronounced difference makes the frame selection problem easy and leads to the small performance gap between single-step methods and greedy search.

TABLE 4. Classification accuracy on UCF101. We compare single-step frame search and section search (denoted with the suffix '-S') for frame selection. The adversarial attacks are generated by FGSM after the frame selection. Overall, section search-based frame selection achieves a 3.63% higher success rate on average than single-step frame selection.

Another interesting observation is that the vulnerable frames are evenly spaced. For example, in the SlowFast case, every fourth frame is a highly vulnerable frame (e.g., {1, 5, 9, …, 29}). Besides, the histogram in Fig. 5 shows that both FRI and MoG with FGSM mostly choose approximately every fourth frame, e.g., {2, 7, 11, 15, …, 31}.

3) SECTION SEARCH
Motivated by this observation, we proposed Section Search in Section III-C2. Our experiments in Table 4 demonstrate that Section Search (denoted by MoG-S and FRI-S) also significantly improves the quality of frame selection, resulting in stronger adversarial attacks than single-step frame selection (denoted by MoG and FRI). Section Search improves the success rate by 3.63% on average on the UCF101 dataset. The largest improvement is 10.2% with R(2+1)D, which is even larger than the improvement by Greedy Search.

E. MAIN RESULTS
For a more comprehensive comparison, we provide experimental results on UCF101, HMDB51, and Kinetics400 in Table 5, Table 6, and Table 7, respectively. Before discussing the performance, we define a metric called Frame Interval Variance (FI-VAR) to summarize the patterns of selected frames; a short computational sketch is given below. Let I denote the set of selected frame indices, and let I_i be the i-th smallest element of I. FI-VAR is defined as the variance of the intervals between the indices:

FI-VAR = Var(d_1, …, d_N), where d_i = I_{i+1} − I_i for i = 1, …, N−1, and d_N = I_1 + T − I_N is added to construct a cyclic interval.

FI-VAR quantifies how evenly the selected frames are spread out. When the frames are evenly spread, e.g., {1, 5, …, 29} as discussed for SlowFast in Section IV-D2, FI-VAR ≈ 0. If the selected frames are congregated, then, thanks to d_N, FI-VAR increases. Randomly scattered indices may also have a large FI-VAR.

FIGURE 6. Attack success rate at a given FI-VAR on UCF101. We randomly select frames with a particular FI-VAR and attack the R3D model on those frames. The horizontal line shows the baseline success rate without controlling FI-VAR. As FI-VAR is varied, it can be seen that the smaller the FI-VAR, the stronger the random-selection attack.
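The FI-VAR metric can be computed in a few lines; the cyclic gap d_N follows the definition above, and the 0-indexed frame indices are an implementation convention.

```python
import statistics

def fi_var(I, T):
    """Frame Interval Variance: variance of gaps between sorted selected
    indices, with a wrap-around gap d_N = I_1 + T - I_N (cyclic interval)."""
    idx = sorted(I)
    gaps = [b - a for a, b in zip(idx, idx[1:])]
    gaps.append(idx[0] + T - idx[-1])        # cyclic interval d_N
    return statistics.pvariance(gaps)

print(fi_var({1, 5, 9, 13, 17, 21, 25, 29}, 32))  # evenly spaced -> 0.0
print(fi_var({0, 1, 2, 3, 4, 5, 6, 7}, 32))       # congregated  -> 63.0
```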
TABLE 5. Classification accuracy on the UCF101 dataset against the baseline and various Search-and-Attack methods. '-G' and '-S' denote the greedy search and section search, respectively. For the PGD attack, we use α = 0.5, ε = 2, n = 4. * and † mark the best and second-best performance, respectively. We select 8 frames for each video.

With FGSM, FRI-S either shows the best performance or the gap to the best performance is very small. Unlike with FGSM, with PGD the improvement from the iterative methods is significant. Interestingly, the FI-VAR of the FRI-G method decreases much more with PGD than with FGSM: when the attack intensity is strong, the frame-by-frame iterative updates naturally select frames that are farther apart.

TABLE 6. Classification accuracy on the HMDB51 dataset against the baseline and various Search-and-Attack methods. '-G' and '-S' denote the greedy search and section search, respectively. For the PGD attack, we use α = 0.5, ε = 2, n = 4. * and † mark the best and second-best performance, respectively. We select 8 frames for each video. As on UCF101, FRI-S generally shows the best performance.

We observe that frame selections with a smaller FI-VAR often lead to stronger attacks. In other words, the more dispersed the frames, the better they serve as candidates to perturb. Fig. 6 shows the relation between FI-VAR and accuracy: here, we randomly select frames and attack them by FGSM, and the attack success rate is negatively correlated with FI-VAR.

Overall, the surrogate objective function FRI shows better performance than MoG. In addition, the iterative algorithms, Greedy Search and Section Search, outperform the single-step frame selection method. In particular, the combination of Greedy Search for frame selection and PGD for perturbation generation achieves the best performance on average in various settings. Table 5 shows that on the UCF101 dataset, Greedy Search with PGD, specifically FRI-G with PGD, achieves the highest attack success rate (i.e., the lowest classification accuracy) for all base networks. Similarly, on HMDB51, Table 6 demonstrates that FRI-G with PGD achieves the best performance in four out of five settings. On Kinetics400 (Table 7), MoG-G with PGD achieves the best performance in three out of five settings, whereas FRI-G with PGD is best in two settings; however, the performance gap between FRI-G and MoG-G with PGD is marginal. In short, FRI-G with PGD is overall the best Search-and-Attack method.

TABLE 7. Classification accuracy on the Kinetics400 dataset against the baseline and various Search-and-Attack methods. '-G' and '-S' denote the greedy search and section search, respectively. For the PGD attack, we use α = 0.5, ε = 2, n = 4. * and † mark the best and second-best performance, respectively. We select 8 frames for each video. As on UCF101 and HMDB51, FRI-S generally shows the best performance.

When efficiency matters for temporally sparse attacks, FGSM is preferred to PGD, and interestingly, in this case Section Search is a better option than Greedy Search. Tables 5, 6, and 7 show that with FGSM, FRI-S outperforms FRI-G by 0.72%, 0.26%, and 0.2% on average on UCF101, HMDB51, and Kinetics400, respectively. It is worth noting that Section Search (FRI-S) is 1.9-2.3× faster than Greedy Search (FRI-G). We conjecture that if the perturbation method is strong (e.g., PGD), the loss may change drastically after each perturbation.
Consequently, the iterative method, especially Greedy Search, is more effective in that case. On the other hand, when the perturbation generation is relatively weak (e.g., FGSM), Section Search is more efficient and achieves a comparable attack success rate.

V. CONCLUSION
In this work, we propose a Search-and-Attack framework for temporally sparse adversarial perturbations on videos. The Search-and-Attack framework has two stages: frame selection (Search) and perturbation generation (Attack). To identify the most vulnerable frames in the Search stage, we explore single-step search methods with the surrogate objective functions MoG and FRI. In addition, for more accurate frame selection at higher computational cost, we propose Greedy Search and Section Search. Our extensive experiments on three benchmark datasets (UCF101, HMDB51, and Kinetics400) show that Greedy Search with FRI and PGD (FRI-G + PGD) achieves the best attack success rate on average. We observe that neural networks for video classification have model-specific vulnerability in the time domain and that evenly spaced frame selections are often effective. Motivated by this observation, we propose Section Search; interestingly, with FGSM, Section Search with FRI (FRI-S + FGSM) shows the best attack success rate while being computationally more efficient than Greedy Search. In our experiments, we assume that the number of frames to attack, N, is given rather than dynamically estimated. Since a strategy for determining such parameters in an unsupervised manner would be desirable, clustering techniques for complex networks [33]-[35] could be used to determine them. Future directions of this work include studying more imperceptible and efficient adversarial perturbations for videos as well as robust neural network architectures for video classification. In particular, since the proposed attack targets neural networks on videos, it can be used to verify the robustness of holistic video understanding systems. For example, it can help verify, or build more robust versions of, a video tagging system [36] or an autonomous car [37] that uses video neural networks to make decisions.
Intelligent Feature Selection for ECG-Based Personal Authentication Using Deep Reinforcement Learning

In this study, the optimal features of electrocardiogram (ECG) signals were investigated for the implementation of a personal authentication system using a reinforcement learning (RL) algorithm. ECG signals were recorded from 11 subjects for 6 days. The consecutive 5-day datasets (from the 1st to the 5th day) were used for training, and the 6th-day dataset was used for testing. To search for the optimal ECG features for the authentication problem, RL was utilized as an optimizer, and its internal model was designed based on deep learning structures. In addition, the deep learning architecture in RL was automatically constructed using an optimization approach called Bayesian optimization hyperband. The experimental results demonstrate that the feature selection process is essential for improving authentication performance with fewer features, enabling a system that is efficient in terms of computational power and energy consumption for a wearable device intended for authentication. Support vector machines in conjunction with the optimized RL algorithm yielded accuracies, using fewer features, that were approximately 5%, 3.6%, and 2.6% higher than those associated with information gain (IG), ReliefF, and pure reinforcement learning structures, respectively. Additionally, the optimized RL yielded mostly lower equal error rate (EER) values than the other feature selection algorithms, with fewer selected features.

Introduction
Security has been considered a critical factor for the Internet of Things (IoT) owing to privacy concerns [1][2][3][4]. For instance, a smart health card built on an IoT platform may enhance the security and privacy of patient information. However, if it is hacked, security issues such as theft risk, loss, insider misuse, and unintended behavior arise. Knowledge-based authentication methods rely on users' memories, whereas token-based authentication methods utilize an external device [2,5]. For example, knowledge-based authentication methods use a personal identification number (PIN) and an identity (ID)/password, and token-based ones provide one-time passwords (OTPs) and short message services (SMSs) to the users. However, both approaches can be vulnerable to a brute-force dictionary attack; that is, they can be guessed, duplicated, lost, or stolen. In particular, knowledge-based authentication methods can be attacked by hackers who guess the user's family name, birthday, or anniversary, and token-based methods become critically risky when the external device is lost or stolen [6,7]. To solve these issues, researchers are investigating personal authentication approaches using biometric data. With a biometric authentication system, users do not have to remember complex passwords or hold tokens, but may access the system using unique features of their own bodies that are difficult to clone, lose, or steal [6][7][8][9][10][11][12][13]. However, some types of biometrics, such as fingerprints, irises, and faces, are still vulnerable to attack. Fingerprints can be imitated and duplicated with silicone [14,15]; iris features can be reproduced with contact lenses and printing [16]; the face can easily be fabricated with a photograph [17]. In addition, these biometric features have a critical flaw in that they cannot be remedied if they are damaged [6,13].
The electrocardiogram (ECG) is one such biometric. It is an electrical signal generated by the sinoatrial node in the heart to stimulate the cardiac muscle to contract and relax, and it consists of various peaks referred to as the P, Q, R, S, and T waves (see Figure 1). Compared with other biometrics, ECG signals cannot easily be reproduced and have higher reliability, entropy, and randomness [18][19][20][21][22]. Additionally, ECG signals are affected by various other factors, including age, gender, physical condition, structure, and obesity [23][24][25][26]. To extract ECG signal features for the implementation of an authentication system, data-driven convolutional neural network (CNN) models have been designed [27][28][29], or feature-engineering approaches have been applied based on predefined fixed models [30][31][32][33]. The features extracted from a single-lead ECG signal have been proven to provide reliable authentication results [34][35][36]. It has also been reported that the long-term stability of the features is guaranteed for several days or even years [34,37,38]. This study likewise explored the long-term stability of ECG features for a personal authentication system using ECG signals recorded over six days. Additionally, this study identified the ECG features considered the most significant for distinguishing one user among others. It has been found that a biometric authentication task that uses the significant features, also known as "costly features", performs better than one that uses all the features extracted from the biometric signals without taking their significance for the task into consideration [39][40][41][42]. However, authentication with ECG has some limitations: vigorous activities such as exercise may change the ECG features [43]; drugs such as caffeine may change the ECG features [19]; emotional changes may cause difficulties in ECG-based authentication [44]; and the heart rate may change every day [45]. In this paper, experimental data were collected to design models that are robust to such day-to-day ECG variation. Unlike conventional datasets, the data used in this paper comprise each subject's cardiac recordings over a continuous six-day period; a model optimized on such data has the advantage of being relatively robust to daily varying ECG signals. Among the algorithms used to search for the costly features, we mainly applied the reinforcement learning (RL) algorithm [46] to ECG-based personal authentication. Recently, RL has achieved considerable performance improvements with the help of deep learning models, yielding state-of-the-art results in various areas, such as healthcare, autonomous driving, and resource management [47][48][49][50][51][52]. In addition, many studies have applied RL to feature selection owing to its promising optimization performance [53][54][55]. The deep neural networks in RL, commonly referred to as deep Q-learning [56], were constructed manually in previous studies [48,51,57]. However, the performance of deep neural networks varies depending on their architectures; moreover, they were developed mostly based on the developers' experience and intuition and may therefore have suboptimal architectures. Thus, the networks in RL are here automatically optimized using the Bayesian optimization hyperband (BOHB) method [58].
In this study, BOHB optimized the number of layers of the neural networks, the number of nodes in each layer, the learning rate, and the optimizer in the RL algorithm. As a benchmark, the conventional costly feature selection algorithms ReliefF [59,60] and information gain (IG) [61] were compared. The former is a Manhattan-distance-based feature selection algorithm that selects significant features by calculating the sum of the distances among the instances of the features. The latter is an entropy-based feature selection algorithm that computes the entropy of each feature and determines the significant features based on the calculated entropy.

This study is structured as follows. Section 2 elaborates on the RL deep Q-network (DQN) and BOHB algorithms for optimal feature selection. Section 3 describes the experimental methods, preprocessing, and feature extraction. Experiments conducted to demonstrate the effectiveness of optimizing the DQN as a model-independent classifier via BOHB are described in Section 3.3, and experiments using different models, for comparison of the optimized RL model with other costly feature selection algorithms, are described in Section 3.4. In Section 4, the authentication results (using RL and BOHB) are provided based on benchmark tests against the conventional methods for both experiments.

Costly Features in the IoT Environment
In an IoT environment, limited resources such as memory, computation, and power have always been issues [62][63][64][65][66][67]. In particular, the classification problem for a personal authentication system on wearable devices is also limited by these issues; this is referred to as classification with costly features (CwCF). Previously, an RL algorithm was designed to solve the CwCF problem by minimizing the expected classification error together with the incurred feature costs [46].

Deep Q-Network
RL is a machine learning approach in which an agent finds the optimal action and policy based on rewards from the environment. RL is formalized by Markov decision processes (MDPs) [68]. The elements of an MDP are a state s, an action a, a reward r, and a discount factor γ. Specifically, s denotes the current state, and a is the action taken in s. In turn, r is the reward obtained from the environment when the agent takes an action, and γ ∈ [0, 1] expresses the degree of confidence in future rewards. Q-learning tries to identify the optimal policy of the MDP by updating the Q-function [69]. At the beginning of each episode, an agent moves from the current state s in the direction defined by the current action a. The agent receives a reward r from the environment, yields a Q-value for the next action a′, and obtains the maximum Q-value over actions in the next state s′. Subsequently, the Q-value is updated using the maximum Q-value and the learning rate α according to Equation (1):

Q(s, a) ← Q(s, a) + α [r + γ max_{a′} Q(s′, a′) − Q(s, a)].   (1)

The rewards are obtained at t + 1 based on the current state and environment. The DQN applies deep neural networks to Q-learning to approximate the Q-value in more complicated environments than that of conventional Q-learning [56,70]. The loss function of the model is the mean-squared error (MSE) L(θ) in Equation (2):

L(θ) = E[(r + γ max_{a′} Q(s′, a′; θ⁻) − Q(s, a; θ))²],   (2)

where θ⁻ are the parameters of the target network, which are kept fixed. The target network is updated after every predefined number of epochs.
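The tabular version of the Equation (1) update can be written in a few lines; the state/action sizes and hyperparameter values below are toy choices, and a DQN replaces the table Q with a network trained on the Equation (2) loss against a frozen target network.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Tabular Q-learning update, Equation (1):
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((5, 3))                      # 5 states, 3 actions (toy sizes)
Q = q_update(Q, s=0, a=1, r=-1.0, s_next=2)
print(Q[0, 1])                            # -0.1
```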
In addition, the DQN utilizes the experience replay method, wherein samples comprising the tuple (s_t, a_t, r_{t+1}, s_{t+1}, a_{t+1}) are stored in memory and a specific number of samples are randomly chosen to train the networks. This mitigates the dependence between consecutive samples and avoids unnecessary feedback loops.

Costly Feature Selection Using RL
The costly feature selection problem is described as follows. The variable (x, y) ∈ D denotes one of the samples from the data distribution D, where the vector x contains n input features f_i ∈ F = {f_1, …, f_n} and y is its class label. In one episode, the environment randomly selects one data sample from D, and the agent sequentially selects the features and classes with the highest Q-value [46]. The environment is represented by a partially observable MDP (POMDP) [71], which, unlike an MDP, provides the agent with only limited information about the environment. A state s = (x, y, F̄) ∈ S consists of a sample (x, y) and the set of features F̄ already selected by the agent, where S is the state space. An action a ∈ A, with A = A_c ∪ A_f, is either a classification action (a ∈ A_c) or an action that selects a feature from the feature set (a ∈ A_f). The episode ends when the agent selects a classification action from A_c; it then receives a reward of 0 if the sample is correctly classified and −1 if it is not. When the agent selects a feature-selecting action from A_f, it receives a reward of −λc(f_i), where c(f_i) is the cost of f_i. The reward function r : S × A → R can thus be written as

r(s, a) = −λc(f_i) if a ∈ A_f selects feature f_i; 0 if a ∈ A_c and the classification is correct; −1 if a ∈ A_c and the classification is incorrect.

The value of λ provides a trade-off between precision and average cost for this RL model: as λ increases, the cost is reduced and the episodes become shorter. The transition function t : S × A → S ∪ {T} is defined by

t(s, a) = (x, y, F̄ ∪ {f_i}) if a ∈ A_f selects feature f_i, and t(s, a) = T if a ∈ A_c,

where T is the terminal state. When the agent selects a feature as an action, it adds the currently selected feature to F̄; if the agent selects an action to derive the classification result, the episode ends. In this paper, we designed a feature selection model using the DQN algorithm, one of the most promising RL models. If only features are placed in the action space of the DQN, the model acts as an optimizer [72]; by including both features and subject numbers as actions, the model can act as both feature selector and classifier, i.e., as a pure RL model [46]. The procedure of this algorithm is shown in Algorithm 1.

Algorithm 1 Procedure of the DQN Optimizer and Classifier
1: Initialize the replay memory
2: Initialize the action-value function Q with random weights
3: for episode = 1 to M do
4:   for t = 1 to T do
5:     With probability epsilon, select a random action
6:     if the action is a feature:
7:       Execute the action in the emulator and observe the reward
8:       Set the state and preprocess the policy
9:       Store the transition in the replay memory
10:      Perform a gradient descent step
11:    if the action is a subject number:
12:      Execute the action in the emulator and observe the reward
13:      Set the state and preprocess the policy
14:      Store the transition in the replay memory
15:   end for
16: end for
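A minimal sketch of the CwCF reward and termination logic described above follows; the action encoding and the uniform feature costs are illustrative assumptions, not the paper's implementation.

```python
def cwcf_reward(action, lam, costs, true_label):
    """Reward of the CwCF environment: -lambda * c(f_i) for selecting
    feature f_i; 0 / -1 for a correct / incorrect classification action,
    after which the episode terminates."""
    kind, value = action                     # ('feature', i) or ('classify', label)
    if kind == 'feature':
        return -lam * costs[value], False    # episode continues
    return (0.0 if value == true_label else -1.0), True   # terminal state T

r, done = cwcf_reward(('feature', 3), lam=0.01, costs=[1.0] * 31, true_label=7)
print(r, done)   # -0.01 False
r, done = cwcf_reward(('classify', 7), lam=0.01, costs=[1.0] * 31, true_label=7)
print(r, done)   # 0.0 True
```

Larger λ makes each additional feature more expensive, which is exactly the precision-versus-cost trade-off noted above.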
Hyperparameter Optimization
The performance of machine learning algorithms depends on internal hyperparameter settings. A machine learning algorithm can be represented as a function g : X → R of its hyperparameters x ∈ X. The hyperparameter optimization (HPO) task aims to identify the optimal hyperparameters x* ∈ argmin_{x∈X} g(x). However, most machine learning algorithms cannot observe g(x) directly owing to randomness and uncertainty and must, therefore, assume that it is observable only through noisy observations y(x) = g(x) + ε.

Bayesian Optimization
In each iteration i, Bayesian optimization (BO) builds a probabilistic model p(g|D) of the objective function g using a Gaussian process, based on the already observed dataset D = {(x_0, y_0), …, (x_{i−1}, y_{i−1})} [58,73,75]. BO applies an acquisition function a : X → R based on the current model p(g|D), and the model considers a trade-off between exploration and exploitation; iterations are conducted in the following three steps: (1) select the observation at which the acquisition function is maximal, x_select = argmax_{x∈X} a(x); (2) evaluate the objective function, y_select = g(x_select) + ε; (3) augment the dataset with the selected observation, D = D ∪ {(x_select, y_select)}. During this process, the model tries to identify the best observation x_best = argmin_{x∈D} g(x).

Hyperband
Hyperband is a resource-allocation method executed in a pure-exploration, adaptive manner and constitutes a configuration-evaluation approach to hyperparameter optimization [58,76]. This method uses a principled early-stopping strategy to allocate resources; the strategy aims to quickly identify superior hyperparameters by examining a larger number of hyperparameter configurations instead of uniformly training all configurations.

BOHB Hyperparameter Optimization
BOHB [58] is an HPO method that combines BO and hyperband (HB). The BO component of BOHB uses a tree-structured Parzen estimator (TPE) [77], which models a density function with a kernel density estimator. Algorithm 2 displays the procedure of the BOHB algorithm, in which configuration selection via the BO component and budget allocation via the HB strategy are conducted jointly. Although the algorithm follows the budget-selection approach of HB, it guides the search by replacing random sampling with a BO component. BOHB often finds a good solution much faster than BO and converges to the best solution much faster than hyperband. In this study, this hyperparameter optimization method was applied to determine the number of hidden layers, the learning rate, and the optimizer of the DQN. The entire procedure is shown in Figure 2.

Algorithm 2 Procedure of the BOHB Algorithm
1: for i = 0 to s do
2:   Select hyperparameter configuration A_i
3:   Get the loss L using configuration A_i
4:   A ← min(L(A), L(A_i))
5:   for t = 1 to T do
6:     Calculate the probability model p(g|D) using a Gaussian process
7:     Select the observation x_select = argmax_{x∈X} a(x)
8:     Evaluate the objective function y_select = g(x_select) + ε
9:     Add to the dataset: D = D ∪ {(x_select, y_select)}
10:    Update the best observation x_best = argmin_{x∈D} g(x)
11:  end for
12: end for

The state s consists of tuples (x̄, m), where x̄ is the masked version of the original input x defined by the mask vector m; the latter has entries in {0, 1} and indicates the indices of the selected features. The agent selects Q_class or Q_feature corresponding to the current state; previously selected features cannot be chosen again owing to the mask vector m.
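To illustrate the budget logic underlying the hyperband component, the sketch below runs one bracket of successive halving with the budgets used in this study (minimum 1, maximum 9). It is a simplified stand-in, not the actual BOHB implementation: BOHB would replace the random sampling of configurations with suggestions from its TPE model, and the toy objective is purely hypothetical.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, max_budget=9, eta=3):
    """One Hyperband-style bracket: evaluate all configs on a small budget,
    keep the best 1/eta, and re-evaluate survivors on eta-times the budget."""
    budget = min_budget
    while budget <= max_budget and len(configs) > 1:
        losses = sorted((evaluate(c, budget), c) for c in configs)
        configs = [c for _, c in losses[: max(1, len(losses) // eta)]]
        budget *= eta
    return configs[0]

def evaluate(config, budget):
    """Toy objective: lower (lr - 0.1)^2 is better; noisier at small budgets."""
    return (config['lr'] - 0.1) ** 2 + random.gauss(0, 0.01 / budget)

configs = [{'lr': random.uniform(1e-4, 1.0)} for _ in range(9)]
print(successive_halving(configs, evaluate))
```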
ECG Measurement Experiments
An experiment was conducted to generate a dataset for training and evaluating the proposed model. To record the ECG signals from the subjects, a commercially available real-time recording system (MP36, Biopac Systems, Goleta, CA, USA) was used at a sampling rate of 1000 Hz. Eleven subjects were invited, and their ECG signals were recorded for 10 min per day for six days, at random times between 10:00 a.m. and 4:00 p.m. The subjects were seated in a comfortable chair in an enclosed space and kept in a relaxed state. During the experiment, ECG signals were recorded from the left wrist with reference to the right wrist and with a ground electrode on the ankle, a configuration known as the driven right leg [30,78]. A software bandpass finite impulse response (FIR) filter with a passband between 1 Hz and 35 Hz was used to minimize ambient noise components [79][80][81].

Feature Extraction
The features of the ECG signals were extracted using the information of the P, Q, R, S, and T peaks. For automatic peak extraction, the Pan-Tompkins algorithm [82] was applied to the ECG signals. The features were defined based on the amplitudes, intervals, slopes, and angles of the peaks; in total, 31 features were derived from combinations of the peak points [30][31][32][33]. Figure 1 displays a typical ECG pattern with the five peaks; the extracted features are listed in Table 1. The amplitude features were extracted as ratios among the peaks.

Table 1. The 31 extracted ECG features.
Amplitude (10): R-P, R-S, R-T, P-S, P-T, S-T, R-Q, Q-T, Q-S, P-Q
Interval (12): R-P, R-Q, R-S, R-T, P-Q, P-S, P-T, Q-S, Q-T, S-T, R-R, R-T
Slope (6): P-R, R-S, S-T, Q-R, P-Q, Q-S
Angle (3): Q, R, S
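A sketch of how the four feature types in Table 1 can be computed from detected peak indices follows. The feature names, formulas, and the placeholder waveform are illustrative only; the paper's exact definitions may differ.

```python
import numpy as np

def beat_features(t, v, peaks):
    """Illustrative versions of the Table 1 feature types for one beat.
    t (s) and v (mV) are sample arrays; `peaks` maps 'P','Q','R','S','T'
    to sample indices produced by a detector such as Pan-Tompkins."""
    amp = {k: v[i] for k, i in peaks.items()}
    tim = {k: t[i] for k, i in peaks.items()}
    feats = {
        'RP_amplitude': amp['R'] / amp['P'],                        # amplitude ratio
        'PQ_interval': tim['Q'] - tim['P'],                         # time interval
        'RS_slope': (amp['S'] - amp['R']) / (tim['S'] - tim['R']),  # slope
    }
    # angle-type feature at R: angle between the Q->R and R->S segments
    qr = np.array([tim['R'] - tim['Q'], amp['R'] - amp['Q']])
    rs = np.array([tim['S'] - tim['R'], amp['S'] - amp['R']])
    cosang = qr @ rs / (np.linalg.norm(qr) * np.linalg.norm(rs))
    feats['R_angle'] = float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
    return feats

t = np.linspace(0, 1, 1000)                 # 1 s at the paper's 1000 Hz rate
v = np.sin(2 * np.pi * 1.2 * t)             # placeholder waveform, not real ECG
print(beat_features(t, v, {'P': 100, 'Q': 180, 'R': 220, 'S': 260, 'T': 400}))
```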
Evaluation of the BOHB-Optimized DQN Authentication Algorithm
Experiments were conducted to investigate whether BOHB optimization of the DQN could improve authentication performance; they evaluated the standalone performance of the DQN model, with BOHB used to optimize the DQN for the RL-based costly feature selection algorithm. In this experiment, the DQN hyperparameters to be optimized included the number of layers, the number of nodes in each layer, the learning rate, the optimizer, and the stochastic gradient descent (SGD) momentum, as summarized in Table 2. The minimum budget of BOHB was set to one and the maximum budget to nine. Table 3 shows the number of beats contributed by each subject for training and evaluating the proposed model. During the training process, the synthetic minority oversampling technique (SMOTE) [83], an oversampling method for data augmentation, was applied to improve performance [45]. SMOTE generates new samples using the distances between samples of the same group found with the K-nearest-neighbor (KNN) algorithm [84]. The dataset recorded from the 1st to the 5th day was used for training, and the 6th-day recordings were used for testing. Additionally, five-fold cross-validation was used to evaluate the generalization of the trained DQN model.

Evaluation of the Costly Feature Selection Algorithms
This experiment was designed to evaluate the costly feature selection performance of the RL model. The proposed RL-based costly feature selection algorithm was compared with the conventional feature selection methods ReliefF [59,60] and IG [61]. Here, the RL model (DQN) was utilized only for the selection of costly features; the selected features were then fed into conventional classifiers for evaluation. Furthermore, effective classifiers for the authentication problem were evaluated using support vector machines (SVMs) [85] and random forest (RF) [86]; the SVM and RF were chosen based on their promising performance in various machine learning problems, such as feature-based classification [87,88], image classification [89,90], and anomaly detection [91,92]. To evaluate the SVM and RF algorithms, personal authentication on the 6th-day data was conducted using models trained on sequentially cumulative datasets from days 1 to 5 of the 11 subjects' data: the data from the 1st to the 5th day served as the training and validation sets, and the models' performance was tested on the 6th-day ECG signals.

Figure 4 shows the testing accuracy of the ECG-based authentication task using the optimized and non-optimized DQN models. Feature selection and authentication were performed simultaneously by the DQN models, and the training dataset was incrementally extended with each following day's data. The results are the authentication results averaged across all subjects. Overall, the average accuracy of the non-optimized RL method was 95.3%, while that of the optimized method was 97.4%. Particularly noteworthy is the significant improvement of the F1-score using the optimized DQN (95.2%), which was 10.9% higher than that of the non-optimized DQN (84.2%).

Results of the Costly Feature Selection Algorithms
Figures 5 and 6 depict the classification results, including the accuracy and F1-score performance indices, using the SVM and RF, respectively; the indices are averages over 10 simulation repetitions on the test dataset. In the figures, four costly feature selection algorithms are compared: the DQN with BOHB optimization (optimized RL), the DQN without any optimization (RL), and the ReliefF and IG feature selection algorithms. For the optimized DQN, the model with the highest validation accuracy was chosen, and the feature selection was then conducted. The x-axis of the subplots in Figures 5 and 6 displays the average number of features selected for the model's final decision, while the y-axis displays the accuracy and F1-score. In Figure 5, using the SVM classifier, the optimized DQN algorithm outperformed the other feature selection algorithms with accuracies of 96.5%, 97.2%, 98.1%, 98.3%, and 98.5% and F1-scores of 75%, 75.9%, 74.8%, 84.6%, and 91.1%; the numbers of selected costly features were approximately 3.9, 3.7, 2.9, 4.5, and 5.2. (Panels (a,b): day 1; (c,d): days 1-2; (e,f): days 1-3; (g,h): days 1-4; (i,j): days 1-5.) In Figure 6, with the RF as the classifier and the optimized DQN as the feature selection algorithm, the accuracy and F1-score are higher when using the 1st-3rd day training datasets; although the accuracy and F1-score of ReliefF (using the 1st-5th day training dataset) were higher than those of the optimized DQN model, ReliefF required more features: 4.3, 5.2, and 6.2 in the case of the optimized DQN versus 6.5, 8.2, and 7.8 in the case of ReliefF.
As shown in Figures 5 and 6, the optimized reinforcement learning method proposed in this paper achieved higher accuracy and F1-scores than the other methods when using the same number of features. This results from the model selecting the most effective features from the possible combinations of ECG features, demonstrating that model optimization through reinforcement learning was effective in improving the authentication task.

The equal error rate (EER) is determined as the point at which the false acceptance rate (FAR) and the false rejection rate (FRR) are equal [93]. The FAR and FRR are calculated using Equation (7),

FAR = FP / (FP + TN),   FRR = FN / (FN + TP),   (7)

where FP, TN, FN, and TP denote false positive, true negative, false negative, and true positive outcomes, respectively. Figure 7 illustrates the EER results for all combinations of the costly feature selection algorithms and classifiers. The x-axis of the subplots in Figure 7 displays the average number of features selected for the model's final decision, while the y-axis displays the EER value. The best performance with the fewest features and the lowest EER was obtained using the optimized DQN with the SVM and the training dataset recorded from the 1st to the 3rd day, i.e., approximately three features and an EER of 4.7%. Although the lowest overall EER was obtained with ReliefF and RF using all five-day training datasets, 1.5-times more features were used than by the second-best method (see Figure 7e).
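The EER can be computed from Equation (7) by sweeping the decision threshold and locating the crossing of FAR and FRR; the matcher-score distributions below are synthetic placeholders used only to exercise the computation.

```python
import numpy as np

def far_frr(scores, labels, threshold):
    """FAR = FP/(FP+TN), FRR = FN/(FN+TP) at a given accept threshold.
    labels: 1 for genuine attempts, 0 for impostors."""
    accept = scores >= threshold
    fp = np.sum(accept & (labels == 0)); tn = np.sum(~accept & (labels == 0))
    fn = np.sum(~accept & (labels == 1)); tp = np.sum(accept & (labels == 1))
    return fp / (fp + tn), fn / (fn + tp)

def eer(scores, labels):
    """Sweep thresholds and return the rate where FAR and FRR are closest."""
    rates = [far_frr(scores, labels, t) for t in np.unique(scores)]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.10, 500)      # synthetic matcher scores
impostor = rng.normal(0.4, 0.15, 500)
scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones(500, int), np.zeros(500, int)])
print(f"EER ~ {eer(scores, labels):.3f}")
```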
Discussion
In this study, a personal authentication task was conducted based on ECG signals recorded over 6 days. The ReliefF and information gain algorithms are representative conventional feature selection methods and are simpler than the optimized and pure RL methods. Although their accuracy improved incrementally, as shown in Figures 5 and 6, they are not reliable in terms of F1-score and equal error rate. The model optimized with the method proposed in this study yielded high performance compared with the other conventional feature selection approaches, demonstrating that the ECG signal is a feasible basis for implementing biometric authentication systems in daily life. The optimized RL using BOHB produced the most efficient and best performance in selecting costly features compared with the other conventional methods, as evidenced by the accuracy, F1-score, and EER outcomes of the authentication tasks. This also demonstrates the effectiveness of the model optimization process, commonly referred to as automatic machine learning (AutoML) [94], for feature selection tasks using RL algorithms. The results in Figures 5 and 6 show that the proposed costly feature selection method can yield different performance depending on the machine learning classifier: the proposed approach was clearly better served by the SVM model than by the RF model, which points to an optimal combination of the costly feature selection method and classifier for the ECG-based authentication task. The RL model performed best with both machine learning classifiers, implying that the costly feature selection method proposed in this study could be optimal for improving authentication performance. This is supported by various optimization studies based on RL algorithms, which typically perform better than other approaches [8,48,95]. It is noted that the suggested feature selection method outperformed the others (see Figures 5 and 6).

In particular, from the perspective of the F1-score, the non-optimized RL model yielded results similar to the other traditional feature selection methods, while the optimized BOHB-based model yielded improved results. This trend may indicate that the optimized DQN model can select significant features even with a small amount of data. In addition, the proposed model selected a relatively small number of features compared with the other methods. During the learning process of the RL algorithm, the received rewards decreased as the number of learning episodes increased; this automatically terminated the feature selection process at an appropriate level of training, whereas the traditional models require the feature selection to be stopped manually based on the experience of the model designer. This automatic stopping property of the RL algorithm provides an efficient way to save training time and resources. Figure 8 displays, for an increasing amount of training data, the number of subjects for which each feature was selected by the optimized RL. Note that some specific features, such as the "QS slope", cover more subjects than the others as the training data increase. The model that selected the QS slope using the training dataset recorded from the 1st to the 5th day produced the best accuracy, F1-score, and EER results. Additionally, the selection of the QR slope, RT amplitude, and PT amplitude gradually increased as more training data were included.

We recorded ECG data from the subjects for six days. Among them, the data from the 6th day were used as the test dataset and were evaluated with various feature selection and classification algorithms. Among the optimal feature selection algorithms, the BOHB-optimized DQN produced the most improved results in combination with the SVM model. When there were adequate training data, the accuracy converged to values greater than 90%. The results obtained with the optimal number of features suggest that the ECG-based personal authentication model can be implemented with a compact structure in edge devices such as smartwatches and mobile devices. To demonstrate this, the machine-learning-based algorithms (SVM, RF) proposed in this paper were run on a Raspberry Pi 4 board, confirming that roughly 1000 heartbeats can be processed in less than 10 seconds. This is possible because we optimized the costly features and classified them with relatively lightweight classifiers rather than optimizing complex neural networks. Thus, this study demonstrates that the personal authentication model can be utilized in various types of embedded equipment or low-power environments.

Conclusions
In this study, an RL-based personal authentication model and its optimization were proposed. They yielded significant performance enhancements compared with the conventional methods. Furthermore, they can be applied to various embedded systems with machine learning classifiers of relatively low resource consumption, such as the SVM and RF algorithms. In a follow-up study, the proposed model will be investigated further to identify the physiological meaning of ECG features such as the QS slope and R-R interval when used for personal authentication purposes.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data are not publicly available due to ethical issues.
Compressive Time-of-Flight 3D Imaging Using Block-Structured Sensing Matrices

Spatially and temporally highly resolved depth information enables numerous applications, including human-machine interaction in gaming and safety functions in the automotive industry. In this paper, we address this issue using time-of-flight (ToF) 3D cameras, which are compact devices providing highly resolved depth information. Practical restrictions often require reducing the amount of data to be read-out and transmitted. Using standard ToF cameras, this can only be achieved by lowering the spatial or temporal resolution. To overcome this limitation, we propose a compressive ToF camera design using block-structured sensing matrices that allows the amount of data to be reduced while keeping high spatial and temporal resolution. We propose the use of efficient reconstruction algorithms based on ℓ1-minimization and TV-regularization. The reconstruction methods are applied to data captured by a real ToF camera system and evaluated in terms of reconstruction quality and computational effort. For both ℓ1-minimization and TV-regularization, we use a local as well as a global reconstruction strategy. For all considered instances, global TV-regularization clearly performs best in terms of evaluation metrics including the PSNR.

Introduction
Time-of-flight (ToF) camera systems rely on the time of flight (or travel time) of an emitted and reflected light beam to create a depth image of a scenery. They offer many advantages over traditional systems (e.g., lidar), such as a compact design, registered depth and intensity images at a high frame rate, and low power consumption [14]. This makes them ideal for mobile usage, for example, on a mobile phone. On such devices, the computational resources for the required image reconstruction algorithms are limited. While several technologies allow 3D imaging, in this paper we focus on cameras that use a modulated light source to calculate the phase shift (encoding the depth image) between the emitted and received signal [18].

High spatial and temporal resolution requires a large amount of data to be read-out and transferred from ToF cameras. In order to determine a depth image, typically four different phase images per frame have to be collected with the ToF camera. However, even from four phase images, the depth image is unique only up to a certain maximal distance from the camera. To measure larger distances, one needs additional phase images that have to be read-out and transferred. Also in multi-camera systems, where the depth image is calculated outside the camera, the amount of data can be very high. If the data rate or the read-out process is a limiting factor, either the spatial or the temporal resolution has to be reduced in a conventional ToF camera.

Figure 1.1. Left: conventional read-out via the 16 ADCs used by our ToF camera. Right: proposed compressive read-out. In each row, instead of 16 sequential read-outs per ADC, m/K combinations of pixel values are read out. Compared to the standard camera design, the main difference is the multiplexer that combines the read-outs within a single block according to the used measurement matrix.

Proposed compressive ToF imaging approach
To address the issues mentioned above, in this article we propose a compressive ToF camera that allows a reduced amount of data to be read-out and transferred while preserving high spatial and temporal resolution.
Instead of individual pixels of the phase images, the compressive ToF camera reads out combinations of pixel values that are transferred to an external processor. As shown in Figure 1.1, the only additional element that has to be added to the existing camera design is one multiplexer per block, which combines the pixel read-outs according to the used measurement design. Note that multiplexers are standard elements in CMOS (complementary metal-oxide-semiconductor) sensors [37]. Therefore, the proposed camera design only requires a small modification of the existing camera design shown in Figure 1.1, left. As in the existing ToF camera, we only use combinations of elements in the same row, which yields block-structured sensing matrices. The actual manufacturing of such a compressive camera design is beyond the scope of this paper. Note that by using modern multi-layer circuit routing technology [23], one could also use combinations from different overlapping routing paths. We restrict ourselves to the block-structured sensing matrices because they are compatible with the existing ToF camera design.

In order to reconstruct the original phase and depth images, we use techniques from sparse recovery based on ℓ1-minimization and total variation (TV) regularization. For both methods we implemented a block-based local approach as well as a global approach. In all instances, global TV-regularization turns out to outperform the other tested reconstruction methods in terms of the RMAE (relative mean absolute error) and the PSNR (peak signal-to-noise ratio), as well as under visual inspection. For example, using a compression ratio of 4.7, global TV-regularization yields a PSNR of 31.5 for the recovered depth image of a typical scenery, as opposed to PSNRs of 26.4, 28.0, and 27.3 for block ℓ1-minimization, global ℓ1-minimization, and block TV-regularization, respectively. We attribute this to the following issue: depth and phase images will typically be piecewise homogeneous, and hence the depth images have sparse gradients, which is exactly the structure TV-regularization exploits. We point out that the proposed compressed sensing framework aims to accelerate the camera read-out and not to reduce the number of sensing pixels itself, which would be another interesting line of research.

Different compressive ToF camera designs have been proposed in [24,12,22,19]. The compressive designs in [24,19] use a spatial light modulator and multiple pulses, whereas [22] uses a coded aperture to gather multiple measurements. In [12], compressed sensing ToF imaging has been studied in the spatial and temporal domain. All these works use unstructured sensing matrices. In contrast, we use block-structured sensing matrices, which are easier to implement in existing ToF camera designs. Additionally, the reduction of read-outs allows high spatial and temporal resolution.

Some results of this paper have been presented at the International Conference on Sampling Theory and Applications (SampTA) 2017 in Tallinn [3]. The present paper extends the theoretical recovery results stated in [3] from the standard basis to a general sparsifying basis. Additionally, all numerical studies presented in this paper are new and extended significantly compared to [3]. In particular, the TV-regularization studies for the proposed ToF compressed sensing scheme are completely new.
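To make the block-structured read-out concrete, the sketch below builds a block-diagonal Rademacher sensing matrix and simulates a compressed read-out of one flattened pixel row. The dimensions are toy values; the actual block sizes follow the camera's row/ADC layout described above.

```python
import numpy as np

def block_rademacher(n, m, K, seed=1):
    """Block-diagonal sensing matrix M in R^{m x n}: K diagonal blocks, each an
    (m/K) x (n/K) Rademacher (+-1) matrix, mimicking per-block multiplexed
    read-outs of pixels in the same row. Assumes K divides both m and n."""
    rng = np.random.default_rng(seed)
    M = np.zeros((m, n))
    mb, nb = m // K, n // K
    for k in range(K):
        M[k*mb:(k+1)*mb, k*nb:(k+1)*nb] = rng.choice([-1.0, 1.0], size=(mb, nb))
    return M

n, m, K = 256, 64, 16                         # 4x compression, 16 blocks (toy)
M = block_rademacher(n, m, K)
p = np.random.default_rng(2).random(n)        # flattened phase-image row
y = M @ p                                     # compressed read-out, m << n
print(M.shape, y.shape)
```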
We thereby start with details on the classical (non-compressive) and the new (compressive) designs. Additionally, we prove that the used matrices fulfill the RIP under suitable conditions. Moreover, in Section 3 we also introduce the proposed block-based image reconstruction approach using ℓ1-minimization and total variation. In the special case of single blocks, we obtain the global counterparts. In Section 4 we give details on the numerical algorithms and present extensive studies of our two-step reconstruction approach of recovering the depth image from the compressed measurements. We compare the block-based as well as the global versions of ℓ1-minimization and TV-regularization. The results are evaluated in terms of RMAE and PSNR. Visually as well as in terms of these quality measures, global TV-regularization outperforms the other methods in all instances. 2 Basics of 3D imaging using ToF cameras A ToF camera measures the distance of a scene to the camera. By sending out a diffuse light pulse and measuring the reflected signal, the camera is able to record depth information of the entire scene at once. To acquire depth information, the emitted light is modulated and can be generated by an LED. The scene reflects the light, which is recorded by the camera as depicted in Figure 1.2. The emitted pulse can be modeled as a time-dependent function g(t) = C cos(ωt), where C is the amplitude, ω the modulation frequency (or carrier frequency), and t the time variable. The signal is reflected, and the camera receives, for any individual pixel i ∈ {1, . . . , n}, a phase- and amplitude-shifted signal of the form r_i(t) = A_i cos(ωt − ϕ_i) + B_i. Here ϕ_i is the phase shift depending on the distance d_i between the camera and the scene mapped at pixel i, A_i the amplitude depending on the reflectivity, and B_i an offset. The phase shift is related to the distance d_i via the relation d_i = ϕ_i c/(2ω). At each pixel of the ToF camera, the cross-correlation between the reference and the reflected signal is measured, where the cross-correlation between two signals f : R → R and g : R → R is given by c_{f,g}(s) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} f(t) g(t + s) dt. In our case, c_{f,g}(·) can be calculated analytically [1,29,18], which yields c_i(s) = K_i + (A_i C/2) cos(ωs + ϕ_i). Here K_i incorporates constants accounting for noise and the background image generated by ambient light. By sampling the cross-correlation function at the sampling points s ∈ {0, π/(2ω), π/ω, 3π/(2ω)} we get four so-called phase images p^(1), p^(2), p^(3), p^(4) ∈ R^n with p^(k)_i = c_i((k − 1)π/(2ω)). Here we have set ϕ := (ϕ_j)_{j=1}^n ∈ R^n, K := (K_j)_{j=1}^n and A := (A_j)_{j=1}^n ∈ R^n, and all operations are taken point-wise. Under the common assumption that K_i is independent of the pixel location, we can estimate the phase shifts ϕ by ϕ̂ = arg((p^(1) − p^(3)) + i(p^(4) − p^(2))). Here α = arg(z) ∈ [0, 2π) denotes the argument of the complex number z defined by z = r e^{iα}. In particular, the depth image is given by d̂ = ϕ̂ c/(2ω). Since the phase shifts are contained in [0, 2π), the maximal distance that can be recovered unambiguously is d_max = πc/ω. To overcome this ambiguity, several methods have been proposed in the literature (see, for example, [18,13]). One such approach consists in capturing two sets of phase images with different modulation frequencies ω_1 ≠ ω_2, and then comparing the two depth images. In this paper, we will not address the ambiguity problem further. The compressive ToF camera that we propose below can be extended to multiple modulation frequencies in a straightforward manner. Sparse recovery can be applied to any of the phase images, even if they are affected by phase wrapping. (A small numerical sketch of the phase-to-depth computation outlined here is given below.)
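To make the phase-to-depth step described above concrete, the following is a minimal NumPy sketch assuming the cosine correlation model of this section; the function name, array shapes, and the simulated single-pixel example (modulation frequency, distance, amplitude, offset) are our own illustrative choices, not values from the camera system used in the paper.

import numpy as np

C_LIGHT = 3.0e8  # speed of light in m/s

def depth_from_phase_images(p1, p2, p3, p4, omega):
    """Estimate phase, depth and amplitude from the four phase images.

    p1..p4 are the correlation samples at s = 0, pi/(2w), pi/w, 3pi/(2w).
    The constant offset K cancels in the differences p1 - p3 and p4 - p2.
    """
    re = p1 - p3                                  # proportional to cos(phi)
    im = p4 - p2                                  # proportional to sin(phi)
    phi = np.mod(np.arctan2(im, re), 2 * np.pi)   # phase shift in [0, 2*pi)
    depth = phi * C_LIGHT / (2.0 * omega)         # d = phi * c / (2*omega)
    amplitude = np.hypot(re, im)                  # proportional to the reflectivity amplitude
    return phi, depth, amplitude

if __name__ == "__main__":
    omega = 2 * np.pi * 20e6                      # 20 MHz modulation (illustrative)
    d_true, A, K = 2.0, 0.7, 0.4                  # single pixel at 2 m
    phi_true = 2 * omega * d_true / C_LIGHT
    s_points = (0, np.pi / (2 * omega), np.pi / omega, 3 * np.pi / (2 * omega))
    p1, p2, p3, p4 = (np.array([K + 0.5 * A * np.cos(omega * s + phi_true)]) for s in s_points)
    phi, depth, amp = depth_from_phase_images(p1, p2, p3, p4, omega)
    print(depth)                                  # approximately [2.0]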
Further research, however, is needed to thoroughly investigate such extensions, in particular, to investigate sparsity issues, accuracy and noise stability. Compressive ToF sensing and image reconstruction In this section we present the proposed compressive ToF 3D sensing design compatible with existing ToF cameras. Additionally, we describe an efficient block-wise reconstruction procedure based on sparse recovery. Compressive ToF sensing As mentioned in the introduction, in a conventional ToF camera, all pixel values of all phase images have to be read-out and large amounts of data have to be transferred. To reduce the amount of data, in this paper, we propose a compressive ToF camera, which reads out and transmits linear combinations instead of individual pixel values of the phase image. Our proposed compressive ToF camera design is based on the existing non-compressive ToF camera design, which should allow to engineer and build the new camera with low effort. The only difference between the two designs is in the way the pixels of the sensors are read-out. For the compressive ToF camera, we propose to read-out linear combinations of neighboring pixels. The data collected by the compressive ToF camera can be written in the form Here M ∈ R m×n is the measurement matrix, p (i) ∈ R n are the phase images and y (i) ∈ R m the read-out data with m n. To reconstruct the depth image from the compressed read-outs we propose the following two-step procedure: First, we estimate the differences p (1) − p (3) and p (4) − p (2) from using sparse recovery. In a second step we recover the depth image by applying (2.3) to the estimated differences. Any of the equations (3.1), (3.2) is an underdetermined system of the form y = Mx, for which in general no unique solution exists. To obtain solution uniqueness, the vector x ∈ R n needs to satisfy certain additional requirements. In recent years, sparsity turned out to be a powerful property for this purpose. Recall that x is called s-sparse, if it has at most s nonzero entries. Assuming sparsity, the vector x can, for example, be recovered by solving the 1 -minimization problem In order for (3.3) to uniquely recover x, the matrix M needs to fulfill certain properties. One sufficient condition is the restricted isometry property (RIP). The matrix M is said to satisfy the s-RIP with constant δ > 0, if holds for all s-sparse z ∈ R n . If the s-RIP constant is sufficiently small, then (3.3) uniquely recovers any sufficiently sparse vector (see, for example, [10,15,7]). Although some results for deterministic RIP matrices exist [20,6], matrices satisfying the RIP are commonly constructed in some random manner. Realizations of Gaussian or Rademacher random matrices are known to satisfy the RIP with high probability [15]. Rademacher random variables take the values -1 and 1 with equal probability. It has been shown [8] that if m ≥ Cδ −2 s log(n/s), then δ s ≤ δ with high probability for both types of matrices. Note that up to the logarithmic factor, this bound scales linearly in the sparsity level s. In this sense, the bound is optimal in the sparsity level, because m ≥ 2s is the minimal requirement for any measurement matrix to be able to recover all s-sparse vectors [15]. More generally, all matrices with independent sub-Gaussian entries satisfy the RIP with high probability. A sub-Gaussian random variable X is defined by the property with constants β, κ > 0. It is easy to show that Rademacher and Gaussian random variables are sub-Gaussian. 
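As a rough numerical illustration of the norm-preservation property that the RIP formalizes, the sketch below draws a scaled Rademacher matrix and evaluates the ratios ‖Mz‖²/‖z‖² over randomly drawn s-sparse vectors. This is only an empirical spot check over random supports (not a certificate that the RIP holds), and all sizes are our own illustrative choices, not parameters from the paper.

import numpy as np

rng = np.random.default_rng(0)

def rademacher_matrix(m, n):
    """m x n matrix with i.i.d. +/-1 entries, scaled by 1/sqrt(m)."""
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

def isometry_ratios(M, s, trials=2000):
    """Ratios ||M z||^2 / ||z||^2 for randomly drawn s-sparse vectors z."""
    m, n = M.shape
    ratios = np.empty(trials)
    for t in range(trials):
        support = rng.choice(n, size=s, replace=False)
        z = np.zeros(n)
        z[support] = rng.standard_normal(s)
        ratios[t] = np.linalg.norm(M @ z) ** 2 / np.linalg.norm(z) ** 2
    return ratios

if __name__ == "__main__":
    n, s = 256, 5
    for m in (32, 64, 128):
        r = isometry_ratios(rademacher_matrix(m, n), s)
        # A small s-RIP constant delta would force all such ratios into [1 - delta, 1 + delta].
        print(f"m = {m:3d}: ratios in [{r.min():.2f}, {r.max():.2f}]")

As expected, the spread of the ratios shrinks as the number of measurements m grows relative to the sparsity level s.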
Sub-Gaussian matrices M are also universal [15], which means that for any unitary matrix Ψ ∈ C^{n×n} the matrix MΨ also satisfies the RIP. Thus one can also recover signals that are not sparse in the standard basis but for which Ψ*x is sparse, where Ψ* is the conjugate transpose. This property is very useful in applications since many natural signals have sparse representations in certain bases different from the standard basis. In general, if the restricted isometry constant of MΨ is small, then we can recover the signal by solving (3.3) with MΨ instead of M and, in the noisy case, by solving min_x ‖Ψ*x‖_1 subject to ‖Mx − y‖_2 ≤ η (3.5). Here η equals the noise level of the measurements. Similar results hold if Ψ is a frame (see [9,17,27,34]). In practical applications, such unstructured matrices cannot always be used. Either there are restrictions on the matrix preventing us from using a random matrix with i.i.d. entries, or the storage space is limited, such that storing a full matrix would be too expensive. There are different methods for constructing structured compressed sensing matrices that satisfy the RIP. For example, such matrices can be constructed by random subsampling of an orthonormal matrix [15, Chapter 12], by deterministic convolution followed by random subsampling [30], or by a random convolution followed by deterministic subsampling [33]. In the next section we will examine the latter type, since it has beneficial properties for ToF imaging and the existing camera designs. More specifically, such measurement matrices can be constructed to only use entries from a small set, such as {−1, 0, 1}, can be implemented efficiently using Fourier transform techniques, and require little information to be stored. Compressive 3D Sensing Using Block Partial Circulant Matrices The hardware requirements in our case prevent us from using arbitrary matrices, since the analog-to-digital converters (ADCs) should only use the weights 0 and ±a for some fixed constant a ∈ R. Further, any individual ADC can only be wired with a limited number of pixels (compare Figure 1.1), which imposes a particular block structure on the measurement matrix. Thus, the measurement matrices that we use in our approach take the block-diagonal form M = diag(M_1, . . . , M_K) (3.6). Here, each sub-matrix M_k ∈ R^{m_k×n_k} operates on a certain subset Ω_k ⊆ {1, . . . , n} with n_k = |Ω_k| elements coming from a single row in the image. For simplicity we consider the case that n_k = n/K and m_k = m/K for each k. The particular measurements in each row block are constructed in a certain random manner satisfying the requirements above. A particularly useful class of such row-wise measurements in the compressive ToF camera can be modeled by partial circulant matrices. A circulant matrix C_v ∈ R^{n×n} associated with v = (v_1, . . . , v_n) ∈ R^n is defined by (C_v)_{i,j} := v_{j⊖i}, where j ⊖ i := (j − i) mod n is the cyclic subtraction. In particular, for all v, w ∈ R^n, the product C_v w can be evaluated efficiently via the fast Fourier transform. Definition 3.1. The partial circulant matrix associated with v ∈ R^n and Ω ⊆ {1, . . . , n} is defined by R_Ω C_v, where R_Ω restricts a vector to the entries indexed by Ω. Further, recall that a random vector v with values in {±1}^n is called a Rademacher vector if it has independent entries taking the values ±1 with equal probability. Partial circulant matrices satisfy the RIP. Such results were first obtained in [33] and later refined in [26, Theorem 1.1] using the theory of suprema of chaos processes. These results have been formulated for sparsity in the standard basis. For our purpose, we formulate such a result for general orthonormal bases. Theorem 3.2. Let v ∈ R^n be a Rademacher vector, let Ω ⊆ {1, . . . , n} consist of m indices selected independently and uniformly at random, and set M = (1/√m) R_Ω C_v. If, for some s ∈ N and δ ∈ (0, 1), we have m ≥ Cδ^{−2} µ^2 s (log s)^2 (log n)^2, then, with probability at least 1 − n^{−log(n) log^2(s)}, the matrix MΨ has s-RIP constant at most δ, where µ = max_{i,j} |⟨F_{j,·}, Ψ_{i,·}⟩|.
Here F is the discrete Fourier matrix and Ψ ∈ C n×n is any unitary matrix. Proof. For the case that Ψ is the identity matrix the result is derived in the original paper [26]. The generalization to arbitrary Ψ can be shown analogously to the original results. Such a proof is worked out in [2,4]. Theorem 3.2 shows that random partial circulant matrices yield stable recovery of sparse vectors using (3.3). Recall that the proposed compressive ToF camera read-out uses block diagonal measurement matrices of the form (3.6). Taking each block as a random partial circulant matrix and applying Theorem 3.2 yields the following result. Theorem 3.3. Let M ∈ R m×n be of the form (3.6), where each block on the diagonal is a partial circulant matrix M k = 1 √ m k R Ω k C v k associated with independent Rademacher vectors v k and subsets Ω k ⊆ {1, . . . , n/K} having m k = m/K elements that are selected independently and uniformly at random. If, for some s ∈ N and δ ∈ (0, 1), we have m ≥ KCδ −2 µ 2 s(log s) 2 (log(n/K)) 2 , then, with probability at least (1 − (n/K) − log(n/K) log 2 s ) K , M k Ψ has the s-RIP constant of at most δ for all k = 1, . . . , K. Here the constant µ is given by µ = max i,j | F j− , Ψ i− | and Ψ ∈ C (n/K)×(n/K) is any unitary matrix. Proof. As m/K ≥ Cδ −2 s(log s) 2 (log(n/K)) 2 , we can apply Theorem 3.2 with n and m replaced by n/K and m/K to each block. Thus the restricted isometry constant of each block is at most δ with probability at least 1 − (n/K) − log(n/K) log 2 s . As the generating Rademacher vectors for each block are independent, the s-RIP constants of all blocks are uniformly bounded by δ with probability at least (1 − (n/K) − log(n/K) log 2 s ) K . Theorem 3.3 yields stable recovery via (3.5) if the vector x is Ψ-block sparse, meaning that x = [x 1 , . . . , x K ] with Ψ * x k ∈ R n/K being s sparse for all k = 1, . . . , K. Note that a different compressive CMOS sensor design using partial circulant matrices has been proposed in [21]. Our design uses multiple ADCs (actually, the existing camera design suggests 16 ADCs) each of them operates on a small number of pixels. Thus our design uses parallel read-out which should yield to a faster imaging image acquisition speed. However, at the same time, the structured read-out matrices require an increased number of measurement. Investigations on the good choices of block sizes and comparison with other CMOS sensor design is interesting line of future research. Image reconstruction by 1 -minimization As presented in Section 3.1, the depth image is recovered from compressed read-outs by first estimating the differences p (1) − p (3) and p (4) − p (2) from (3.1) and (3.2), which are underdetermined systems of equations of the form y = Mx, and then applying (2.3) to the estimated differences. In this subsection we present how to efficiently solve these underdetermined systems using block-wise 1 -minimization. Suppose that the measurement matrix M ∈ R m×n has a block diagonal form (3.6), with diagonal blocks M k ∈ R (m/K)×(n/K) operating on a subset of pixels from individual lines. This type of measurement matrices reflects the current ToF camera architecture illustrated in Figure 1.1. Assuming the sparsifying basis Ψ to be block diagonal with diagonal blocks Ψ k ∈ R (n/K)×(n/K) , the full 1 -minimization problem (3.5) can be decomposed into K smaller 1 -minimization problems of the form Here y k ∈ R m/K are the data from a single block. If all M k satisfy the Ψ-RIP (i.e. 
M_kΨ_k satisfies the RIP), then (3.7) stably and robustly recovers any Ψ-block-sparse vector x = [x_1, . . . , x_K] with y = Mx. Theorem 3.3 shows that this, for example, is the case if the M_k are realized as random partial circulant matrices. By solving (3.7) we exploit sparsity within a single row block. While one can expect some row sparsity, (3.7) does not fully exploit the level of sparsity present in two-dimensional images. As shown in [36], using row sparsity yields artifacts in the reconstructed image. In this work, we therefore follow a different approach that is described next. For that purpose, we consider an additional partition of all pixels into blocks B_ℓ of b × b pixels, for ℓ = 1, . . . , n/b². We further assume that the sparsifying basis Ψ = Ψ_1 ⊕ · · · ⊕ Ψ_{n/b²} is block diagonal with Ψ_ℓ ∈ R^{b²×b²}. In such a situation, (3.3) can be decomposed into n/b² smaller ℓ1-minimization problems (3.10), one for each pixel block. The advantage of (3.10) over (3.7) is that Ψ can now be chosen as a two-dimensional wavelet or cosine transform instead of their one-dimensional analogues. Two-dimensional wavelet and cosine transforms are well known to provide sparse representations of images. In particular, the achievable sparsity relative to the number of measurements is better than in the one-dimensional case. A similar argument applies when TV is used as the sparsifying transform. On the other hand, (3.5) is still decomposed into smaller subproblems, which enables efficient numerical implementations. The optimization problems (3.10) can be solved in parallel, which further decreases computation times. Using a global sparsifying transformation might be better in terms of sparsity, but the resulting problem is less efficient to solve. For the actual numerical implementation we use an unconstrained ℓ1-Tikhonov formulation (3.11) with system matrix B̃ = BΨ_ℓ, the composition of the block measurement matrix with the sparsifying basis. The two problems (3.10), (3.11) are equivalent [16] in the sense that every solution of (3.10) is also a solution of (3.11) for a λ depending on ε and vice versa. For minimizing the unconstrained ℓ1-problem (3.11) we use the fast iterative soft-thresholding algorithm (FISTA) introduced in [5], a very efficient splitting algorithm for ℓ1-type minimization problems. For the numerical results, we also consider global ℓ1-minimization, where we apply (3.11) to the complete phase difference images. TV-regularization In many imaging applications, total variation (TV)-regularization [35] has been shown to outperform wavelet-based ℓ1-minimization for compressed sensing [28,32,25]. Therefore, as an alternative approach for reconstructing the blocks x of the phase difference images, we implemented the TV-regularization (3.12), which augments the quadratic data-fitting term with µ times the total variation ‖Dx‖. Here D : R^{b²} → (R^{b²})² denotes the discrete gradient operator and µ is the regularization parameter. For the numerical results we also considered global TV-regularization, where we use (3.12) without partitioning into individual blocks. In order to numerically solve (3.12), we use the primal-dual algorithm of Chambolle and Pock [11]. While the above theoretical results cannot be applied to (3.12), similar to other applications, we found TV to empirically yield better results than ℓ1-minimization for ToF imaging. Experimental Results In this section, we present some experimental results using raw data captured by an existing standard ToF camera. An example of such data (phase difference images of a books scene) is shown in Figure 3.1; the depth image computed from the difference images via (2.3) is shown in Figure 3.2. Simplified numerical sketches of the ℓ1 and TV reconstruction steps are given below.
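The following sketch illustrates, on a single synthetic block, the ingredients described above: a partial circulant measurement block generated from a Rademacher vector, and recovery by FISTA applied to an ℓ1-Tikhonov problem. An orthonormal one-dimensional DCT is used here as a stand-in sparsifying basis to keep the example self-contained (the paper uses a 2D Haar wavelet basis on image blocks), and the block length, sparsity level, regularization parameter, and iteration count are our own illustrative assumptions.

import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(1)

def partial_circulant(v, rows):
    """Partial circulant matrix R_Omega C_v with (C_v)_{i,j} = v[(j - i) mod n]."""
    n = len(v)
    C = np.stack([np.roll(v, i) for i in range(n)])
    return C[rows] / np.sqrt(len(rows))

def fista_l1(A, y, lam, steps=300):
    """FISTA for min_z 0.5 * ||A z - y||^2 + lam * ||z||_1."""
    L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the smooth part
    z = w = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(steps):
        g = A.T @ (A @ w - y)
        z_new = w - g / L
        z_new = np.sign(z_new) * np.maximum(np.abs(z_new) - lam / L, 0.0)  # soft threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        w = z_new + (t - 1.0) / t_new * (z_new - z)
        z, t = z_new, t_new
    return z

if __name__ == "__main__":
    n, m, s = 64, 32, 4                               # block length, measurements, sparsity
    Psi = idct(np.eye(n), norm="ortho", axis=0)       # columns are orthonormal DCT basis vectors
    coeffs = np.zeros(n)
    coeffs[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
    x = Psi @ coeffs                                  # block signal, sparse in the DCT domain

    v = rng.choice([-1.0, 1.0], size=n)               # Rademacher generator of the circulant block
    rows = rng.choice(n, size=m, replace=False)
    M = partial_circulant(v, rows)
    y = M @ x                                         # compressed block read-out

    c_hat = fista_l1(M @ Psi, y, lam=1e-3)            # recover the sparse coefficients
    x_hat = Psi @ c_hat
    print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))

In the paper, the corresponding per-block problems are solved for each row block of the phase difference images and the reconstructed blocks are reassembled into the full image.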
From the raw data of the standard ToF camera, we generate the compressive sensing measurements synthetically. For image reconstruction from compressed sensing data we use the block-wise and global 1 -minimization (3.11) as well as block-wise and global TV-regularization(3.12). Compressed ToF sensing For compressive ToF sensing, we initialized the measurement matrices M (the block circulant matrices; see Section 3.2) randomly with the entries of a random vector generating the partial circulant blocks taking values in {−1, 1, 0} with equal probability. The blocks have size m × 14 which implies that the compression ratio is 14/m. In the experiments we observed that usually not all blocks of our measurement matrix yield adequate reconstruction properties. This indicates that the size of the single blocks is not large enough to guarantee recovery in each block with high probability. Using bigger blocks would overcome this issue (according to Theorem 3.3), but this is not possible with our camera design. We therefore propose the following alternative strategy. We start with a set of several candidates for the blocks of the measurement matrix from which we choose the ones with the lowest reconstruction error on a set of test images. For the following results we have chosen the parameters in the FISTA for 1 -minimization and the TV algorithm by hand and did not perform extensive parameter optimization. On most images the parameter choice had a moderate influence on the reconstruction error. Thus we used λ = 0.05 and µ = 0.1 for all presented results. For the basis Ψ we use the 2D-Haar wavelet transform and, as described in Section 3.3, we executed the reconstruction block wise with a block size of 28×28. Additionally, for 1 -minimization as well as TV-regularization we performed global reconstruction corresponding to a single block. We use 300 iterations for the block wise 1 -minimization, 1000 for the global 1 -minimization, 100 for block wise TV, and 300 for global TV. To measure the error between the uncompressed depth image d ∈ R n 1 ×n 2 = R 168×224 and the reconstructed depth image d rec ∈ R 168×224 we use the relative mean absolute error (RMAE) and the peak signal to noise ratio (PSNR) defined by Numerical results For the first set of experiments, we consider phase difference images of a scenery with a couple of books and folders shown in Figure 3.1, which is less than 1.2 meters away from the camera (see Figure 3.2). The measured phase images have been calibrated by subtracting a reference image of constant distance (see [12] for additional calibration techniques). Figure 4.1 shows the reconstructed phase difference images from compressed sensing data using a compression ratio of 2 using block-wise and global 1 -minimization (3.11) and block-wise and global TV-regularization (3.12). The corresponding depth reconstructions are shown in Figure 4.2. To investigate the reconstruction quality when only using a very small amount of data, we generated a measurement matrix with m = 3. This results in a compression ratio 14/3 of around 4.7. In this example, we also increased the probability for zeros to 2/3 and the resulting matrix had around 57 % zeros. This means that the images can be captured very quickly since zero entries in the measurement matrix imply that the camera can skip the corresponding pixel. 
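Complementing the ℓ1 sketch above, the following is a simplified stand-in for the TV-regularized reconstruction and for the error measures used in this section. It minimizes a smoothed total-variation objective by plain gradient descent instead of the Chambolle-Pock primal-dual method used in the paper, and, since the display equations defining RMAE and PSNR are not reproduced in this text, the metric implementations below assume the usual definitions (mean absolute error normalized by the dynamic range, and 10 log10 of the squared peak value over the mean squared error); all of these choices are assumptions for illustration.

import numpy as np

def grad2d(u):
    """Forward-difference gradient of a 2D image (boundary handling kept simple)."""
    gx = np.diff(u, axis=0, append=u[-1:, :])
    gy = np.diff(u, axis=1, append=u[:, -1:])
    return gx, gy

def tv_reconstruct(M, y, shape, mu=0.1, eps=1e-3, steps=500):
    """Gradient descent on 0.5 * ||M x - y||^2 + mu * sum sqrt(|grad x|^2 + eps^2)."""
    step = 1.0 / (np.linalg.norm(M, 2) ** 2 + 8.0 * mu / eps)   # crude step-size bound
    x = np.zeros(M.shape[1])
    for _ in range(steps):
        g = M.T @ (M @ x - y)                       # gradient of the data-fitting term
        u = x.reshape(shape)
        gx, gy = grad2d(u)
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
        px, py = gx / norm, gy / norm
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        g -= mu * div.ravel()                       # gradient of the smoothed TV term
        x -= step * g
    return x.reshape(shape)

def rmae(d, d_rec):
    """Relative mean absolute error (assumed: MAE normalized by the dynamic range of d)."""
    return np.mean(np.abs(d - d_rec)) / (d.max() - d.min())

def psnr(d, d_rec):
    """Peak signal-to-noise ratio in dB (assumed: peak taken as the maximum of d)."""
    mse = np.mean((d - d_rec) ** 2)
    return 10.0 * np.log10(d.max() ** 2 / mse)

Here M denotes the (block or global) measurement matrix acting on the vectorized image and shape the corresponding image dimensions; the RMAE and PSNR helpers are then applied to the uncompressed 168 x 224 depth image and its reconstruction.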
The reconstructed phase difference images using block-wise and global ℓ1-minimization and block-wise and global TV-regularization (3.12) for this higher compression ratio are shown in the corresponding figures. Discussion Inspection of Figures 4.1 and 4.2 shows that for a compression ratio of 2 all reconstruction methods perform well. In contrast, for the compression ratio of 4.7, global TV-regularization clearly performs best. In a more quantitative way, this is demonstrated in Table 1, where the RMAE and PSNR are shown for the depth images reconstructed with the various reconstruction methods. From that table, we see that global TV-regularization in every case yields the smallest RMAE and largest PSNR. For example, for the compression ratio of 4.7, global TV-regularization yields a PSNR of 31.5, as opposed to a PSNR of 26.4, 28.0 and 27.3 for block-wise ℓ1-minimization, global ℓ1-minimization and block-wise TV-regularization, respectively. To investigate the dependence of the reconstruction error on the compression ratio more closely, we performed a series of compressed sensing measurements and reconstructions using a set of 24 test images for various compression ratios. The scenes consist of the books image and other similar scenes captured with the ToF camera in an office and an apartment. In Figure 4.5, we show the resulting average RMAE and PSNR using block-wise and global ℓ1-minimization and block-wise and global TV-regularization. For all test images we use the same FISTA and TV parameters as above. In every case, global TV-regularization clearly outperforms the other reconstruction methods. This is especially prominent in the case of a small number of measurements. Summarizing these findings, using tools such as dictionary learning or deep learning to find optimal sparsifying transforms is an interesting line of future research. The reconstruction times for all methods are comparable: on a laptop with an Intel i5-3427U CPU @ 1.80 GHz, performing 100 iterations took about 0.27 s for block-wise ℓ1, 0.42 s for global ℓ1, 0.25 s for block-wise TV, and 0.33 s for global TV. Finally, note that we also tested recovering the four phase images separately, which we found to yield results similar to those obtained for the difference images. Since the latter requires only half of the numerical computations, we suggest using sparse recovery of the difference images when computational resources are limited. Conclusion In this paper, we proposed a compressive ToF camera design that reduces the required amount of data to be read out and transferred. The proposed compressive ToF camera uses measurements within rows of the image, which yields a block-diagonal measurement matrix. Random partial circulant matrices used as diagonal blocks have been shown to be compatible with the current camera architecture. Their asymptotic recovery guarantees, however, do not directly apply for small block sizes. One could increase the block size, but this is not practical for the ToF camera at hand. To overcome this issue, we proposed and implemented a selection strategy to increase the compressed sensing ability of the random partial circulant matrices. Our experimental results clearly demonstrate that it is possible to recover the original images from small measurement blocks. For image reconstruction we used either block-wise or global reconstruction, both of which exploit the sparsity of the phase images in a two-dimensional wavelet basis (ℓ1-minimization) or the sparsity of the two-dimensional gradient (TV-regularization). We empirically found TV-regularization to outperform the wavelet-based ℓ1-reconstruction at a comparable numerical effort.
Future work will focus on further improving the image quality and increasing the reconstruction speed. Among other directions, we will investigate the use of machine learning in compressed sensing [31].
7,361.2
2017-10-28T00:00:00.000
[ "Computer Science" ]
Application of Cooperative Learning Type Group Investigation to Improve Physics learning Outcomes in Vocational Schools This study proposes to analyze the differences in learning outcomes of pupils, who learn cooperative learning models of group investigation and conventional learning. The research sample consisted of class Xa (30 people) who learned the cooperative learning model of the group investigation model and class Xb (30 people) taught using conventional learning models with the topic of learning is basic electronic (capacitors). The sampling technique uses random sampling with a non-equivalent control group design. Data collection techniques consisted of written tests, observation sheets and documentation. Meanwhile, research instrument used achievement test (objective criteria) in the form of multiple-choice questions that comprise as many as 25 of the given level (C1), understanding (C2) applying (C3) and analyzed. Students' test results were analyzed using descriptive statistical data analysis and inferential statistical data analysis. The analysis results obtained that there is a difference of learning outcomes between the experimental class and a control class with an average of 80.8 to 69.6 for the control class and experimental class. From the analysis, it was found that there was no significant difference between the experimental and control classes based on the N-gain value. So it can be concluded that the group investigation learning model does not have a significant effect on improving student learning outcomes. Introduction One of the obstacles faced by the world of knowledge is a subject of the weakness of the learning process. In the learning process, often, students are less encouraged to develop thinking skills [1] [2]. The learning method in the classroom accentuate the child's capability to memorize information [3], [4]. So when students graduate from school, they do not have high creativity and innovation. In reality, this happens to all subjects that use conventional teaching of the same item [5]. The low quality of education can be interpreted as a lack of success in the learning process [6]. Learning is the heart of progressive education. The success of the learning process is influenced by various aspects one of which is the ability of teachers to create a learning atmosphere that can arouse students' motivation to participate in this learning [7][8] [9]. Successful learning is characterized by an increase in student understanding for each subject. Vocational high school is an educational institution at higher secondary level education unit that prepares students to expertise in specific areas to enter the workforce and supplies to continue their education at a higher level [10]. Schools directly linked to teaching and learning activities are needed to increase the efficiency of the learning process, both from equipment, services and human resources, in order to enhance the quality of the learning process and to produce qualified, successful vocational graduates. The teacher's main role is to teach students, specifically to train students to actively learn so that their ability (cognitive, affective, and psychomotor) can optimally grow. With active learning by involvement in any learning activity, the students ' ability to do things that are good will eventually form a life skill as the provision of life [11]. For this to be realizing, the teacher should know how students learn to master different ways of teaching students. 
Group learning can be more productive with practical work activities. Based on the content contained in the vocational school curriculum, it is inevitable that laboratory activities play an essential role in learning physics, especially in the electronics field. Practical work is a proper way not only to activate students but more to help students develop their competencies. Ismayanti [12] notes that the standard of vocational graduates poses many problems. Many research results show that the work ethics of vocational students is still unsatisfactory. Many of them are less able to do work, work together, create, collaborate and argue. To develop students' learning skills, therefore, a suitable alternative model of learning was required. So the teacher offers model of cooperative learning type investigation model with the assumption that the model is effective in improving student learning outcomes. The type of cooperative learning model group investigation is a model of cooperative learning that based on a constructive illustration [13], [14]. A teacher as mediator or facilitator who helps the student in the learning process goes well [15]. This learning model retains the active engagement of students, where active involvement of the students can be seen from the first stage to the final learning level. Students involved in the preparation, both in deciding the subject and how to know it through on-the-spot analysis, presenting research finding [16], [17]. This type requires students to have the ability to think independently. Thus, learning is not only memorizing. Nevertheless, more than that, students truly understand and to able to apply the knowledge gained to solve problems, find something to wrestle with their ideas. The advantages of the Cooperative Learning Type Group Investigation model are as follows: 1. Personal in the learning process can work freely; give enthusiasm for initiative, creative and active; self-confidence can be further increasing; can learn to solve, handle a problem. 2. Socially improve learning to work together; learn to communicate both with friends and teachers; learning good communication systematically; learn to respect the opinions of others; increase participation in making a decision. 3. Academically trained students to take responsibility for the answers given; work systematically; develop and practice skills; plan and organize work; check the truth of the answers they make; always thinking about the method or strategy used so that a generally accepted conclusion is obtained. In the Group Investigation model of learning, social interaction is a critical factor for the development of a new mental scheme [14], [17]. In cooperative learning, the teacher plays a role in giving freedom to students to think analytically, critically, creative, reflective and productive [18], [19]. 
Implementation of the cooperative learning strategy of Group Investigation in learning is generally divided into 6 (six) steps: (1) Identifying topics and organizing students into groups (students review information sources, select topics, promote information collection for teachers), (2) grouping, (3) conducting research (students finding knowledge, analyzing data, and drawing conclusions), (4) preparation of the final report, (5) presentation of the final report (presentations are made in different forms for the whole class), (6) evaluation (teachers and students work together in the evaluation of learning, assessments aimed at assessing conceptual understanding and critical thinking skills) [20]- [22]. Group investigation is a general planning plan where students work in small groups using helpful questions, group discussions, and cooperative design and projects. In the investigation group, students not only work together but also help plan the topic to be studied [23]. This teaching pattern will create the desired learning because students as learning objects are involved in determining to learn. The character of this complex group Investigation learning is interesting to study and try to apply, especially in vocational schools for Visual-Audio majors. One of the topics taught in physics is capacitors. A capacitor is a device that stores charge [24]. Often, though not always, a capacitor consists of two electrical conductors (conductors) separated by a barrier (insulator) or dielectric capacitance. Based on observations made at the vocational school 2 Kendari, during the teaching and learning process, teachers only convey theories by linking daily life, but not with direct practice. Some researchers who study the cooperative learning model of investigation type include [25], [26], and [27]. Derlina & Hasanah [26] reported in their study that through the cooperative model of group type investment, the learning atmosphere was more effective. Cooperative relationships in study groups empower students to have the confidence to express their thoughts, connect with friends and exchange knowledge to solve learning problems. The same thing was founded in a study conducted by Santyasa 4620 Application of Cooperative Learning Type Group Investigation to Improve Physics learning Outcomes in Vocational Schools [27] that the group investment model has a greater impact than the clear learning model in achieving critical thinking skills, social attitudes, moral attitudes and student character in high school physics learning sound waves and light waves. While Arinda [25] reported that the practical work skills of students using the PhET-based community study cooperative learning model in a useful 80.01 percent and 77.3 percent respectively. It indicated that learning to use the Community Investigation (GI) model with Phet allows learners skills for scientific research in both categories [25]. Sari [28] in her study reported that the learning results of physics with an investigation of a cooperative learning type investigations are higher than conventional learning. The learning results in physics with the ability to think logically above average are more important than students with the ability to think logically below average. Based on the description that has been explained, the authors motivated to do a study with the title of the application of cooperative learning type group investigation to improve student learning outcomes. 
Therefore, the main objective of this study is (1) To find a description of student learning outcomes experimental class and control class before and after learning the subject matter of the capacitor; (2) To analyze the improvement in student learning outcomes between the group investigation class model and the traditional classroom model. Limitations of the problem in this study are to compare the results of student learning using cooperative learning models of group investigation type with conventional models in X class (consist of 60 students) of Vocational School 2 Kendari. Another one is the use of Cooperative learning model of group investigation type to improve learning outcomes on the topic of electronics basics (capacitor). Research Problems Based on the explanation that has been described in a research question, namely: 1. How is the description of the learning outcomes of the experimental class and control class students before and after learning on the topic of electronic basics (capacitor)? 2. Is there a significant difference between the pretest and posttest average scores of the experimental class with the pretest and posttest average scores of the control class students on the basic topics of electronics (capacitor)? 3. Is the N-gain average value of the experimental class significantly better than the average value of the N-gain control class? Research Type and Research Variable This study was included in the category of quasi-experimental. The variables in this study consisted of independent variables and dependent variables. The independent variable consists of classes taught by the Group Investigation cooperative learning model. In contrast, the dependent variable is physics learning outcomes. Population and Sample The population consisted of 2 parallel classes of class X a and X b , after being tested using the homogeneity test. So the researchers took two classes consisting of class X a students and class X b students of Vocational School 2 Kendari. In this connection, the sample set to represent the entire population is 30 students in class X a and 30 students in X b , so in total 60 students. Research Design The research design uses pretest-posttest control group design by dividing the class into two parts, namely the experimental class and the control class the experimental class was taught using a group investigation model. In contrast, the control class was taught using the traditional model (direct learning model)-research design in interpretation in Table 1. Research Procedure The steps used in the study are presented as follows. 1. Conducting preliminary observations at a vocational school two population Kendari to determine the amount that will be the object of research, the value of capacitor material physics learning outcomes and learning model applied to the teaching process. 2. Retrieving the value of test data before the primary topic of the capacitor for the homogeneity test. 3. Determine the research sample using SPSS software to determine homogeneous populations. 4. Prepare the grating test. 5. Determine test questions that will be used in the final test on experimental and control classes that qualify based on the results of expert validation. 6. Carry out pretest the control class and experimental class. 7. Conduct cooperative learning type Group Investigation (experimental class) and carry out conventional learning in class X (control class). 8. Perform posttest on an experimental class and control class. 9. Analyze the pretest and posttest. 10. 
Arrange of research results Data Collection Technique Data collection techniques in this study include: (1) written test that used to collect student learning outcomes data; (2) the observation that used to collect data that is held teacher learning process as the enforceability of teachers; and (3) documentation to obtain data on student learning outcomes into the study population before the learning process. Research Instrument The instrument that will be used to measure student physics learning outcomes in the form of achievement tests consists of 29 multiple choice objective test forms. The questionnaire instrument consisted of two choices. If the answer is correct, it will be one and false will be zero. The test given to the experimental group is the same as the test given to the control group. Learning outcomes measured are the cognitive aspect that is given (C1), understanding (C2) applying (C3) and analyze. Data Analysis Techniques Data were analyzed using descriptive and inferential analysis. Descriptive statistics are using to describe the values obtained by each class in the form of average values, maximum values, minimum values and standard deviations. Furthermore, to determine the value of student learning outcomes, the range of values used for objective tests in this study is 0-100 with the formula: Where, i S = the value obtained by the students to-i pi S = scores obtained by students to -i m S = The maximum score achieved Whereas inferential analysis uses a test, the data was analyzed using Microsoft Excel and SPSS. In contrast, the inferential analysis consists of normality test, homogeneity and hypothesis testing. Hypothesis testing uses independent sample T-Test which is making in the following form: Hypothesis 1 There is no significant difference between the average values of pretest student learning outcomes. Physics experimental class with the average value of pretest physics learning outcomes of control class students before learning on the subject matter of Capacitors. H o : µ 1 = µ 2 H 1 : µ 1 ≠ µ 2 Where, H o = There is no significant difference between the average value of pretest student learning outcomes in the experimental class with the average value of pretest learning outcomes of control class students. H 1 = There is a significant difference between the average value of pretest student learning outcomes in the experimental class with the average value of pretest student learning outcomes control class µ 1 = The average value of pretest learning outcomes in experimental class students µ 2 = The average value of pretest learning outcomes in control class students Hypothesis 2 The average value of the posttest results of students learning physics class experiment significantly better than the average value of the posttest learning outcomes physical control class in the subject matter of capacitors. 
H o : µ 1 ≤ µ 2\ H 1 : µ 1 > µ 2 Where, H o = The average value of students ' posttest results in the experimental class is less than or equal to the results of the students' posttest average control class H 1 = The average posttest value of student learning outcomes in the experimental class is greater than the average posttest student learning outcomes of the control class µ 1 = The average posttest score of students' learning outcomes in the experimental class µ 2 = The average posttest score of the learning outcomes of the control class students Hypothesis 3 The average of the N-gain value results experimental class students learn physics significantly better than the average of the N-gain value physics learning outcomes control class in the subject matter capacitor H o : µ 1 ≤ µ 2 H 1 : µ g1 > µ g2 Where, H o = The average of N-gain value student learning outcomes in the experimental class is less than or equal to the average N-gain value student learning outcomes in the control class H 1 = The average value of N-gain learning outcomes in experimental class students is higher than the average value 4622 Application of Cooperative Learning Type Group Investigation to Improve Physics learning Outcomes in Vocational Schools of N-gain learning outcomes of control class students µ g1 = The average value of N-gain learning outcomes of experimental class students µ g2 = The average value of N-gain learning outcomes of control class students Result of Descriptive Analysis The results of the descriptive analysis of pretest, posttest, and N-gain data for the experimental and control classes can be seen in Table 2. Based on Table 2, it can be seen that the average value of learning outcomes of the subject matter of the capacitors of the experimental class students and the control class students both have increased. At the pretest, the lowest and highest values were founded in the two classes, namely the experimental class and the control class. At the time of the posttest, the second-lowest value reached different class, and the highest value contained in the experimental class. This increase is due to the group investigation learning model emphasizing student choice and control rather than applying teaching techniques in the room. In this model, students are given full control and choice to plan what they want to learn and investigate; students are placed in small groups. Each group is giving a different task or project. Rutih et al., [29] states Group Investigation learning model can improve student learning outcomes better than conventional teaching methods. It is because the Group Investigation learning model can facilitate student learning in the topic chemicals in food so that learning outcomes are optimal. Fitriana [30] research results showed that students who were giving a cooperative learning model tipeGroup investigation had better academic achievement than students who were giving cooperative type STAD learning models. Nonetheless, the average difference experimental class against class average nearing significant control. The average value grade students experiment and control class, as shown in Table 2, are presented in the form of Figure 1. Figure 1 shows that the average value of the pretest experimental class students was 43.2, and an increase in posttest with the average value obtained was 80.8. Next, the two classes analyzed in the percentage for each category. Categorizing the pretest and posttest student experiment class and control class, see Table 3. 
Table 3 presents information that the pretest results of the experimental class students most of the students fall into the category of failing and lacking that both have the same percentage of 36.6% (11 students) and in the sufficient category of 11.11% (3 students) Good and very good category. There are no students in this category. Then the results of the posttest experimental class students explain most of the students included in the category of less with a percentage of 7.4% (2 students). Enough categories 18.51% (5 students), good category 51.85% (14 students), excellent25.92% (7 students) and the category of failure was 3.70% (1 student). For more details, we are presented in graphical form as contained in Figure 2. Figure 2, it can be seen that the average results of the control class pretest are in the low category with a percentage of 51.85% (14 Students). Then for the results of the posttest of students in the control class students who fall into the category of fail 3'7% (1 person). Next, less category, 55.56% (3 students), enough 33.33% (9 students), good category44, 44% (12 students) and very good category 18.58% (5 students). Furthermore, to see the significance of improving learning outcomes, the reference N-gain value is used. We are categorizing N-Gain subject matter of learning outcomes capacitor experimental and control classes presented in Table 3. Based on Table 4, it can be said that the N-Gain category of learning outcomes of students' main capacitors both in the experimental class and the control class. Mostly falls into the medium category of 13 people (43%) in the experimental class and five people (16%) in the control class. In the experimental class, no student has the N-Gain value in the low category, five students in the medium category, or approximately 16.66%, and 25 students in the high category, or 88.33%. At no control class students who have the N-Gain value in the low category, two students in a category are 13 0rang or approximately 16.66% of students and 15 students in the high category of 50%. Therefore, in general, it can be said that the description of the N-Gain learning outcomes of the experimental class students mostly falls into the medium category with a percentage of 44.44%. As for the N-Gain category, the learning outcomes of control class students were mostly in the low category, with a percentage of 59.26%.%. Result of Inferential Analysis a). Normality test The The results of the normality test results of student learning outcomes that are obtained using the Kolmogorov-Smirnov test provided in Table 5 via the cooperative learning model type investigation. Table 5 indicates that significant value to the experimental class (pretest and posttest) of 0,167 dan 0.210, which is higher than α = 0.05-thus concluded that the learning outcome data were studying through a normally distributed learning model. Similarly, in the control class (pretest and posttest), significant values are 0.116 and 0.127, which is higher than α = 0.05. So it can be concluded that both the learning of each class learning with cooperative learning model type normally distributed group investigation at α = 0.05 b). Homogeneity test Results of the homogeneity testing of data variance learning by group study learning models and traditional learning models written in Table 6 Table 6 shows that the significant value that is greater than α = 0.05. 
It can be concluded that the data pretest and 4624 Application of Cooperative Learning Type Group Investigation to Improve Physics learning Outcomes in Vocational Schools posttest students are learning through cooperative learning model investigation group is homogeneous c). Hypothesis Test Hypothesis test was analyzed using SPSS software with T-test analysis. The following are the results of the hypothesis test analysis: Hypothesis 1 The results of the T-test analysis of the pretest mean value of students learning through the cooperative group type investigative learning model and the conventional learning model are presented in Table 7. Table 7 shows that the value of ρ is higher, so it can be assumed that there is no substantial difference between the average value of student pretest learning by study group style cooperative learning models and traditional learning models Hypothesis 2 The different test results of the average posttest scores of students learned through the cooperative group type investigative learning model and the complete traditional model of learning are presented in Table 8. From Table 8 it can be shown that the value of ρ is lower than α = 0.05, so it can be inferred that there is a substantial difference between the average value of posttest students learning by study cooperative learning models type investigations and traditional learning models. Hypothesis 3 The test results of N-Gain students learning through cooperative learning model investigation and conventional learning is presented in Table. Table 9 shows that ρ is lower than α = 0.05, and it can be inferred that there is a substantial gap between the N-Gain scores of students studying by study forms of cooperative learning and traditional learning models. Discussion Based on the results of descriptive analysis (Table 5) on the pretest student learning outcomes with the average value obtained by the experimental class and control class students during the pretest, namely the experimental class was 43.2 and the control class 39.9. The low student learning outcomes before learning on the subject matter of capacitors caused by natural factors. Namely, the students who have tested have not gotten the subject matter about the capacitors in detail, so that their understanding and knowledge of the capacitor topics still very limited. The average yield of the pretest students showed that students' initial ability both to class experimental and control classes are statistically equal or homogeneous. So that both types can be treated differently, this result can also be proven inhomogeneity testing, which shows that both classes are comparable. The physics learning outcomes did not show a significant difference (Table 2). It is likely to be influenced by the ability of students during the learning process. It is consistent with studies performed by [31] that there is substantial difference in students' learning outcomes in learning model implementation form of investigation with student learning outcomes in applying the Non-Group study to the learning triangle. Furthermore, based on the results of the descriptive analysis it was seen that the increase in the average learning outcomes (N-gain) of the experimental class students was more significant than the average increase in the results (N-gain) of the control class students ( Table 4). The average N-gain of students in the experimental class increased by 59%, while the gain of students in the control class 47%. 
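As a reproducibility sketch of the score, normalized-gain, normality/homogeneity checks, and independent-samples t-test computations described above, the snippet below uses SciPy. The normalized gain is assumed to follow the usual Hake definition (posttest minus pretest divided by maximum score minus pretest), since the formula itself is not written out in this text, and the data are synthetic placeholders rather than the study's actual scores.

import numpy as np
from scipy import stats

def score(correct, max_score):
    """Assumed form of the scoring rule described above: S_i = S_pi / S_m * 100."""
    return 100.0 * np.asarray(correct, dtype=float) / max_score

def n_gain(pre, post, max_value=100.0):
    """Normalized gain (assumed Hake definition): (post - pre) / (max - pre)."""
    pre, post = np.asarray(pre, dtype=float), np.asarray(post, dtype=float)
    return (post - pre) / (max_value - pre)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder scores for two classes of 30 students each (not the study's data).
    pre_exp, post_exp = rng.normal(43, 8, 30), rng.normal(81, 7, 30)
    pre_ctrl, post_ctrl = rng.normal(40, 8, 30), rng.normal(70, 8, 30)

    # Normality (Kolmogorov-Smirnov on standardized scores) and homogeneity (Levene) checks.
    print(stats.kstest(stats.zscore(pre_exp), "norm"))
    print(stats.levene(pre_exp, pre_ctrl))

    # Hypothesis 1: two-sided test on the pretest means (H0: mu1 = mu2).
    print(stats.ttest_ind(pre_exp, pre_ctrl))
    # Hypotheses 2 and 3: one-sided tests on posttest means and N-gains (H1: mu1 > mu2).
    print(stats.ttest_ind(post_exp, post_ctrl, alternative="greater"))
    print(stats.ttest_ind(n_gain(pre_exp, post_exp), n_gain(pre_ctrl, post_ctrl),
                          alternative="greater"))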
These tests correspond to inferential testing (Table 6), where inferential test results are at a confidence point of 95%. Of course that the experimental class students' mean gain value did not indicate a substantial difference with the gain value of the control class student. This result proved by the value of ρ, which is smaller than the value of α = 0.05, which is 0.0097. A similar result was obtained from the results of the T-Test inferential analysis contained in SPSS 16.0 ( Table 7, Table 8 and Table 9), confirming that the pretest results of students both taught through cooperative learning models of the type of investigation group (experimental class). Moreover, those taught using conventional learning models (control group) there is a significant difference, with the value ρ = 0.739 higher than α = 0.05. In testing the student's, posttest results showed that statistically there is a significant difference between the average value of the posttest students are learning with cooperative learning model of investigation with the average value of the posttest students learn through conventional study. In other words, it can be said that the average posttest value of students learn with the cooperative learning model of the investigative group is higher than the average posttest value of student study with conventional learning models. This result can be proved by ρ value smaller than the value of α = 0.05 is 0.0097. This increase is due to Group investigation helps teachers to link material with students' real situations and encourage students to apply knowledge in their lives [32]. In line with this, the research conducted by the Wahyuni et al., [33] showed that the application of the investigation group was able to increase the interest and student learning outcomes and help students to apply their knowledge in life. Group investigation is one of the discovery-based cooperative learning methods where each group consists of 4-6 people with a heterogeneous group composition [34]. The steps of group investigation in learning are forming groups and selecting topics, planning the completion of topics, conducting investigations, preparing reports, presenting reports, and evaluating. This result supported by a study conducted by [33] say that a group investigation model based on guided inquiry experiments is effective in increasing students' cognitive learning activities and outcomes on the topic of reflection compared to simple experimental methods. Whereas [14] described that the group investigation model could improve cognitive, psychomotor and effective learning outcomes of students. Furthermore, [35] [21] related that cooperative learning type group investigation (GI) effectively applied to students' learning outcomes on the subject of optical devices that based on the results of the T-test obtained 1,965 results then t table = 1.671, with the provisions that if t table <t count i.e. (1,658 <1,965). Whereas, [36] reported the results of the study showed that the Investigation learning method could improve students' critical thinking skills by 25.44%. On the other hand, [37] states that there is a group investigation effect on students' conceptual understanding. While Astra et al. 
[38] report that the outcomes of their research on quality aspects of learning (student-student interaction, teacher-student interaction, and learning outcomes) reached about 75%; based on these findings, they conclude that implementing cooperative learning of the group investigation type can enhance learning processes and learning outcomes in physics. Therefore, the results of these studies indicate that the cooperative learning model of the group investigation type is well suited to a learning process in which students actively participate in all aspects and make decisions to set the direction of the goals they are working towards. Conclusions From the description given above, the conclusions of this research are: (1) the physics learning outcomes of students increased after instruction in both the experimental class and the control class, as indicated by the average learning outcomes, which increased from 42.4 to 80.9 for the experimental class and from 41.6 to 75.4 for the control class; (2) learning outcomes improved when the group investigation learning model was used; nevertheless, the increase is not significant between the two classes.
6,792.8
2020-10-01T00:00:00.000
[ "Physics", "Education" ]
Regulation of F-actin and Endoplasmic Reticulum Organization by the Trimeric G-protein Gi2 in Rat Hepatocytes The roles of the heterotrimeric G-protein, Gi2, in regulating the actin cytoskeleton and the activation of store-operated Ca2+ channels in rat hepatocytes were investigated. Gαi2 was principally associated with the plasma membrane and microsomes. Both F-actin and Gαi2 were detected by Western blot analysis in a purified plasma membrane preparation, the supernatant and pellet obtained by treating the plasma membrane with Triton X-100, and after depolymerization and repolymerization of F-actin in the Triton X-100-insoluble pellet. Actin in the Triton X-100-soluble supernatant co-precipitated with Gαi2 using either anti-Gαi2 or anti-actin antibodies. The principally cortical location of F-actin in hepatocytes cultured for 0.5 h changed to a pericanalicular distribution over a further 3.5 h. Some Gαi2 co-localized with F-actin at the plasma membrane. Pretreatment with pertussis toxin ADP-ribosylated 70–80% of Gαi2 in the plasma membrane and microsomes, prevented the redistribution of F-actin, caused redistribution and fragmentation of the endoplasmic reticulum, and inhibited vasopressin-stimulated Ca2+ inflow. It is concluded that (i) a significant portion of hepatocyte Gαi2 associates with, and regulates the arrangement of, cortical F-actin and the endoplasmic reticulum and (ii) either or both of these regulatory roles are likely to be required for normal vasopressin activation of Ca2+ inflow. In most nonexcitable and in some excitable cells, depletion of the inositol 1,4,5-trisphosphate (InsP 3 ) 1 -sensitive intracellular Ca 2ϩ stores in the endoplasmic reticulum (ER) activates a Ca 2ϩ influx pathway, a process known as store-operated Ca 2ϩ influx or capacitative Ca 2ϩ entry (1). Although it has been widely accepted that the key event initiating the opening of storeoperated Ca 2ϩ channels (SOCs) in the plasma membrane is the decrease in the concentration of Ca 2ϩ in the lumen of the ER, neither the mechanism that couples these two events nor the structures of SOCs are well understood (2). The results of recent experiments indicate that an essential prerequisite for the activation of SOCs is the close association between regions of the ER and the plasma membrane (3). It is proposed that this association is maintained by cytoskeletal elements such as the F-actin (4). There is evidence that, in some cell types, dismantling of the F-actin cytoskeleton (5), stabilization of the F-actin cytoskeleton (6), or inhibition of myosin light chain kinase (7) blocks Ca 2ϩ influx via SOCs while leaving Ca 2ϩ release from the intracellular stores unaffected (but see Ref. 8). Hepatocytes are polarized epithelial cells in which the Factin cytoskeleton is distributed around the cortex, with a high concentration at the pericanalicular (apical) region (9). This cortical F-actin may play a role in maintaining subregions of the ER close to the plasma membrane (4). Evidence, including results obtained with a microinjected inhibitory anti-G␣ i2 antibody, indicates that the activation of SOCs in hepatocytes requires the trimeric G-protein G i2 (10) and a brefeldin A-sensitive protein, possibly a monomeric G-protein (11). It has been reported that some G␣ i2 co-localizes with F-actin in hepatocytes in primary culture (12). 
Moreover, studies with other cell types have provided evidence for an association between Gαi2 and F-actin (13-15), and have suggested a potential role for Gαi2 in organization of the F-actin cytoskeleton (16-18). On the basis of these observations, we proposed that Gi2 may regulate the arrangement of the actin cytoskeleton and of the ER, through which the intimate plasma membrane-ER association is achieved, communication between different parts of the ER is maintained, and the activation of SOCs is allowed. The aims of the present experiments were to elucidate the role of Gi2 in the activation of SOCs in hepatocytes by investigating the intracellular distribution of Gαi2 and F-actin, the association of Gαi2 with F-actin, and the requirement for Gαi2-F-actin interaction in regulation of the arrangement of F-actin and in the activation of SOCs. The results indicate that a significant proportion of the cellular Gαi2 is associated with F-actin and regulates F-actin organization (especially the cortical actin layer near the canalicular membrane) and the arrangement of the ER. To our knowledge, this is the first demonstration of the role of Gαi2 in regulating the arrangement of F-actin in an epithelial cell type. Taken together with previous evidence that the normal function of Gαi2 is required for the activation of SOCs in rat hepatocytes (10), these observations suggest that Gαi2, either through regulation of cortical F-actin organization and/or arrangement of the ER, allows the normal activation of SOCs. The anti-Gαi antibody was obtained from the John Curtin School of Medical Research, Australian National University, Canberra, Australia. Although this antibody detects both Gαi1 and Gαi2, liver does not express detectable Gαi1 (19, 20), so that the G-protein detected by this antibody in the present experiments is Gαi2. Peptides KENLKDCGLF and QLNLKEYNLV, synthesized as described in Ref. 10, were provided by Dr. Bruce Kemp (St. Vincent's Institute of Medical Research, Victoria, Australia). Purified phosphoprotein phosphatases 1 and 2A were kind gifts from Dr. Alistair Sim (University of Newcastle, Australia). Pertussis toxin, affinity-purified rabbit polyclonal anti-actin antibody, goat anti-rabbit IgG conjugated to alkaline phosphatase, actin standard for Western blotting, protein A-Sepharose, Triton X-100, nitro blue tetrazolium, and bromochloroindolyl phosphate were from Sigma, and Texas Red-X phalloidin, 3,3′-dihexyloxacarbocyanine iodide (DiOC6(3)), fura-2, and goat anti-rabbit IgG conjugated to Alexa™ 488 were from Molecular Probes, Inc. (Eugene, OR). Recombinant Gαi2 protein was from Calbiochem (Alexandria, Australia). All other chemicals and materials were of the highest grade commercially available. Western Blot Analysis of Gαi2 and Actin-SDS-PAGE was performed on 12% polyacrylamide resolving gels with the Laemmli discontinuous buffer system (21), and the resolved proteins were electrotransferred to nitrocellulose membranes by the method of Towbin et al. (22). Membranes were blocked with 1 M glycine containing 5% (w/v) nonfat milk powder, 5% (v/v) fetal calf serum, and 1% (w/v) ovalbumin for 1 h at room temperature and then washed three times (5 min each) at room temperature with 0.1% (v/v) Tween 20, 0.1% (w/v) nonfat milk powder, and 0.1% (w/v) ovalbumin dissolved in 137 mM NaCl, 2.7 mM KCl, 8 mM Na2HPO4, and 1.4 mM KH2PO4 (pH 7.2).
Membranes were incubated overnight at 4°C with either anti-Gαi antibody (1:200 dilution in the above wash buffer) or anti-actin antibody (1:100 dilution) or, in some cases, both antibodies together, followed by incubation with secondary antibody (goat anti-rabbit IgG conjugated to alkaline phosphatase, 1:1000 dilution) for 2 h at room temperature, and finally developed for 5 min in 100 mM Tris-HCl (pH 9.5), 100 mM NaCl, and 5 mM MgCl2 containing 0.33 mg/ml nitro blue tetrazolium and 0.16 mg/ml bromochloroindolyl phosphate. Quantitation of the bands was performed on a Bio-Rad model GS-700 imaging densitometer driven by the Molecular Analyst software package (Bio-Rad). SDS-PAGE in the presence of 6 M urea was conducted as described by Komatsu et al. (23). Subcellular Fractionation and Marker Enzyme Assays-Rat livers were homogenized in a medium containing 250 mM sucrose, 5 mM HEPES/KOH (pH 7.4), and 1 mM EGTA (homogenization medium), supplemented with 1 mM dithiothreitol, 0.2 mM phenylmethanesulfonyl fluoride, 10 µg/ml leupeptin, and 10 µg/ml pepstatin A (protease inhibitor mixture), and subcellular fractions were prepared by differential centrifugation (24), with the 100,000 × g supernatant being designated the "cytosolic fraction." A purified plasma membrane fraction and a nuclei-contaminated plasma membrane fraction were prepared by Percoll gradient centrifugation (25). Protein concentrations were determined by the Bradford method (26) with bovine serum albumin as a standard. The activities of the marker enzymes 5′-nucleotidase (plasma membrane) and glucose-6-phosphatase (ER) were determined as described by Aronson and Touster (27). Treatment of a Liver Cytosolic Fraction with Phosphoprotein Phosphatases-The liver cytosolic fraction (100 µl) was diluted with an equivalent volume of homogenization medium supplemented with 1% (w/v) Triton X-100, 1 mM dithiothreitol, and the protease inhibitor mixture. Either 5 µl (5 units; 1 unit of the enzyme is defined as the amount that hydrolyzes 1 nmol of phosphate from the phosphorylated proteins per min at 30°C, pH 7.0) of phosphoprotein phosphatase 1 or 5 µl (5 units) of phosphoprotein phosphatase 2A or 5 µl of vehicle (control) was added to 25 µl of the above diluted cytosolic extract. The mixture was incubated at 37°C for 1 h, mixed with 30 µl of Laemmli sample buffer, boiled, and subjected to SDS-PAGE and Western blotting analysis. Triton X-100 Extraction of the Plasma Membrane Fraction to Yield a Triton X-100-insoluble Pellet and a Triton X-100-soluble Supernatant and Preparation of a Repolymerized F-actin Fraction from the Plasma Membrane Triton X-100-insoluble Pellet-Plasma membrane pellets were resuspended in lysis buffer, which consisted of 50 mM HEPES (pH 7.4), 1% (w/v) Triton X-100, 150 mM NaCl, 1 mM EGTA, 1 mM Na3VO4, 100 mM NaF, 10 mM Na4P2O7, and 10% (w/v) glycerol, supplemented with the protease inhibitor mixture, and incubated on ice for 1 h. A Triton X-100-insoluble pellet and a Triton X-100-soluble supernatant were obtained by centrifugation at 14,000 × g for 10 min. F-actin present in the plasma membrane Triton X-100-insoluble pellet was subjected to two cycles of depolymerization and repolymerization as described by Ueda et al. (15), and the final fraction was called the "repolymerized F-actin fraction." All fractions were quantitatively mixed with Laemmli sample buffer for Western blot analysis.
Immunoprecipitation-The plasma membrane Triton X-100-soluble supernatant prepared as described above was incubated on ice for 2 h with either an anti-Gαi or an anti-actin antibody or with normal rabbit serum (as control). Samples were mixed with swollen protein A-Sepharose (5 mg, dry weight), and the incubation continued for a further 1 h. Immune complexes bound to protein A-Sepharose were collected by centrifugation (12,000 × g, 1 min). The pellets were washed three times in 0.2 M NaCl, 50 mM Tris-HCl (pH 7.4), resuspended in Laemmli sample buffer, boiled for 5 min, and centrifuged (12,000 × g, 1 min), and the supernatant was retained for SDS-PAGE and Western blotting analysis. Treatment of Rats with Pertussis Toxin and Isolation and Culture of Hepatocytes-Pertussis toxin (25 µg in 50 mM Tris, pH 7.5, 10 mM glycine, 0.5 M NaCl, 50% (v/v) glycerol per 100 g of body weight) or vehicle was administered to Hooded Wistar rats by intraperitoneal injection (28). After 24 h, hepatocytes were isolated by collagenase perfusion (29) and grown in primary culture on type I collagen-coated coverslips (30). The Localization of the F-actin Cytoskeleton, Gαi2, and the Endoplasmic Reticulum-The locations of F-actin and the ER were determined using Texas Red-X phalloidin and DiOC6(3), respectively, and confocal microscopy as described previously (31). Negative controls for ER and F-actin staining were carried out systematically by omitting DiOC6(3) and Texas Red-X phalloidin, respectively. Determination of the location of Gαi2 by immunofluorescence was performed as described previously (10). Controls were performed by omitting either the primary antibody or the secondary antibody or both and by incubating the primary antibody with excess blocking peptide before use. For double labeling of F-actin and Gαi2 in the same cell, F-actin staining was first performed as described above. The cells were then washed with phosphate-buffered saline containing 0.05% (v/v) Tween 20 and 1% (w/v) bovine serum albumin (Tween solution) and incubated overnight at 4°C with anti-Gαi antibody (5 µg/ml in Tween solution). Thereafter, cells were washed six times with the Tween solution, incubated with secondary antibody (Alexa™ 488-conjugated goat anti-rabbit IgG, 1:100 dilution in Tween solution), and washed twice with Tween solution and four times with phosphate-buffered saline before the coverslips were mounted on slides in 50% glycerol in phosphate-buffered saline. Confocal microscopy was performed using a Bio-Rad MRC-1000 laser-scanning confocal microscope system in combination with a Nikon Diaphot 300 inverted microscope and a ×40 NA 1.15 water immersion objective lens. The excitation and emission wavelengths were set at 568/10 and 605/35 nm, respectively, for Texas Red-X, and at 488/10 and 522/32 nm, respectively, for DiOC6(3) and Alexa™ 488. To standardize the fluorescence intensity measurements among experiments, the time of image capturing, the image intensity gain, the image enhancement, and the image black level were optimally adjusted at the outset and kept constant for each of Texas Red-X, DiOC6(3), and Alexa™ 488. In most cases, only images of the optical sections near the middle of the z axis were collected. Quantitative examination of the captured images was performed using CoMOS (Bio-Rad) image analysis software.
To quantitate F-actin distribution, for each experimental condition, 60 hepatocyte doublets were randomly selected from the images obtained from three separate cell preparations (20 doublets from each preparation), and the fluorescence (pixels) in the total doublet and in the pericanalicular area was measured. The fluorescence in the pericanalicular area was expressed as a percentage of the total doublet fluorescence. This percentage indicates the relative amount of F-actin around the bile canaliculus and hence the degree of reorganization of F-actin during primary culture (cf. Ref. 32). To avoid subjectivity in this measurement, it was verified that the elliptical area designated as the "pericanalicular area" occupied 9.95 ± 0.06% (mean ± S.E., n = 60) of the total area of control doublets and 9.92 ± 0.06% (mean ± S.E., n = 60) of the total area of pertussis toxin-treated doublets, respectively.
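The pericanalicular quantitation described above is, in essence, a ratio of summed fluorescence inside two regions of interest. The sketch below shows one way this could be computed, assuming 2-D image arrays and precomputed boolean masks (doublet_mask and pericanalicular_mask are illustrative names); the original analysis used the CoMOS package, not this code.

```python
# Sketch of the pericanalicular quantitation: fluorescence inside the
# elliptical pericanalicular region as a percentage of total doublet
# fluorescence. The image and masks are assumed inputs; names are illustrative.
import numpy as np

def pericanalicular_fraction(image: np.ndarray,
                             doublet_mask: np.ndarray,
                             pericanalicular_mask: np.ndarray) -> float:
    """Percentage of doublet fluorescence falling in the pericanalicular area."""
    return 100.0 * image[pericanalicular_mask].sum() / image[doublet_mask].sum()

# The ~10% area control reported in the text is the same ratio on the masks:
def area_fraction(doublet_mask: np.ndarray, pericanalicular_mask: np.ndarray) -> float:
    return 100.0 * pericanalicular_mask.sum() / doublet_mask.sum()
```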
Electron Microscopy-Pellets (3000 × g for 2 min) of the plasma membrane fraction (~1 mg) were fixed in 1 ml of 1% (w/v) glutaraldehyde in 25 mM HEPES buffer (pH 7.4) for 30 min on ice. After washing three times with 25 mM HEPES buffer, the samples were postfixed with 1% (w/v) OsO4 in the same HEPES buffer for 1 h on ice. Freshly isolated intact hepatocytes (pelleted by centrifugation at 80 × g for 30 s) were fixed for 2 h at room temperature in a solution containing 1% (w/v) OsO4 and 0.1 M Na2HPO4/NaH2PO4 (pH 7.4). Fixed samples were dehydrated by stepwise exposure to increasing concentrations of ethanol (50, 75, 85, 95, and 100% (v/v)) and embedded in Durcupan with propylene oxide as an intermediate transition medium. Ultrathin sections were cut on an ultramicrotome, stained with aqueous uranyl acetate and Reynold's lead citrate, and examined with a JEOL 1200 EX transmission electron microscope. Measurement of Ca2+ Inflow-Cytoplasmic free Ca2+ concentrations ([Ca2+]cyt) and initial rates of Ca2+ inflow (measured using a Ca2+ add-back protocol) in rat hepatocytes loaded with fura-2 by microinjection were determined using fluorescence microscopy (31). Nature and Distribution of Gαi2 in Rat Liver Subcellular Fractions-When rat liver homogenates were subjected to Western blot analysis, two forms of Gαi2, with apparent molecular masses of 41 and 43 kDa, were detected (Fig. 1A). The plasma membrane fraction contained predominantly the 41-kDa band, which co-migrated with recombinant Gαi2 (Fig. 1B, lanes 1 and 2), while the cytosolic fraction contained predominantly the 43-kDa band (Fig. 1B, lanes 3 and 4). Treatment of the cytosolic fraction with phosphoprotein phosphatase 1 converted the 43-kDa form of Gαi2 to a form that co-migrates with recombinant Gαi2 (Fig. 1C, lanes 1, 2, and 5). By contrast, treatment with phosphoprotein phosphatase 2A did not alter the mobility of the 43-kDa band (Fig. 1C, lanes 1, 3, and 6). These results indicate that (i) the 41-kDa form of Gαi2 (subsequently referred to as Gαi2) corresponds to the form of Gαi2 (nonphosphorylated) normally detected in most cell types and (ii) the species of Gαi2 with an apparent molecular mass of 43 kDa (subsequently referred to as phosphorylated Gαi2) is a phosphorylated form of Gαi2 (cf. Ref. 33). Gαi2 was found in the plasma membrane, the nuclei-contaminated plasma membrane, and the heavy and light microsomal fractions of the liver (Fig. 2A) but was barely detectable in the cytosolic fraction. The amount of Gαi2 associated with the microsomes was estimated to be 40% of total cellular Gαi2. Gαi2 (41 kDa) was the predominant form of Gαi2 found in the plasma membrane and the nuclei-contaminated plasma membrane fractions. Phosphorylated Gαi2 was principally found in the cytosolic fraction, but some was also associated with the heavy and light microsomes (Fig. 2A). In order to determine how tightly Gαi2 is associated with the microsomal membranes, the microsomes were treated with KCl, which has been shown to cause the dissociation of loosely bound proteins from liver microsomal membranes (34). Phosphorylated Gαi2, but not the non-phosphorylated form, could be removed from microsomes by treatment with KCl (Fig. 2B). These results indicate that Gαi2 is tightly associated with microsomal vesicles, whereas phosphorylated Gαi2 is only loosely associated. The distribution of the phosphorylated and nonphosphorylated forms of Gαi2 within hepatocytes was further analyzed by determining the degrees of enrichment of the liver subcellular fractions in the two forms of Gαi2, 5′-nucleotidase (a plasma membrane marker enzyme), and glucose-6-phosphatase (an ER marker enzyme) (Fig. 3). The degree of enrichment of the purified plasma membrane fraction with Gαi2 is similar to that for 5′-nucleotidase, indicating that, as shown previously (12), considerable Gαi2 is located at the plasma membrane of hepatocytes. A small amount of glucose-6-phosphatase activity was found to be associated with the purified plasma membrane fraction. This may reflect either contamination of the plasma membrane fraction with microsomes derived from the ER or the attachment of small regions of the ER to the plasma membrane (cf. Ref. 24). The degree of enrichment of the heavy and light microsomal fractions with Gαi2 is similar to that for glucose-6-phosphatase (Fig. 3). Consideration of the degrees of enrichment of these two fractions with 5′-nucleotidase, together with the observation that the purified plasma membrane fraction is equally enriched in 5′-nucleotidase and Gαi2, indicates that the presence of Gαi2 in the microsomal fractions is unlikely to be due to contamination of these fractions by plasma membrane vesicles. The total amounts of phosphorylated Gαi2 and Gαi2 in the cytosolic fraction were estimated to be 84 ± 5 and 13 ± 3% (means ± S.E., n = 3 rat livers), respectively, of the total amount present in the homogenate.
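The degree-of-enrichment calculation behind Fig. 3 reduces to dividing the amount of Gαi2 or marker enzyme per mg of protein in a fraction by the corresponding value in the total homogenate. A minimal sketch, with made-up example numbers:

```python
# Sketch of the Fig. 3 enrichment calculation: (amount per mg protein in a
# fraction) / (amount per mg protein in the homogenate). Units cancel, so
# densitometry units and enzyme units are handled identically.
def enrichment(amount_fraction: float, protein_fraction_mg: float,
               amount_homogenate: float, protein_homogenate_mg: float) -> float:
    """Fold-enrichment of a subcellular fraction relative to the homogenate."""
    return (amount_fraction / protein_fraction_mg) / (amount_homogenate / protein_homogenate_mg)

# Made-up example: 500 densitometry units in 2 mg of plasma membrane protein
# vs. 5000 units in 100 mg of homogenate gives a 5-fold enrichment.
print(enrichment(500, 2, 5000, 100))  # 5.0
```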
Evidence for the Association of Gαi2 and Actin in a Purified Rat Liver Plasma Membrane Fraction-It has previously been shown that a purified liver plasma membrane fraction (prepared in a manner similar to that described above) contains F-actin, which is attached to the plasma membrane (35). Experiments were undertaken to determine whether Gαi2 is associated with this plasma membrane-associated actin. First, the quality of the plasma membrane fraction was further assessed by electron microscopy (Fig. 4). This showed numerous extended sheets of membrane (large arrow), the presence of small vesicles adherent to some sheets (small arrows), and numerous other vesicles of varying size. The preparation was largely free of mitochondria and nuclei. The plasma membrane fraction was treated with 1% (w/v) Triton X-100 to solubilize membrane lipids and integral proteins and thereby to obtain, by centrifugation, a plasma membrane Triton X-100-insoluble pellet enriched in F-actin and other cytoskeletal components (15). Gαi2 and actin were detected by Western blotting in both the Triton X-100-insoluble pellet (predominantly F-actin) and the Triton X-100-soluble supernatant (predominantly G-actin) (Fig. 5). It was estimated by densitometric analysis that approximately 27 ± 3% (mean ± S.E., n = 4) of the total plasma membrane Gαi2 and approximately 45 ± 1% (mean ± S.E., n = 3) of the total plasma membrane actin were recovered in the plasma membrane Triton X-100-insoluble pellet. To further test that Gαi2 associates specifically with F-actin among the various cytoskeletal components of the plasma membrane, a repolymerized F-actin fraction was prepared from the plasma membrane Triton X-100-insoluble pellet by a two-step depolymerization-polymerization procedure (15). Analysis by SDS-PAGE and Western blotting with anti-Gαi and anti-actin antibodies demonstrated the presence of Gαi2 in the repolymerized F-actin fraction (Fig. 5, lane 4). Approximately 44 ± 0% of the Gαi2 and 47 ± 2% of the actin in the plasma membrane Triton X-100-insoluble pellet were recovered in the final repolymerized F-actin fraction. This corresponds to 12 ± 0 and 21 ± 1% (means ± S.E., n = 3) of the total plasma membrane Gαi2 and actin, respectively. The idea that Gαi2 and actin associate near the plasma membrane was also investigated using a co-immunoprecipitation approach. When an anti-Gαi antibody was used to precipitate Gαi2 from the Triton X-100-soluble supernatant of the purified plasma membrane fraction, the precipitate was found to contain actin, identified using an anti-actin antibody and Western blot analysis (Fig. 6A). When an anti-actin antibody was used to precipitate actin from the Triton X-100-soluble supernatant of the purified plasma membrane fraction, the precipitate was found to contain Gαi2, identified using an anti-Gαi antibody and Western blot analysis (Fig. 6B). When a similar co-immunoprecipitation experiment was performed with a liver cytosolic fraction (which is enriched in phosphorylated Gαi2), no co-immunoprecipitation of phosphorylated Gαi2 and actin was observed (data not shown).

FIG. 3. The relative distribution of 41-kDa Gαi2, 43-kDa Gαi2, and the plasma membrane (5′-nucleotidase) and endoplasmic reticulum (glucose-6-phosphatase) markers in subcellular fractions of rat liver. The homogenization of rat liver; preparation of subcellular fractions; and determination of protein concentration, relative amounts of 41-kDa Gαi2 and 43-kDa Gαi2 (by Western blot analysis and densitometry), and marker enzyme activity were conducted as described under "Experimental Procedures." The degree of enrichment of a given fraction by Gαi2 or marker enzyme was determined by dividing the amount of Gαi2 (densitometry units) or marker enzyme (enzyme units) per mg of protein in the given subcellular fraction by the amount of Gαi2 or marker enzyme per mg of protein in the total homogenate. The results are the means ± S.E. of three separate experiments involving separate rat liver homogenates.

FIG. 4. Electron micrograph of a purified liver plasma membrane fraction. The preparation of a plasma membrane fraction from rat liver, processing of the fraction for electron microscopy, and transmission electron microscopy were performed as described under "Experimental Procedures." Scale bar, 500 nm. The image shown is representative of 10 electron micrographs from two different membrane preparations.

Distribution of F-actin and Gαi2 in Hepatocytes in Primary Culture-The intracellular distribution of Gαi2 and F-actin and the interaction between these proteins were further investigated using hepatocytes attached to collagen-coated coverslips, with Texas Red-X phalloidin and immunofluorescence used to detect F-actin and Gαi2, respectively. In freshly isolated rat hepatocytes allowed to attach to coverslips for 0.5 h, F-actin was observed around the cortex, in both single hepatocytes and hepatocyte doublets (Fig. 7A). When cultured for a further 3.5 h, the amount of F-actin in single cells and in doublets decreased in most regions of the cortex. In single cells, areas of high F-actin remained in some small regions of the cortex. In doublets, a pronounced concentration of F-actin at the canalicular membranes was observed (Fig. 7C). This most likely corresponds to the re-establishment of F-actin polarity and cell polarity, as described previously (32, 36). Hepatocytes cultured for 4 h appeared to be more flattened and to have a larger diameter compared with cells cultured for 0.5 h (Fig. 7, compare C with A). Substantial amounts of Gαi2 (presumably both phosphorylated and nonphosphorylated forms) were found in the cytoplasmic space as well as at the plasma membrane of most hepatocytes examined, as shown previously (10, 12) (Fig. 7, E and G). In order to investigate the possible co-localization of Gαi2 and F-actin, hepatocytes were double stained with Texas Red-X phalloidin and anti-Gαi antibody (Fig. 8, A-C). The results indicate that there are regions of the cortex where the fluorescence signals representing Gαi2 and F-actin overlap (indicated by the orange-yellow regions in Fig. 8C). Effects of the Ablation of Gαi2 Function by Pretreatment with Pertussis Toxin on the Intracellular Distribution of F-actin, Gαi2, and the Endoplasmic Reticulum and the Activation of Ca2+ Inflow-In order to further elucidate the role of Gαi2 in regulation of the arrangement of the actin cytoskeleton and to study the roles of Gαi2 and F-actin in the activation of SOCs, treatment of rats with pertussis toxin was used to ablate Gαi2 function. The effectiveness of pertussis toxin treatment was assessed by determining the degree of ADP-ribosylation of Gαi2, using SDS-PAGE in the presence of 6 M urea to identify ADP-ribosylated Gαi2 (23). Pertussis toxin treatment caused ADP-ribosylation of Gαi2, as shown by the appearance of a new band in the urea/SDS-PAGE gel with a slower mobility than that of Gαi2 (Fig. 9). Treatment with pertussis toxin did not result in any change in the mobility of the phosphorylated (43-kDa) Gαi2 band (results not shown). The slower band (ADP-ribosylated Gαi2) was observed in the plasma membrane fraction (Fig. 9A, lower panel, lane 2), the plasma membrane Triton X-100-insoluble pellet (lane 4), the plasma membrane Triton X-100-soluble supernatant (lane 6), and the heavy and light microsomal fractions (Fig. 9B). Quantitation of the bands using densitometry showed that pertussis toxin treatment resulted in ADP-ribosylation of 60, 80, and 50% of Gαi2 in the total plasma membrane fraction, the plasma membrane Triton X-100-insoluble pellet, and the plasma membrane Triton X-100-soluble supernatant, respectively, and approximately 70% of Gαi2 associated with the heavy plus the light microsomes. Pertussis toxin pretreatment caused no detectable changes in the total amount of actin in the plasma membrane fraction (Fig. 9A, upper panel, compare lane 2 with lane 1).
Further, since the Triton X-100-insoluble pellet contains predominantly F-actin and the Triton X-100-soluble supernatant contains mainly G-actin (6, 15), the results also indicated that pertussis toxin treatment did not change the relative distribution of the two forms of actin in the plasma membrane fraction. Cells from rats treated with pertussis toxin (pertussis toxin-treated cells) that had been cultured for 0.5 h exhibited no substantial differences in the intracellular distribution of F-actin compared with cells from vehicle-treated rats (control cells) cultured for this time (Fig. 7, compare B and A). However, treatment with pertussis toxin prevented the redistribution of F-actin from the cortex to the bile canaliculus and other parts of the cell observed in control cells cultured for 4 h (Fig. 7, compare D and C). To quantitatively compare the differences in the distribution of F-actin in 4-h cultured doublets from control and pertussis toxin-treated rats, the pericanalicular fluorescence due to the F-actin-Texas Red-X phalloidin complex was expressed as a percentage of the total doublet fluorescence. This value was 18.87 ± 0.70% (mean ± S.E., n = 60) in control doublets compared with 11.27 ± 0.26% (mean ± S.E., n = 60) in pertussis toxin-treated doublets (p < 0.001, heteroscedastic t test). Pertussis toxin treatment also inhibited the spreading of cells observed at 4 h (Fig. 7, compare D and C). Thus, the total doublet area was 1153 ± 49 µm² (mean ± S.E., n = 60) in control doublets compared with 936 ± 25 µm² (mean ± S.E., n = 60) in pertussis toxin-treated doublets (p < 0.001, heteroscedastic t test).
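The "heteroscedastic t test" quoted above is Welch's unequal-variance t test. Since the per-doublet values are not tabulated in the text, the sketch below draws illustrative samples matching the reported means and standard errors (s.d. = S.E.·√n, n = 60) simply to show how the comparison would be run.

```python
# Welch's unequal-variance t test (the "heteroscedastic t test" above), with
# samples simulated to match the reported means and S.E.; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 60
control = rng.normal(18.87, 0.70 * np.sqrt(n), n)  # % pericanalicular F-actin, control
ptx = rng.normal(11.27, 0.26 * np.sqrt(n), n)      # % pericanalicular F-actin, PTX-treated

t, p = stats.ttest_ind(control, ptx, equal_var=False)  # Welch's test
print(f"t = {t:.1f}, p = {p:.2g}")  # consistent with the reported p < 0.001
```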
Pertussis toxin-treated hepatocytes cultured for both 0.5 and 4 h exhibited noticeable differences in the distribution of Gαi2 (Fig. 7, compare F and E; compare H and G). In contrast to control cells, where considerable Gαi2 was present in the cytoplasmic space as well as at the plasma membrane, in pertussis toxin-treated cells Gαi2 was principally located at the plasma membrane and in the cortical region (Fig. 7, compare F and H with E and G). Pertussis toxin-treated hepatocytes exhibited more intense staining of the ER, monitored using DiOC6(3), than that observed in control cells (Fig. 10, compare B and D with A and C). Moreover, the DiOC6(3) signal was more evenly distributed in pertussis toxin-treated cells. These differences were observed in cells cultured for both 0.5 and 4 h. Examination of the cells by electron microscopy revealed that pertussis toxin-treated hepatocytes had largely lost the regular parallel arrangement of sheets of rough ER that were observed in control hepatocytes (Fig. 11, compare B and A). These differences can be seen more clearly at higher magnification (Fig. 11, compare D and C). Moreover, in pertussis toxin-treated cells the smooth ER appeared less dense than that in control hepatocytes (Fig. 11, compare B and A). As shown previously, treatment with pertussis toxin inhibited vasopressin-stimulated Ca2+ inflow (Fig. 12). There was no detectable effect of pertussis toxin treatment on vasopressin-induced release of Ca2+ from intracellular stores (results not shown). Role of Gi2 in Regulating the Organization of F-actin and the Endoplasmic Reticulum-In keeping with the observations of others (37), a 43-kDa phosphorylated form of Gαi2 as well as the nonphosphorylated 41-kDa form were detected in hepatocytes. The present study has focused on Gαi2 (the 41-kDa form), which is bound to the plasma membrane and ER (microsomes), rather than on the phosphorylated 43-kDa Gαi2, for the following reasons: (i) the phosphorylated Gαi2 is hardly detectable in the plasma membrane fraction and is only loosely associated with the microsomes, (ii) there was no evidence from co-immunoprecipitation studies of an association between actin and phosphorylated Gαi2, and (iii) there was no evidence that the phosphorylated Gαi2 was ADP-ribosylated by pertussis toxin treatment. The following observations indicate that Gαi2 (the 41-kDa form) associates with actin at the periphery of the hepatocyte: (i) the detection of both Gαi2 and F-actin in a Triton X-100-insoluble pellet prepared from a highly purified liver plasma membrane fraction; (ii) the detection of Gαi2 in repolymerized actin obtained after F-actin in the plasma membrane Triton X-100-insoluble fraction was depolymerized and repolymerized; (iii) the co-precipitation of Gαi2 and actin from the plasma membrane Triton X-100-soluble fraction using either an anti-Gαi antibody or an anti-actin antibody; and (iv) the observed co-localization of some Gαi2 and F-actin at the cell periphery.

FIG. 6. Western blot analysis of anti-Gαi and anti-actin immunoprecipitates from a Triton X-100-soluble supernatant prepared from a purified liver plasma membrane fraction. A plasma membrane fraction was treated with Triton X-100 and centrifuged to obtain a Triton X-100-soluble supernatant. A, co-immunoprecipitation of actin by an anti-Gαi antibody. The Triton X-100-soluble supernatant was treated with anti-Gαi antibody (lane 1) or normal rabbit serum as a control (lane 2), as described under "Experimental Procedures." Immunoprecipitates were resolved by SDS-PAGE, Western blotted, and probed first with an anti-Gαi antibody and subsequently with an anti-actin antibody. B, co-immunoprecipitation of 41-kDa Gαi2 by an anti-actin antibody. The Triton X-100-soluble supernatant was treated with anti-actin antibody (lane 1) or normal rabbit serum as a control (lane 2) as described under "Experimental Procedures." Immunoprecipitates were resolved by SDS-PAGE, Western blotted, and probed first with an anti-Gαi antibody and subsequently with an anti-actin antibody. The upper band labeled IgG HC is immunoglobulin heavy chain. The results shown are those from one of two experiments, each of which gave similar results.

FIG. 7. The distribution of F-actin and Gαi2, monitored using fluorescence microscopy, in hepatocytes derived from control and pertussis toxin-treated rats. Hepatocytes derived from vehicle-treated rats (Control) and rats treated with pertussis toxin (PTX) were cultured for 0.5 or 4 h, and the locations of F-actin (using Texas Red-X phalloidin) or Gαi2 (using immunofluorescence) were determined as described under "Experimental Procedures." Panels I and J are images obtained when the anti-Gαi antibody was omitted from the procedure used to detect Gαi2. Images were obtained by confocal microscopy. The scale bars correspond to 20 µm. The images shown are representative of more than 300 cells examined from three separate control and pertussis toxin-treated cell preparations.

FIG. 8. The localization of F-actin and Gαi2 in hepatocytes. Freshly isolated hepatocytes from untreated rats were cultured for 0.5 h, fixed, stained first for F-actin with Texas Red-X phalloidin (A), incubated with primary and secondary antibodies for the detection of Gαi2 (B), and then examined by confocal microscopy, as described under "Experimental Procedures." C, the images in A and B are superimposed, revealing regions of double labeling, indicated by the orange-yellow color. Scale bars, 20 µm. The images shown are representative of more than 100 cells examined from two separate cell preparations.

The results of experiments that employed pertussis toxin to ablate the action of Gαi2 indicate that this trimeric G-protein is involved in regulating the organization of cortical F-actin in hepatocytes. Pertussis toxin specifically ADP-ribosylates and inactivates the α subunit of Gi1, Gi2, Gi3, Go, and transducin (38). Since neither transducin, Go, nor Gαi1 is expressed at detectable levels in hepatocytes (19, 20), Gαi2 and Gαi3 are the only two known targets for pertussis toxin in these cells. Moreover, there is evidence that the time course for ADP-ribosylation of Gαi3 by pertussis toxin treatment in vivo (72 h) is longer than that for Gαi2 (24-48 h) (23). Therefore, the in vivo pertussis toxin treatment employed in this study (24 h) is likely to result chiefly in inactivation of Gαi2. Moreover, urea/SDS-PAGE and Western blotting confirmed that the majority of the Gαi2 on the plasma membrane, in particular the Gαi2 associated with F-actin, was ADP-ribosylated and hence inactivated. It is clear from our results that this pertussis toxin treatment inhibited the redistribution of F-actin from the cortex to the bile canaliculus in hepatocyte doublets and the redistribution of F-actin to specific regions of the plasma membrane in single hepatocytes. Normally, cell polarity, which is lost during isolation of hepatocytes, can be restored within 3-4 h in monolayer culture (36). This re-establishment of cell polarity has been found to be closely associated with the redistribution of F-actin from the entire cortex to the canalicular pole (i.e. the polarization of F-actin) (32). The present results indicate that Gi2 may be part of the machinery that governs the maintenance of a polarized distribution of F-actin in hepatocytes. The observation that pertussis toxin pretreatment prevented the spreading of hepatocytes in primary culture provides further evidence that Gi2 regulates F-actin organization, since it has been shown that hepatocyte spreading in culture requires F-actin organization (39). Studies with several other types of cells have also shown that Gi2 interacts with F-actin (13-15) and is likely to play a role in regulating the organization of F-actin (16-18). For example, the degree of actin polymerization in differentiating U937 cells was found to correlate well with an increase in the amount of Gαi2 at the plasma membrane (16). In human airway smooth muscle cells, it has been shown that Gαi2 is required for carbachol-induced stress fiber formation (18). In experiments employing pertussis toxin, evidence has also been obtained that dysfunction of Gαi causes a 40-50% decrease in the cortical F-actin content in chromaffin cells (40) and diminishes fMet-Leu-Phe-induced actin polymerization in neutrophils (41). Furthermore, evidence for a link between the activity of Gαi, the basal concentration of intracellular cyclic AMP, and the assembly of stress fibers in primary human granulosa-lutein cells has recently been reported (42).
These observations, together with our present results with hepatocytes, suggest that trimeric G-proteins such as Gi2 are involved in regulating the organization of the actin cytoskeleton in a variety of cell types. Pertussis toxin treatment also caused fragmentation and redistribution of the ER, detected using DiOC6(3) and fluorescence microscopy and by electron microscopy. Furthermore, 40% of the total cellular Gαi2 was found to be associated with microsomes, and approximately 70% of microsome-associated Gαi2 was ADP-ribosylated by pertussis toxin treatment. These results indicate that Gi2 is likely to be directly or indirectly involved in regulating the structure and intracellular distribution of the ER in hepatocytes. Moreover, considering the evidence of Hajnóczky et al. (43) that the luminal communication between intracellular Ca2+ stores is cooperatively modulated by GTP and the cytoskeleton, an intriguing possibility is that Gi2 is involved in maintaining the luminal continuity of the ER in hepatocytes, either via the actin cytoskeleton or by interaction with other proteins. Pertussis toxin treatment caused a noticeable redistribution of Gαi2 immunofluorescence from the cytoplasmic space to the cell periphery. This observation may reflect the redistribution of some Gαi2 from the cytoplasmic space to the cell periphery. However, others have shown, using Western blotting, that compared with native Gαi2, ADP-ribosylated Gαi2 has a higher affinity for the anti-Gαi antibody employed in the present studies (44). Therefore, some of the substantial increase in Gαi2 immunofluorescence at the cortex of pertussis toxin-treated hepatocytes may be due to an enhanced affinity of the anti-Gαi antibody for ADP-ribosylated Gαi2 (compared with native Gαi2). Role of Actin and Gi2 in Activation of Ca2+ Inflow-Pertussis toxin treatment caused a substantial inhibition of vasopressin-induced Ca2+ inflow with little effect on vasopressin-induced release of Ca2+ from intracellular stores (present and previous (30) results). Previous studies have shown that pertussis toxin treatment completely inhibits thapsigargin-induced Ca2+ inflow without a substantial effect on thapsigargin-induced release of Ca2+ from the ER (45) and have shown that the effects of pertussis toxin can be mimicked by microinjection of an anti-Gαi2 antibody or a peptide corresponding to the carboxyl region of Gαi2, which inhibits Gi2 function (10). These results provided substantial evidence to indicate that Gi2 (rather than Gi3, which is also present in rat hepatocytes and can be ADP-ribosylated by pertussis toxin (20, 38)) is necessary for the activation of SOCs in rat hepatocytes (10). Moreover, the previous experiments also indicate that the ablation of Gαi2 action by pertussis toxin does not substantially affect the formation of InsP3 catalyzed by phospholipase Cβ, the interaction of InsP3 with InsP3 receptors, the ability of InsP3 receptors to release Ca2+ from most regions of the ER, or the interaction of thapsigargin with the ER (Ca2+ + Mg2+)-ATPase and the inhibition of this Ca2+ pump. (The possibility that ablation of Gαi2 affects the release of Ca2+ from a small region of the ER near the plasma membrane that is central to the activation of SOCs but was not detected as a reduction in vasopressin-induced release of Ca2+ from intracellular stores cannot be excluded.)
The present results show that two of the functions of Gi2 in hepatocytes are to regulate F-actin assembly at the cortex and the arrangement of the ER. It is possible that one or both of these functions is essential for the activation of SOCs. Thus, as suggested by others, the activation of SOCs may require maintenance of a region of the ER near the plasma membrane (e.g. "docking" of regions of the ER with the plasma membrane and/or the fusion of vesicles containing SOC proteins with the plasma membrane (6, 46)). In this respect, it is interesting to note that the effects of Gαi2 ablation (pertussis toxin treatment) in stabilizing F-actin at the hepatocyte cortex and inhibiting SOC activation are similar to results recently reported by Patterson et al. (6). These authors showed that, in a smooth muscle cell line, the stabilization of F-actin by different procedures (treatment with jasplakinolide or calyculin A, which induced the formation of a dense ring of F-actin around the cell periphery) also inhibited SOC activation (6). A requirement for Gi2 in the activation of SOCs has not been reported in studies of most other mammalian cells (47). This suggests that the requirement for Gi2 in SOC activation in hepatocytes (10) reflects one or more aspects of the specific structure and function of these cells, such as maintenance (via Gi2 regulation of the actin cytoskeleton or interaction of Gi2 with another ER-associated protein) of cell polarity and/or a specific distribution of the ER throughout the cell that is critical for the activation of SOCs. This may be due to a requirement for Gi2 in the regulation of F-actin organization that is more accentuated in hepatocytes than in other cell types. Another possibility is that, in the hepatocyte, the InsP3 receptors principally involved in inducing a decrease in Ca2+ in the lumen of the ER are located some distance from the SOCs, so that normal intraluminal communication through the ER is required for SOC activation (cf. Ref. 48).
9,538.2
2000-07-21T00:00:00.000
[ "Biology", "Medicine" ]
Multifunctional Magnetoelectric Sensing and Bending Actuator Response of Polymer-Based Hybrid Materials with Magnetic Ionic Liquids With the evolution of the digital society, the demand for miniaturized multifunctional devices has been increasing, particularly for sensors and actuators. These technological translators allow successful interaction between the physical and digital worlds. In particular, the development of smart materials with magnetoelectric (ME) properties, capable of wirelessly generating electrical signals in response to external magnetic fields, represents a suitable approach for the development of magnetic field sensors and actuators due to their ME coupling, flexibility, robustness and easy fabrication, compatible with additive manufacturing technologies. This work demonstrates the suitability of magnetoelectric (ME) responsive materials based on the magnetic ionic liquid (MIL) 1-butyl-3-methylimidazolium tetrachloroferrate ([Bmim][FeCl4]) and the polymer poly(vinylidene fluoride-co-trifluoroethylene) (P(VDF-TrFE)) for magnetic sensing and actuation device development. The developed sensor operates under AC magnetic fields and has a frequency-dependent sensitivity. The materials show voltage responses in the mV range, suitable for the development of magnetic field sensors with a highest sensitivity (s) of 76 mV·Oe−1. The high ME response (maximum ME voltage coefficient of 15 V·cm−1·Oe−1) and magnetic bending actuation (2.1 mm) capability are explained by the magnetoionic (MI) interaction and the morphology of the composites. Introduction In an era in which concepts such as the Internet of Things (IoT) and Industry 4.0 are key enabling technologies, miniaturized portable and multifunctional devices are becoming increasingly demanded. In this context, smart systems including sensors and actuators are essential components of the evolution of technology. However, geometrical and manufacturing complexity and the cost of commonly used materials and systems can be a drawback to their implementation [1,2]. Smart materials represent a suitable approach to develop a new generation of sensors and actuators to improve integration, cost efficiency and/or performance with respect to commonly used materials. In particular, polymer-based composites can be tailored with the ability to respond to external stimuli, including pH, temperature, stress and magnetic or electrical variations, among others [3,4]. This response is reproducible and of suitable magnitude and time-response to be used for sensors and/or actuators [5]. Among the various possible smart materials, electroactive polymers (EAPs) have been gaining particular attention due to their light weight, mechanical flexibility, simple processing, compatibility with additive manufacturing technologies and tunability [6]. Thus, EAP-based smart materials have been applied in areas including sensors and actuators [4,7-9], biomedicine [10,11] and energy storage [12,13], among others [14,15]. In this scope, much attention has been paid to piezoelectric poly(vinylidene fluoride) (PVDF) and its co-polymers such as poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)). PVDF is a semi-crystalline polymer with a high dielectric constant, ionic conductivity, polarity and the largest electroactive response, including piezo-, pyro- and ferroelectricity, among polymers [16].
Due to their simple processability into different forms and shapes, PVDF and its copolymers have already been implemented in many different areas such as sensors, actuators and biomedicine [14]. The co-polymer P(VDF-TrFE) has the advantage of crystallizing in the ferroelectric β-phase when processed from solution or from the melt for specific TrFE contents [4]. Further, the combination of EAPs with ionic liquids (ILs) provides the opportunity to develop hybrid materials with improved and/or new functionalities such as electromechanical, mechanical, magnetic and electric properties considering specific target applications [15,17]. ILs are salts composed of anions and cations with a melting temperature typically below 100 °C [18]. Due to the many possible combinations of ions, ILs are known for their versatility and tunable properties; negligible vapor pressure; ionic conductivity; and high electrochemical, thermal, mechanical and chemical stability [19,20]. The presence of a paramagnetic element, typically a transition metal in the cation or anion, leads to the development of magnetic ionic liquids (MILs) exhibiting a permanent magnetic response when subjected to an external magnetic field [21]. The introduction of MILs into a polymeric matrix, such as solvent-casted P(VDF-TrFE), promotes the development of a new type of magnetoelectric (ME) material with magnetoionic (MI) coupling [9,22], where an electrical variation is induced by subjecting the materials to an external magnetic field [23]. Such an ME effect is related to the ionic movement of the cations and anions in the polymer matrix instead of magnetically induced dipolar variations [5]. Hybrid IL/polymer materials have been developed for implementation in different areas such as sensors and actuators, biomedicine and energy storage, but few studies concerning the combination of MILs with piezoelectric polymers with both MI and ME effects have been reported [17,24,25]. Additionally, and to the best of our knowledge, few studies concerning the use of piezoelectric MIL/polymer-based composites as magnetic sensors and actuators have been reported, in which the influence of the IL concentration or the cationic chain size of imidazolium-based ILs has been explored, but never the influence of the solvent evaporation temperature [5,9]. [Bmim][FeCl4] was selected based on its magnetic response, at a content of 40 wt.% [5] because of the higher value of magnetization, and P(VDF-TrFE) was selected based on its semicrystalline nature, high ionic conductivity and porous microstructure-forming capability. Hybrid films were obtained after spreading the solution on a clean glass substrate followed by solvent evaporation in an oven (P-Selecta) at room temperature (~30 °C), 90 °C and 210 °C. The films' thicknesses were 52.8 ± 5.0 µm, 158.2 ± 4.1 µm and 208.8 ± 22.2 µm for solvent evaporation temperatures of 210 °C, 90 °C and room temperature, respectively (Figure 1). Samples were prepared at different solvent evaporation temperatures, as this process strongly influences a sample's morphology and functional response [15]. Morphological and Functional Characterization The morphology of the [Bmim][FeCl4]/P(VDF-TrFE) films was analyzed using a scanning electron microscope (SEM, NanoSEM-FEI Nova 200, Hillsboro, OR, USA) with an accelerating voltage of 10 kV. The samples were previously coated with a thin gold layer using sputter coating (Polaron, model SC502, Quorum, Laughton, UK). The ME effect was evaluated with Helmholtz coils powered with an AC current (Agilent 33220A signal generator, Keysight Technology, Santa Clara, CA, USA) to reach AC magnetic fields ranging from 0 to 2 Oe. The AC fields were applied along the thickness direction of the samples, and a Rigol DS1074Z (Rigol, Suzhou, China) oscilloscope was used to record the induced output voltage. Prior to the analysis, the samples were coated with 5 mm diameter gold electrodes, deposited on both sides of the samples via magnetron sputtering (Polaron SC502, Quorum, Laughton, UK). The transversal ME coefficient (α) was determined from the induced voltage using Equation (1), where H_AC is the amplitude of the AC magnetic field, ΔV is the output voltage and d is the composite film thickness: α = ΔV/(d × H_AC) (1)
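Equation (1) is a direct division, and the sketch below simply evaluates it numerically. The operating point used (0.15 V output at 0.5 Oe across a 208.8 µm film) is an assumption chosen only to land near the coefficients reported in the Results, not a measured value from the paper.

```python
# Equation (1) evaluated numerically: alpha = dV / (d * H_AC), in V cm^-1 Oe^-1.
# The operating point below is illustrative, not a measured data point.
def me_coefficient(delta_v_volts: float, thickness_cm: float, h_ac_oe: float) -> float:
    """Transversal ME coefficient alpha in V cm^-1 Oe^-1."""
    return delta_v_volts / (thickness_cm * h_ac_oe)

print(me_coefficient(0.15, 208.8e-4, 0.5))  # ~14.4 V cm^-1 Oe^-1
```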
The actuator bending response of the materials was obtained with a high-definition Logitech HD 1080p webcam (Logitech, Lausanne, Switzerland) connected to a PC, with 200 µm accuracy. The film actuator was clamped with two needles and submitted to the magnetic stimulation of a moving magnet (BX0C8-N52, K&J Magnetics, Pipersville, PA, USA) with a periodic movement from a maximum distance to the composite sample of d_max = 2.2 mm to a minimum distance of d_min = 0.1 mm, at f = 0.1 Hz. To evaluate the actuation, the displacement of the films was analyzed every 2 s. The quantification of bending (ε) was carried out by measuring the displacement (δ) of the tip of the sample along the x axis and relating it with the sample free length (L) and thickness (d), as represented in Equation (2): ε = 2dδ/(L² + δ²) (2)
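Assuming Equation (2) is the standard cantilever bending-strain expression reconstructed above (the equation body is missing from the extracted text), the bending and actuation capability quoted in the Results can be checked numerically. The free length L = 20 mm used below is an assumption, not stated in the text, chosen because it reproduces the reported ~0.2% bending.

```python
# Equation (2) as reconstructed above, plus the actuation capability c.
# delta = 2.1 mm and H_DC = 3 kOe are the reported values; L = 20 mm is assumed.
def bending_strain_percent(delta_mm: float, length_mm: float, thickness_mm: float) -> float:
    """epsilon = 2 * d * delta / (L**2 + delta**2), expressed in percent."""
    return 100.0 * 2.0 * thickness_mm * delta_mm / (length_mm**2 + delta_mm**2)

delta_mm, h_dc_koe = 2.1, 3.0
print(f"epsilon ~ {bending_strain_percent(delta_mm, 20.0, 0.2088):.2f} %")  # ~0.22 %
print(f"c = {delta_mm / h_dc_koe:.1f} mm/kOe")                              # 0.7 mm/kOe
```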
Results and Discussion The morphology variations in the MIL/polymer materials with solvent evaporation temperature were evaluated via SEM (Figure 2). Figure 2a shows the characteristic porous structure of electroactive P(VDF-TrFE) with an average pore size of 10.3 ± 2.2 µm [27]. Increasing the solvent evaporation temperature in pristine P(VDF-TrFE) leads to a more compact structure with the absence of pores due to the higher mobility of the polymer chains during solvent evaporation [28]. On the other hand, independently of the presence of the MIL and the solvent evaporation temperature used during the processing method, a porous structure is observed after the IL incorporation. This fact is an indication that the inclusion of the MIL in the polymer matrix induces porosity in the films, based on the strong interaction of the IL with the DMF solvent, the phase separation of the polymer and solvent phases and the solvent evaporation, with the free spaces left by the solvent occupied by the IL being dragged to the pores due to their interaction with the solvent [4]. Additionally, Figure 2 shows that the solvent evaporation temperature influences the pore size of the polymer matrix. As observed, the pore size decreases with increasing solvent evaporation temperature (Figure 2b-d), from a mean diameter of 35 ± 6 µm for room-temperature-prepared materials (Figure 2d) to 2.2 ± 0.8 µm for solvent evaporation at 210 °C (Figure 2b). Thus, the pore size of the P(VDF-TrFE)/[Bmim][FeCl4] samples increases with decreasing solvent evaporation temperature, with the temperature influencing the solvent evaporation rate. Higher solvent evaporation temperatures lead to quick solvent removal, resulting in a higher number of spherulites with smaller porous radii during polymer crystallization. Further, increasing temperature leads to the polymer chains acquiring enough mobility to occupy the free spaces left by the solvent [28]. Contrarily, lower solvent evaporation rates lead to a phase separation process [27], and the low mobility of the polymer chains at low temperature does not allow them to occupy the free space left by the solvent, leading to a final microstructure with spherulites with small radii and higher porosity. The effect of the solvent evaporation temperature on the ME sensing response of the [Bmim][FeCl4]/P(VDF-TrFE) composites with 40 wt.% of MIL is shown in Figure 3. Previous studies have shown the effect of different MIL contents in samples prepared at 210 °C [4,9]. Figure 3a shows a linear increase in the ME output voltage with increasing H_AC, with fitting R² > 0.998, characteristic of composites containing MILs composed with paramagnetic ions [29]. This effect is based on the interaction of the [FeCl4]− anions with the applied H_AC, where the anions move in the direction of the applied magnetic field. With a continuous variation in the direction of the magnetic field, the movement of the ions generates the AC voltage signal in the electrodes of the material. Thus, the materials show voltage responses in the mV range, suitable for the development of magnetic field sensors, with a highest sensitivity (s) of 76 mV·Oe−1.
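The sensitivity s is simply the slope of the linear V(H_AC) response in Figure 3a. A minimal sketch of that least-squares fit follows, using synthetic stand-in data since the measured points are not tabulated in the text.

```python
# Least-squares estimate of the sensitivity s (slope of V vs. H_AC) with
# synthetic stand-in data for the (untabulated) Figure 3a measurements.
import numpy as np

h_ac = np.linspace(0.2, 2.0, 10)                                   # Oe
v_out = 76.0 * h_ac + np.random.default_rng(1).normal(0, 1.0, 10)  # mV, synthetic

slope, intercept = np.polyfit(h_ac, v_out, 1)
r2 = np.corrcoef(h_ac, v_out)[0, 1] ** 2
print(f"s ~ {slope:.0f} mV/Oe, R^2 = {r2:.4f}")  # ~76 mV/Oe, R^2 > 0.99
```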
The frequency dependence in Figure 3b is explained by the dynamics and mobility of the ions within the pores of the polymer matrix. At lower frequencies, the ions have enough time to move within the pores, this movement being maximized at ≈10 MHz. With increasing frequency, the ions' mobility is reduced and complete displacement does not occur, leading to ion relaxation processes and causing the ions to lag behind the fast excitation dynamics [5,30]. The increase in the ME response with decreasing solvent evaporation temperature is shown in Figure 3c. As a consequence, the ME voltage coefficient α increases from 1.5 V·cm−1·Oe−1 to 14.1 V·cm−1·Oe−1 when the pore size increases from 2.2 µm to 35 µm (Figure 3d), which is explained by the reduced motion capability of the MILs in the smaller pores and the larger degree of confinement [31]. When this ME response is compared to other P(VDF-TrFE)-based systems without magnetic ionic liquids, it is about 3 orders of magnitude higher than the value reported for P(VDF-TrFE)/CoFe2O4 nanocomposites (35 mV·cm−1·Oe−1) and of the same order of magnitude as the values reported for P(VDF-TrFE)/Vitrovac bi-layer laminates (66 V·cm−1·Oe−1) [32,33]. Further, the magnetomechanical bending actuation response was evaluated in the hybrid material with the largest pore size and highest ME response, [Bmim][FeCl4]/P(VDF-TrFE) with 40 wt.% of MIL (Figure 4). Figure 4a,b show the displacement of the actuator tip with respect to the initial position and the bending response of the materials, respectively. The displacement, and consequently the bending response of the sample, increases with an increasing applied DC magnetic field. The maximum displacement (2.1 mm) is observed when the moving magnet is at the minimum distance to the sample (0.1 mm); in this situation, the magnet applies an HDC = 3 kOe to the [Bmim][FeCl4]/P(VDF-TrFE) sample. The periodic movement of the magnet also induces a periodic displacement of the samples, leading to a bending of 0.2% and an actuation capability (c) of 0.7 mm·kOe−1. Thus, the highest bending response is achieved for an applied DC magnetic field of 3 kOe, and the bending response can be tuned for a specific application by adjusting the applied DC magnetic field.
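The ME voltage coefficient and the actuation capability quoted above can be reproduced from their usual definitions, α = V_out/(t·H_AC) and c = δ_max/H_DC; these definitions are standard in the magnetoelectric literature but are not spelled out in this text, so they are assumptions here, as is the 50 µm film thickness used in the example.

```python
# Minimal sketch, assuming the usual definitions from the ME-composite literature (not given in
# this text): alpha = V_out / (t * H_AC) with t the film thickness, and c = max displacement / H_DC.
def me_voltage_coefficient(v_out_v: float, thickness_cm: float, h_ac_oe: float) -> float:
    """ME voltage coefficient alpha in V/(cm*Oe)."""
    return v_out_v / (thickness_cm * h_ac_oe)

def actuation_capability(displacement_mm: float, h_dc_koe: float) -> float:
    """Actuation capability c in mm/kOe."""
    return displacement_mm / h_dc_koe

# Illustrative numbers only: a 50 µm thick film (placeholder) producing 70 mV at H_AC = 1 Oe,
# plus the 2.1 mm displacement at H_DC = 3 kOe reported in the text.
alpha = me_voltage_coefficient(v_out_v=0.070, thickness_cm=0.005, h_ac_oe=1.0)
c = actuation_capability(displacement_mm=2.1, h_dc_koe=3.0)
print(f"alpha ≈ {alpha:.1f} V/(cm·Oe), c ≈ {c:.1f} mm/kOe")
```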
The response of the samples to the applied magnetic field is generated by the number and diffusion of the cations and anions near the electrodes, as shown in the schematic representation of Figure 4b, with the bending response thus being governed by the ion mobility and ionic charge [15]. Conclusions Hybrid films based on the magnetic ionic liquid [Bmim][FeCl4] and P(VDF-TrFE) with 40 wt.% filler content were prepared using the solvent casting technique at different solvent evaporation temperatures in order to tune the samples' morphologies. The morphologies of the P(VDF-TrFE)/[Bmim][FeCl4] composites depend on the solvent evaporation temperature, with the pore size increasing as the solvent evaporation temperature decreases. The developed materials exhibit a dual functional response: a magnetoelectric response of 14.1 V·cm−1·Oe−1 and a bending actuator response with a displacement of 2.1 mm and a bending of 0.2%, both maximized for the samples with the largest average pore size of 35 µm. Such results demonstrate the suitability of MILs for implementation in different polymeric matrices as innovative magnetic sensors and magnetically driven soft actuators, able to be prepared using additive manufacturing technologies.
5,026
2023-07-27T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
MULTIPLE SYSTEM OF INNOVATION-INVESTMENT DECISIONS ADOPTION WITH SYNERGETIC APPROACH USAGE The purpose of the article was to comprehensively study and systematize knowledge about the essence, sources of origin and evaluation of the synergistic effect in integration processes of the national economy in order to build a conceptual scheme for obtaining it in the implementation of integration interaction. The conducted research has shown that a multi-level system of making innovative investment decisions using a synergistic approach is necessary to identify and build up a positive synergistic effect from the combination and interaction of assets and sources of financing, the evaluation of the end results of such interaction, cooperation of labor, integration of industries, and production integration. As an example, the financing of the agrarian sector of the Ukrainian economy by the banking system was considered. The practical significance of the research is that the scientific developments will enable the formation of an effectively functioning agro-industrial complex in Ukraine with optimal financing based on the use of a multi-level system of making innovative investment decisions using a synergistic approach. Further studies lie in the field of the system-forming factors and patterns of behavior of economic systems in terms of restoring the synergy potential. Introduction The main feature of modern economic development trends is their innovative orientation. As a result of scientific and technological progress, the world has received a strong impetus that stimulates the replacement of technology and the uptake of scientific developments in material production. As an example, consider agriculture as a strategically important sector of the country's economy. Among the main factors of economic growth of agricultural production is banking system financing. The necessity and special role of credit, without which the producer cannot exist, are determined by the specifics of the reproduction process in agriculture. Most agricultural enterprises (potential borrowers) are not attractive to creditors because of their low creditworthiness. Such low creditworthiness is caused by an unsatisfactory financial condition, a lack of liquidity, and the high risks of lending to enterprises of the industry. Given the lack of funds needed for agro-industrial complex development and the insignificant participation of universal banks in lending to agricultural commodity producers, the priority for the development of the agrarian sector should be the formation of an effective lending mechanism that can meet the high demand in the agricultural credit market and overcome the tendency of low credit supply from commercial banks and the lack of sources of self-financing for agricultural enterprises. Changes in the economic situation exacerbate old problems and create new ones, which must be addressed through a timely search for new credit mechanisms and the improvement of the traditional credit mechanisms of agriculture.
The main constraints on the credit market development in the agricultural sector today are: long-term decline in bank lending activity, orientation of bank lending activity in non-productive industries, high cost of loans, limitation of long-term credit for 3-5 years, low level of creditworthiness of agricultural holdings due to the high level of operation and moral obsolescence of fixed assets of agricultural enterprises, the limited infrastructure of the credit market. So, Gai and Kapadia (2019) investigate the solvency and liquidity crises of the financial system. Kanbur (2019) writes that inequality is a matter of the moment. In many rich countries, including the United States, there is a clear upward trend in income inequality. A study by Deloof et al (2019) provides new data on the implications of developing advanced banking systems in the country to fund new businesses. Grishnova, Cherkasov and Brintseva (2019) explored one of the pressing issues of the national economy: how, by what principles, by what system, how workers should be compensated, and how income should be organized in the new economy, in the new job market. Amoro et al (2019) consider two types of entrepreneurship: entrepreneurship based on opportunities linked to innovation; need-based entrepreneurship is starting a business through pressure as a way to compensate for the absence of other sources of employment. Jarkinbayev and Kosherbayeva (2018) argue that in the face of global economic instability, there is a growing need for accurate forecasts of macroeconomic indicators to make informed decisions on the implementation of the socio-economic policies of the state. Wang & Lee (2011) argue that investors often need to evaluate investment strategies according to their own subjective preferences based on different criteria. Serrano-Cinca & Gutiérrez-Nieto (2013) propose a decisionmaking model that assesses various aspects related to investment decisions. Winkler (1997) explores how cohabitants make economic decisions. Pauraa and Arhipovaa (2016) examined dairy production, which is of great importance for the European Union and is one of the important sectors of the agricultural economy. Lindholm-Dahlstrand,et al (2018) note that new knowledge is a major source of economic growth. Xiao, (2015) argue that in recent decades, the economic slowdown of major advanced economies in Europe has driven both academic and political concern for entrepreneurship. Collewaert and Fassin (2013) developed proposals for the impact of perceived unethical behavior on the conflict process between investors, venture capitalists and entrepreneurs. Frishammar et al (2019) identify new innovation audit activities and practices; Hallberg and Brattström (2019) have developed a model that outlines the impact of knowledge that shows the value of innovation, the price of products, the comparative value of innovation, and the corresponding moderators of these effects. Cobeña et al (2017) reveal the concepts of partner heterogeneity, the diversity of alliance portfolios, and the complementarity of network resources to gain a deeper understanding of alliance portfolio configuration and how it affects performance. Calvo-Mora et al (2016) study the impact of process methodology and partner management, as well as the relationship between this variable and key business outcomes. Brix-Asala et al (2016) highlight the opportunities and disadvantages of informal valorisation in return logistical activities, both socially and environmentally. 
Chulkova et al (2017) study the issues of increasing the investment attractiveness of agriculture, where economic security comes first and actually becomes a topic of food security. It is advisable to investigate the improvement of the credit system of the borrowers of the agricultural sector of the economy through the prism of neutralization or reduction of the effect of the mentioned number of factors. Today, the practice of conducting domestic agrarian business with the attraction of credit for seasonal needs, modernization and construction of new production facilities has been formed. Most representatives of the agricultural sector of the economy use short and long-term loans and in the current conditions, a significant decrease in lending activity of banks lacks them. Therefore, the domestic banking system plays an important role in the agriculture regarding the continuity of the reproduction process and the development of entrepreneurial activity; the study and justification of the need for a multi-level system of making innovative investment decisions using a synergistic approach has theoretical and practical importance. Purpose of article The research of economic systems' development problems on the basis of synergetic approach substantiated the relevance of the tasks, which are fulfilled in this article: 1. Generalization of the synergistic concept components and identification of the economic systems development features on the basis of the systematic approach and the general theory of systems. 2. Analysis of the current state of the agro-industrial complex financing by banking institutions, submission of proposals for the further effective financing of the agricultural enterprises in the Ukrainian economy based on synergistic effect. 3. Consideration of the innovation-investment process as a source of synergy formation, which creates preconditions for the potential of self-organization and reproduction of the real sector cycles of the economy. The purpose of the article is to comprehensively study and systematize knowledge about the essence, sources of origin and evaluation of synergistic effect in integration processes to build a conceptual scheme of its receipt during the integration interaction. Aleskerova et al. (2018) argue that the agricultural business needs significant support for the process of resource recovery. The objective necessity of applying a loan for the reproduction of fixed assets is conditioned by the specific nature of the seasonal nature of the agricultural production process. Vasylieva (2018) writes that Ukraine's agriculture occupies a dominant global position in growing cereals and oilseeds. National exporters belong to the TOP-10 markets for wheat, barley, corn, sunflower and soybeans. Loukianova et al. (2017) write that synergies can be obtained from different sources. They identify the main types of synergy -operational and financial. Operating synergy involves improving the operating activities of companies. However, the synergy does not automatically arise after the M&A is concluded. Businesses need to make some efforts (and some costs) to gain synergy. Xu et al. (2012) note that in a knowledge economy, organizational learning is an effective way for an enterprise to acquire, assimilate, assimilate, and apply and produce knowledge. The paper of these scholars presents a multi-level view of the organization of training, which suggests that training in organizations occurs at the individual, team, organizational and inter-organizational levels. 
Rajchlova et al. (2018) investigate the possible synergistic effect of the essence of accounting problems. Having identified the synergistic effect and the positive synergistic effect, the researchers focused on monitoring the positive synergistic effect of achieving positive changes in financial performance, the so-called "positive financial synergy". Zhylinska et al. (2017) propose an authors' approach to structuring synergistic effects; identify the features of synergistic interaction and identify methodological tools for evaluating the activities of diversified companies as complex integrated open- ENTREPRENEURSHIP AND SUSTAINABILITY ISSUES ISSN 2345-0282 (online) http://jssidoi.org/jesi/ 2020 Volume 7 Number 4 (June) http://doi.org/10. 9770/jesi.2020.7.4(12) type structures, taking into account modern marketing management concepts. Halynska (2017) writes that given the trends of globalization and the growing competition between enterprises, the transformation processes that are currently taking place in the Ukrainian economy lead to the need for cooperation and the creation of various forms of vertical integration structures. Collaborating with one another, firms are increasingly forming alliances that open up huge opportunities for businesses. The synergy effect is often replaced by the notion of an economic effect, which is a predictable result, and a systematically organized integration process, as opposed to a synergy effect, is predicted. Pokrason (2017) argues that the Ukrainian banking sector is faced with extremely difficult conditions, which are reflected in the national economy, a decrease in capital reserves, and a decline in the quality of lending throughout the market. The analysis of Holovach, Petrovskyi, Adamchuk (2018) showed that most European states support the policy of regulating the financial system as a holistic, indivisible phenomenon, gradually deviating from its perception as a collection of individual segments. The European Union has made a significant impact on this issue, which has introduced the integration of key functions in the regulation of the EU financial system and assigned these functions to a separate group of special bodies. Zavadska (2018) determines that a special task for the development of the modern economy of Ukraine is to increase the role of banks in shaping the necessary resources for the implementation of innovation policy. Dzhafarova et al. (2018) write that European integration for Ukraine, on the one hand, is a way of modernizing the economy, attracting foreign investment and technology, improving the competitiveness of domestic producers, access to world markets, including the financial services market. On the other hand, it is access to world markets. The formation of an open economy also means that the economy and financial system of the state must be internally stable, able to withstand the risks that accompany the processes of globalization and European integration. Ivanov et al. (2018) argue that it is impossible to ensure the active development of the economy, to strengthen the democratic foundations of Ukraine and to raise the standard of living of the population without the effective functioning of the credit and financial mechanism, which is a component of the banking system. Rogach et al. (2019) write that the current system of financial support for agriculture in Ukraine is on a vector of formation and adaptation to the conditions of the European Union. 
The EU's financial support policy for agriculture in the European Union ensures high results in agricultural production, economic and social processes and the promotion of agriculture. Rostetska, Naumkina (2019) develop the theory and practice of cooperation of Central European countries in the context of modern European integration processes, which is important for the development and implementation of foreign and domestic policy strategies in European countries and Ukraine. Aleskerova et al. (2018) considered features of securing a loan for reproduction of fixed assets in agriculture are to take into account the sectoral specificity and structure of fixed assets and determine the types of loans that can be attracted by agricultural enterprises for the formation of resources. The conducted research shows that lending relations for reproduction of fixed assets are at the initial stage of their development. Pokrason (2017) defined the credit rating criteria for Ukrainian banks: 1) sovereign risk; 2) capital position and asset quality; 3) financing and liquidity; 4) exchange rate. Holovach (2018) have determined that government regulation of the financial system of many European countries is based on the consolidation of coordination and supervisory functions. One or more clearly-defined bodies carry out national regulation of financial relations in such European countries as Germany, Poland, Sweden, Spain, etc. Zavadska (2018) has identified effective areas of customer interaction with banks, developing fundamentally new banking tools for investing in innovative businesses, which will help to enhance the role of banks in the innovative development of Ukraine. Rogach et al. (2019) point out that the defining feature of European financial support for agriculture is to regard it as one of the factors in the development of the European Union's financial system. In Ukraine, support for the agricultural sector formally and marginally affects the development of the agricultural sector. ENTREPRENEURSHIP AND SUSTAINABILITY ISSUES ISSN 2345-0282 (online) http://jssidoi.org/jesi/ 2020 Volume 7 Number 4 (June) http://doi.org/10. 9770/jesi.2020.7.4(12) Therefore, the expediency of spreading topical forms of bank credit for agrarian structures is conditioned by destabilizing determinants in the domestic economy against the backdrop of world financial events, since it is the agro-industrial complex that addresses the issue of food security. The agro-industrial sector of the economy, considering the seasonality factor and climatic conditions, is considered especially risky from the standpoint of conservative lenders, and especially the financing of its innovation process. At the same time, the existence of agribusiness in the gap with innovations and advanced transformations in science and technology is impossible and requires close integration into the industry. Such convergence certainly requires proper access to financial resources, which is not always self-financing and requires external borrowing. Outlined problems of financing the innovation process in agribusiness, riskiness in particular, low investment attractiveness, specific features of the industry require the study of key trends and prerequisites for the implementation of bank lending in the system of financial support of the innovative agribusiness process in order to expand sources of access to financial resources. 
Methods During the study, the following methods were used: the dialectical method (Wang & Lee (2011) (2019)) to determine the role of the banking system in financing the development of agriculture; analysis and synthesis (Pauraa and Arhipovaa (2016)) to evaluate the potential of the banking system to finance the development of agriculture; the economic and statistical method (Lindholm-Dahlstrand, Andersson & Carlsson (2018)) to study the impact of macroeconomic indicators of the banking system on agricultural value added; and the formal-logical, systematic approach (Collewaert and Fassin (2013)) to improve conceptual provisions and develop a conceptual model of agricultural development management using bank financing. The methods of scientific generalization, data averaging and retrospective analysis of the financing sources of agribusiness innovations in Ukraine were also used in the article. The theoretical and methodological basis of the research is the scientific works of representatives of different schools and areas of economic theory and the modern scientific developments of domestic and foreign scientists devoted to the issues of synergy, the synergetic effect, and their manifestation and measurement in integration processes. The methodological basis of the study was the systematic and integrated approaches to the study of the synergistic effect in integration processes (see Figure 1). Together with the interdisciplinary nature of the synergistic approach, its transdisciplinarity is important. It reveals itself in such synergistic features as operating "through" disciplinary boundaries in the study of the subject and going "beyond" specific disciplines. The transdisciplinary features of the synergistic approach are manifested in the ability to transfer cognitive schemas from one subject area to another, with the emergence of shared spaces of existence. The transdisciplinary nature of synergetics creates an anthropic space for the dialogical communication of subjects. From this point of view, synergetics reveals the researcher's territory of subjectivity.
The key provisions of the synergistic methodology are as follows: -complex systems cannot impose development paths, but need to promote their own development trends (Anning-Dorson, Nyamekye & Odoom 2017); -chaos can be a constructive source, from which a new organization of the system may be born (Holovach, Petrovskyi and Adamchuk 2018); -at certain moments of instability, small perturbations can have macro-consequences and develop into macrostructures, in particular, the actions of one particular personality can affect macro-social processes (Vasylieva 2018.;Paptsov, Nechaev and Mikhailushkin 2019); -for complex systems there are several alternative ways of development, but at certain stages of evolution a certain pre-determination of the deployment of processes manifests itself, and the present state of the system is determined not only by its past but also by its future (Loukianova, Nikulin and Vedernikov (2017)); -a complexly organized system involves not only simpler structures and is not an ordinary sum of parts, but generates structures of all ages in a single world (Halynska (2017)); -taking into account the regularities and conditions of the rapid, avalanche-like processes and processes of nonlinear self-development of systems, it is possible to initiate these processes through human administrative actions (Laila and Widihadnanto 2017). Results According to the analysis of the Global Innovation Index (GII), which takes into account about 80 criteria and allows annual monitoring of innovative activity of countries, the rating of innovative activity of Ukraine in the world is also gradually increasing (Fig. 2). For a more detailed explanation of the topic of the study, it was considered the dynamics of banking lending to the agrosphere. Financing of the innovation process of the agro-industrial complex of Ukraine is carried out exclusively at the expense of the own financial resources of the enterprises (Table 1). The state budget is (3.7% in 2012) and lending is (0.8% -in 2010, 3.1% -in 2012, 0.3% -in 2013). Such a structure of financial support hinders the innovative activity of the industry, since its own funds are directed mainly to the modernization of existing equipment, not to the creation of new ones. It was considered the potential of the banking system to finance the development of agriculture. The study of innovative economic processes and the identification of reserves for the effective use of financial instruments, the increase in production of goods, works and services occur after the implementation of the cycle of financial resources. They provide an opportunity to improve the work in the future, but do not affect the mistakes, miscalculations and all kinds of illegal actions that took place in the analyzed period. While exercising the control function is not so much about detecting deviations from a given state of an object, it is about preventing them from occurring. Assessing the feasibility of using a financial instrument at the stage of innovation in agriculture prevents those processes that are contrary to the requirements of regulatory documents or do not agree with the purpose of innovation. Accordingly, control is a means of preventively regulating innovation in agriculture, which causes positive or unwanted changes in management. However, these management functions, with the skillful use of their interpenetration, reveal a real picture of the managed system. 
It follows that, knowing the content of managing the bank financing of the national economy, it is possible to manage the development of innovative agriculture effectively. Fig. 3 presents a conceptual model for making innovative investment decisions in different sectors of the national economy using a synergistic approach. In Table 2, the financial sustainability indicators of the Ukrainian banking system are analyzed. In Fig. 4, the dynamics of agricultural value added (% of GDP) and employment in agriculture (% of total employment) are analyzed. After the calculations, the regression for agricultural value added (% of GDP) shows that, with a constant number of operating structural units of banks and constant employment in agriculture, the growth of the volume of credit support of the agro-industrial complex of Ukraine by the banking system by 1 million UAH entails an increase in agricultural value added by an average of 0.09%. An increase in the number of operating structural units of banks, with constant volumes of credit support of the agro-industrial complex of Ukraine and constant employment in agriculture, leads to an increase in agricultural value added by an average of 0.11%. With unchanged volumes of credit support of the agro-industrial complex of Ukraine and an unchanged number of operating structural units of banks, the growth of employment in agriculture leads to an increase in agricultural value added by an average of 0.06%. However, this does not by itself mean that agricultural value added is more influenced by the first factor than by the second and the third. Using the regression equation on a standardized scale, the following comparison can be made: with the growth of the first factor by one unit, and with the number of operating structural branches of banks and employment in agriculture held constant, agricultural value added increases by an average of 0.8%. Since β1 > β2 > β3 (0.8 > 0.3 > 0.5), it can be concluded that the first factor has a greater influence on agricultural value added than the second factor, which is not evident from the natural-scale regression equation alone. In Fig. 5, the sources and effects of synergy in a multilevel system of making innovative investment decisions using a synergistic approach are considered. Fig. 5. Sources and effects of synergy in a multilevel innovative investment decision making system using a synergistic approach. Source: own development. Direct proportional dependence allows the basic processes and their changes to be considered on the basis of systems of linear differential equations, whose solution relies on the apparatus of mathematical analysis and is obtained, depending on the initial conditions, in real and complex values. A multilevel system of making innovative investment decisions using a synergistic approach is used to identify the forms of dependencies that cause a given trend, since there can be many such decisions.
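As a schematic illustration of the comparison made above between natural-scale and standardized regression coefficients, the following sketch uses synthetic data (not the article's dataset), with three explanatory variables playing the roles of credit support, number of bank units and agricultural employment.

```python
# Minimal sketch with synthetic data (not the article's dataset): comparing natural-scale
# regression coefficients with standardized (beta) coefficients, which is the comparison the
# text describes for credit volume, number of bank units and agricultural employment.
import numpy as np

rng = np.random.default_rng(0)
n = 200
credit = rng.normal(50_000, 15_000, n)      # credit support, million UAH (illustrative)
bank_units = rng.normal(9_000, 1_500, n)    # operating structural units of banks (illustrative)
employment = rng.normal(15.0, 2.0, n)       # employment in agriculture, % (illustrative)
value_added = 0.00009 * credit + 0.00011 * bank_units + 0.06 * employment + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), credit, bank_units, employment])
coef, *_ = np.linalg.lstsq(X, value_added, rcond=None)        # intercept + natural-scale slopes

# Standardized (beta) coefficients: b_j * std(x_j) / std(y); these are comparable across factors.
betas = coef[1:] * np.array([credit.std(), bank_units.std(), employment.std()]) / value_added.std()
print("natural-scale coefficients:", np.round(coef[1:], 5))
print("standardized betas:       ", np.round(betas, 3))
```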
The existence of multi-directional links over time can lead to the aggregation into new forms (cooperation, integration, coalition) and the formation of links with more potential. From the point of view of the synergistic approach, economic systems are represented as a set of many subsystems, characterized by incompleteness of information and the following features: nonlinearity (loss of the property of additivity in the process of development); instability (loss of equilibrium states in the process of evolution); openness (exchange of resources from the outside); subordination (functioning and development are determined by processes in subsystems). The following synergistic effects are distinguished: the effect of introducing (accessing) new products and markets and the effect of further compatible strategic steps of the partners. In Fig. 6 the advantages and contradictions of making innovative investment decisions using a synergistic approach are considered. ENTREPRENEURSHIP AND SUSTAINABILITY ISSUES ISSN 2345-0282 (online) http://jssidoi.org/jesi/ 2020 Volume 7 Number 4 (June) http://doi.org/10. 9770/jesi.2020.7.4(12) 2757 Fig. 6. Advantages and contradictions of making innovative investment decisions using a synergistic approach Source: own design ENTREPRENEURSHIP AND SUSTAINABILITY ISSUES ISSN 2345-0282 (online) http://jssidoi.org/jesi/ 2020 Volume 7 Number 4 (June) http://doi.org/10. 9770/jesi.2020.7.4(12) Thus, the potential for the development of economic systems lies in the possibility of their self-organization, the realization of which occurs in cases of inertia of motion. The use of a multi-level system of making innovative investment decisions using a synergistic approach generates a synergistic potential, capable of finding many possible solutions for the trajectory of sustainable development (see Table 3). What are the different roles of the synergistic concept in transformational change? Analysis of the current state of financing of the agro-industrial complex by the banking institutions, submission of proposals for further effective financing of enterprises of the agricultural sector of the Ukrainian economy based on synergistic effect. Jarkinbayev and Kosherbayeva (2018) write that the accuracy and consistency of macroeconomic projections are of particular importance to developing countries, which are significantly influenced by external factors and macroeconomic shocks. According to the theory of synergetics, no open system can impose a mode of behavior or development, but one can choose and stimulate one of the conditions laid down in specific conditions, counting not so much on a managerial, but on a synergetic, self-managed process. Assessment of other factors is a topic for future research. Consideration of the innovationinvestment process as a source of synergy formation, which creates preconditions for the potential of self-organization and reproduction of cycles of the real sector of the economy. Amoro et al (2019) argue that if developing countries do not make the promotion of productive entrepreneurship a major concern in their political agendas, they will only diminish efforts without achieving higher results. Innovation and investment processes are based on the laws and patterns of self-organization and self-development of economic systems. They give an opportunity to take a new approach to the development of problems of agricultural development, considering them primarily from the point of view of openness, co-creation and orientation to development. 
How is the innovation and investment process changing in different technological sectors over a long period of time? Comprehensive research and systematization of knowledge about the essence, sources of origin and evaluation of synergistic effect in integration processes to build a conceptual scheme of its receipt in the implementation of integration interaction. Deloof, La Rocca and Vanacker (2019) have considered the idea that factors at the country or local level can influence the discretion of entrepreneurs. Researchers have illustrated that developing a local banking business can both facilitate and limit the ability of entrepreneurs to raise finance for their businesses. Synergetics theory focuses on disequilibrium, instability as a natural state of open nonlinear systems, on the multiplicity and uncertainty of the paths of their development, depending on the factors and conditions that affect it. Is the role of synergies different in different sectors and at different times? Discussion We are expanding the research of scholars such as Gai and Kapadia (2019) who find that liquidity crises affect solvency prospects; and the expected recovery ratios of the creditors, in turn, will affect the short-term prospect of a liquidity decision. We, for more detailed disclosure of the research topic, study the dynamics of development of banking lending to the agrosphere and financing the innovation process of the agro-industrial complex of Ukraine. This allows us to capture the interaction between the specific features of the country and the possibilities of financing agriculture. So, Kanbur (2019) notes that the analysis of unevenness in the country is important for establishing the basic facts of unevenness in a world where countries are increasingly linked by trade and investment and where, the global economy requires an assessment of global inequality rather than national inequality in isolation. We contribute to our existing research activities by analyzing the Global Innovation Index (GII), which takes into account about 80 criteria and allows us to track the innovative activity of countries and the rating of innovative activity of Ukraine in particular on a yearly basis. (2019) provide new theoretical and empirical insights into how heterogeneity in the local banking market affects funding. And in our study, we continue and expand on this topic by considering the potential of the banking system to finance the development of agribusiness. Deloof, La Rocca and Vanacker In their work, Grishnova, Cherkasov and Brintseva (2019) have obtained empirical results that confirm theoretical hypotheses about the dynamic changes in the social sphere and employment and society in general; and the world at large, the public administration of each individual country and every real or potential employee, in particular, must prepare thoroughly for this. And as we deepen the proposed topic, we study the impact on agriculture of value added (% of GDP) of the volume of credit support of the agro-industrial complex of Ukraine by the banking system, the number of operating structural units of banks, employment in agriculture. Amoro, Ciravegna, Mandakovic, and Stenholm (2019) argue that developing countries should rationally organize their functions, improve governance and eliminate barriers that impede productive business activity rather than focus only on reducing unemployment. 
Our research shows that with constant volumes of credit support of the agro-industrial complex of Ukraine and the number of operating structural units of banks, growth in employment in agriculture leads to an increase in agricultural value added by an average of 0.06%. Jarkinbayev and Kosherbayeva (2018) argue that monetary forecasting errors during the review have increased and a possible reason for this is the lack of coordination between the Ministry of National Economy, Ministry of Finance and the National Bank in the preparation of macroeconomic forecasts. And in our study, in turn, the conceptual model of making innovation-investment decisions in different branches of the national economy using a synergistic approach is proposed. The analysis and research of the tendencies characterizing the state of credit of the agro-industrial complex in Ukraine made it possible to conclude that bank loans are not able to fully meet the needs of agricultural enterprises for credit resources; regulation of credit provision for agricultural enterprises is ineffective and government support is inadequate; unsatisfactory volumes of credit inflows into the agricultural sector due to harsh conditions and high interest rates. The study also found that globalization and concentration of production are now taking place in all sectors of the economy. They qualitatively and quantitatively affect both the individual enterprise and the economy of the state as a whole. The solution to these problems is complicated by the fact that the functioning of the Ukrainian economy is characterized by a number of fundamental contradictions. Methodological and methodological limitations of the study are that when assessing economic cyclicality, its synergistic nature is taken into account, which has a significant impact on the features of cyclical dynamics in real ENTREPRENEURSHIP AND SUSTAINABILITY ISSUES ISSN 2345-0282 (online) http://jssidoi.org/jesi/ 2020 Volume 7 Number 4 (June) http://doi.org/10. 9770/jesi.2020.7.4(12) economic systems. The nonlinearity and complexity that characterize the economic system, as well as the presence of a large number of feedbacks, determine the synergistic nature of many economic phenomena and lead to numerous synergistic effects that change the qualitative side of the functioning of the national economy. Understanding the nature and features of the manifestation of synergistic effects allows you to organize the management of the economy at a new level, based on the ideas of discretion and stability of trajectories of economic development. The presence of synergistic effects in the economy requires new approaches to forecasting, planning, and regulation at different levels -from the economy of an individual firm to the economy of the whole country. Prospects for further research are to study non-linear processes of formation and development of institutional systems, to find ways to form and actualize their market potential. Conclusion The conducted research has shown that, a multi-level system of making innovative investment decisions using a synergistic approach is necessary to identify and build up a positive synergistic effect from the combination and interaction of assets and sources of financing, evaluation of the end results of such interaction, cooperation of labor, integration of industries, production integration. For example, the banking system financing of the agrarian sector of the economy in Ukraine was considered. 
The research showed that, with a constant number of operating structural branches of banks and constant employment in agriculture, the growth of the volume of credit support of the agro-industrial complex of Ukraine by the banking system by 1 million UAH entails an increase in agricultural value added by an average of 0.09%. An increase in the number of operating structural branches of banks, with constant volumes of credit support of the agro-industrial complex of Ukraine and constant employment in agriculture, leads to an increase in agricultural value added by an average of 0.11%. With unchanged volumes of credit support of the agro-industrial complex of Ukraine and an unchanged number of operating structural branches of banks, the growth of employment in agriculture leads to an increase in agricultural value added by an average of 0.06%. It is shown that, among the investigated factors, the volume of credit provision of the agro-industrial complex of Ukraine by the banking system has the largest influence on agricultural value added. The practical significance of the conducted research is that the scientific developments will make it possible to form in Ukraine an effectively functioning agro-industrial complex with optimal financing on the basis of a multi-level system of making innovative investment decisions using a synergistic approach. Further research will address the system-forming factors and regularities of behavior of economic systems under conditions of synergy potential restoration.
7,791
2020-06-01T00:00:00.000
[ "Economics", "Business" ]
A New Approach to Non-Singular Plane Cracks Theory in Gradient Elasticity A non-local solution is obtained here in the theory of cracks, which depends on the scale parameter in the non-local theory of elasticity. The gradient solution is constructed as a regular solution of the inhomogeneous Helmholtz equation, where the function on the right side of the Helmholtz equation is a singular classical solution. An assertion is proved that allows us to propose a new solution for displacements and stresses at the crack tip through the vector harmonic potential, which determines by the Papkovich–Neuber representation. One of the goals of this work is a definition of a new representation of the solution of the plane problem of the theory of elasticity through the complex-valued harmonic potentials included in the Papkovich–Neuber relations represented in a symmetric form, which is convenient for applications. It is shown here that this new representation of the solution for the mechanics of cracks can be written through one harmonic complex-valued potential. The explicit potential value is found by comparing the new solution with the classical representation of the singular solution at the crack tip constructed using the complex potentials of Kolosov–Muskhelishvili. A generalized solution of the singular problem of fracture mechanics is reduced to a non-singular stress concentration problem, which allows one to implement a new concept of non-singular fracture mechanics, where the scale parameter along with ultimate stresses determines the fracture criterion and is determined by experiments. Introduction The problem of singularities in the theory of elasticity and in fracture mechanics is widely discussed in the related scientific literature [1][2][3]. The singularity of solutions for stresses at the crack tip in the linear theory of elasticity excludes the use of traditional criteria for the strength of bodies with stress concentration. Moreover, the formally obtained singular solutions contradict not only the physical meaning, but also the postulates of the theory of elasticity. Such paradoxes still require an explanation [3]. Gradient elasticity allows one to describe size effects [4], and provides regularization of singular solutions of differential equations of elasticity theory [5,6]. In this regard, it would be quite natural to expect the development of gradient fracture mechanics. Nevertheless, so far only non-singular solutions have been constructed in gradient fracture mechanics for test problems corresponding to cracks of Mode III [7]. 2 of 14 For the general theory of gradient deformation, the variational model is described by the density of potential energy: 2E = C ijkm ε ij ε km + C ijklnm u i, jk u l,mn , where u i is the displacement vector, ε ij the strain tensor, C ijkm and C ijklmn are classical and gradient moduli of elasticity respectively, having different dimensions. The force model is determined by the Cauchy stress tensor σ ij and by the tensor of double stresses of the third rank µ ijk . Constitutive equations are obviously given by the following formulas: In the general case, the physical properties of an isotropic gradient theory are described using seven physical constants, two of which are Lamé constants, and five others determine the tensor of gradient moduli of the sixth rank. Here, λ, µ are the Lame coefficients, s is the scale parameter, ϑ = u k,k is the dilation, and δ ij is the Kronecker delta. 
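For readability, the quadratic energy density quoted above can be restated in standard index notation together with the constitutive relations it implies; the article's Equation (1) is not reproduced in this text, so the derivative forms below are the usual variational ones and should be read as an assumption made only to fix notation.

```latex
% Energy density of the general gradient model and the constitutive relations it implies
% (restated; the derivative forms are the standard variational choice, assumed here)
2E = C_{ijkm}\,\varepsilon_{ij}\,\varepsilon_{km} + C_{ijklmn}\,u_{i,jk}\,u_{l,mn},
\qquad
\sigma_{ij} = \frac{\partial E}{\partial \varepsilon_{ij}} = C_{ijkm}\,\varepsilon_{km},
\qquad
\mu_{ijk} = \frac{\partial E}{\partial u_{i,jk}} = C_{ijklmn}\,u_{l,mn}.
```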
The equilibrium equations in such a model are written through the tensor of "total" stresses (see [9]): (σ ij − µ ijk,k ) ,j = 0 or τ ij,j = 0, where τ ij = σ ij − µ ijk,k . Constitutive equations for such applied gradient models have the following form (see [9]): where ∆ is the Laplace operator. Note that static boundary conditions for the considered gradient model are specified through a linear combination of "classical" stresses τ ij and derivatives of double stresses µ ijk . As a result, the boundary conditions cannot be written only through the tensor of stresses τ ij , which makes it difficult to obtain solutions. This probably explains why gradient solutions are constructed only for harmonic problems of cracks of Mode III. Let us note that the problem of the regularization of the classical singular crack fields was still considered in [10][11][12] where the attempts were made to construct non-singular solutions. Furthermore, in [13] these studies were discussed because the obtained solutions did not satisfy the strain compatibility equations. Indeed, in the indicated works, it was possible to construct only approximate non-singular solutions, which is associated with incomplete consideration of the biharmonic components on the right side of the Helmholtz equations. In any case, in the indicated papers there are not the common analytical solutions to problems for cracks of Mode I. In the interesting work [14] the search of solution for the theory of cracks comes down to integral equations. The complexity of the obtained expressions and the method of constructing approximate solutions using the Chebyshev series did not allow the find of explicit analytical expressions that would show the regularity of the solution. The authors do not provide analytical solutions in an accurate study conducted in [15] using micropolar elasticity, as well as in the fundamental work [16] since, during the construction of an approximate solution, the authors are not able to ensure fully the continuation of the solution directly to the crack tip. In the present work, we are going to construct an exact non-singular solution for Mode I cracks using a new representation of the classical solution for generalized stresses through a complex-valued scalar potential. The modified variant of the generalized elasticity theory was proposed by the authors of [17,18], where the defining relations are written for the generalized non-local stress tensor and the generalized non-local strain tensor. Note that the idea of the non-local theory of elasticity was proposed in the fundamental works [19,20]. Connection of non-local theory with gradient theories was established in the work [21]. Theoretical aspects of non-local theories and their generalized variants and also their applications in the area of mechanics of nano-structures and nano devices discussed in the interesting works [22][23][24]. We use the applied variant of model when a generalized non-local tensor function is introduced using the averaging operation of a local function on a given finite fragment and then the local function is expanded in a Taylor series on local coordinates on the fragment under consideration with retention of a finite number of terms in the series. As a result, after integrating the resulting finite Taylor series with respect to local coordinates over the considered fragment, the non-local generalized functions are constructed. 
In the proposed procedure, generalized functions are determined not only through the values of the local function at some point of the fragment, but also in aggregate through its derivatives of the second, fourth, etc. orders, depending on the number of terms held in the Taylor series. Here we can see a difference from the traditional differential calculus where the function or its derivatives are determined by their values at the point. Generalized functions in particular are determined by the corresponding local functions through the Helmholtz operator. The construction of the solution in the model [17,18] is carried out in two stages. At the first stage, the traditional boundary value problem of the theory of elasticity is considered and non-local stresses and displacements are found. At the second stage, the obtained solutions are substituted on the right-hand side of the Helmholtz equation, which is solved with respect to local functions. Thus, to construct the local solutions it is first proposed to solve the classical elasticity problem for generalized stresses and displacements, and then find local stresses Ξ ij and displacements U i , by solving the corresponding Helmholtz equations: We can see that in using this model for the mechanics of cracks the generalized non-local solutions formally coincide with the classical solution of the static boundary conditions and the local stress field must be found as common solutions of the non-homogeneous Helmholtz equation (2). The solution for local stresses and displacements has the form of a boundary layer. We note that the procedure for constructing local solutions is similar to the construction of gradient solutions considered in [25]. On the other hand, in the discussed model [18] the constitutive equations are written on non-local generalized stresses and deformations σ ij = S ijmn ε mn , which is quite justified, physically. As is evident, the constitutive equations of the discussed model are different from the constitutive equation (1). For the mechanics of cracks, the generalized non-local solutions formally coincide with the classical solution of the static boundary conditions, and the local stress field must be found as a common solution of the non-homogeneous Helmholtz equations. We assume that the local stresses are important for assessing the strength and that local stresses are involved in the strength criteria. These stresses are found after the definition of generalized stresses ("classical") σ ij from the equation σ ij = (Ξ ij − s 2 ∆Ξ ij ). The purpose of this work is to show that, for the model under consideration, the local stress field in the vicinity of the crack tip is regular, i.e., it does not have a singularity. Non-singular solutions could be used as the basis for constructing and testing a new concept of stress concentration in crack mechanics proposed in [26]. We belive that for brittle cracks the non-singular cracks dependences allow us to determine the important role for the scale parameter as a characteristic of fracture for brittle cracks along with tensile strength. In fact, a simular algorithm was implemented in [26] for the case of finite cracks of normal separation in a plane strip. It was shown that the parameter s for a particular brittle material is a constant and can be considered as a critical parameter of fracture, giving with high accuracy a strength forecast for brittle materials. 
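The two-stage procedure just described, in which the classical ("generalized") field is placed on the right-hand side of a Helmholtz equation, can be illustrated with a one-dimensional toy problem; the sketch below is not the article's two-dimensional crack solution, and the grid, boundary conditions and parameter values are illustrative assumptions.

```python
# Minimal 1D sketch (not the paper's 2D crack problem): the classical stress ahead of a crack tip,
# sigma(x) = K / sqrt(2*pi*x), is placed on the right-hand side of the Helmholtz equation
# Xi - s^2 * Xi'' = sigma, mirroring the structure of Equation (2). A dense finite-difference solve
# is used, with the local field pinned to the classical one far from the tip.
import numpy as np

s, K = 0.1, 1.0                            # scale parameter and stress intensity factor (arbitrary units)
n = 1500
dx = 3.0 / n
x = -1.0 + (np.arange(n) + 0.5) * dx       # cell-centred grid: the tip at x = 0 is never a node

sigma = np.zeros(n)                        # traction-free crack face for x < 0
ahead = x > 0
sigma[ahead] = K / np.sqrt(2.0 * np.pi * x[ahead])   # classical square-root singularity

A = np.zeros((n, n))
i = np.arange(1, n - 1)
A[i, i] = 1.0 + 2.0 * s**2 / dx**2
A[i, i - 1] = -s**2 / dx**2
A[i, i + 1] = -s**2 / dx**2
A[0, 0] = A[-1, -1] = 1.0                  # far field: local stress equals the classical stress
xi = np.linalg.solve(A, sigma)

tip = np.argmax(ahead)                     # first node ahead of the tip
print(f"classical stress at the node nearest the tip: {sigma[tip]:.1f} (unbounded on refinement)")
print(f"regularized local stress at the same node:    {xi[tip]:.2f} (stays finite)")
```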
We propose a new representation of the solution of the plane problem of the theory of elasticity through the complex-valued harmonic potentials included in the Papkovich-Neuber representation in a symmetric form, convenient for applications. A condition is determined under which the solution of plane problems of elasticity can be written through one harmonic potential. This condition is also fulfilled for problems of crack mechanics. As a result, we found the form of solution through one harmonic potential for displacement, and stresses that allow us to construct the non-singular solutions of mechanics of cracks. New Representation of Solution through Harmonic Potential in Mechanics of Cracks Let us consider the Papkovich-Neuber representation as a convenient analytical tool for describing the stress-strain state of an elastic body [27]. This representation allows us to represent the displacements and the corresponding stresses, through two auxiliary harmonic potentials: f is the vector potential and another φ is scalar, We propose a new representation of the solution of the plane problem of the theory of elasticity through the complex-valued harmonic potentials included in the Papkovich-Neuber representation in a symmetric form, convenient for applications. We show that these representations obtained directly from the representation of the general Papkovich-Neuber solution can be reduced to the well-known representations of Kolosov-Muskhelishvili for the plane problem of the theory of elasticity through two complex potentials. However, then we prove that under certain conditions a solution to a plane problem can be represented through one complex-valued harmonic potential. Such a representation is new and is convenient in constructing generalized solutions in crack mechanics. Lemma 1. 1. Papkovich-Neuber's representation allows us to present a solution to the plane problem of the theory of elasticity through three complex potentials in the following form: 2. Papkovich-Neuber's representation leads to the following form of the common solution for complex potentials u , p, τ of the plane problem of the theory of elasticity through two analytic functions Φ(w) and F(w): Proof. We consider the plane problem of the theory of elasticity and write the displacements and stresses in complex form, introducing in (3) complex potentials u = u x + iu y , f = f x + i f y instead of the corresponding vector expressions u = u x , u y , f = f x , f y and complex variables w, w instead variables x, y. Also, we use differentiation by complex coordinates w = x + iy, w = x − iy: Let us define the first invariant of the stress tensor p = (σ x + σ y )/2 and the complex potential τ = (σ x − σ y )/2 + iτ xy . Then the components of the displacement vector u x , u y and the stress tensor σ x = σ 11 , σ y = σ 22 , τ xy = σ 12 can be expressed explicitly through the introduced complex potentials in the form of Equation (4). The Equation (3) determines the link between u, p, τ and potentials f = f x + i f y and φ. Using representation (3) and taking into account the equalities rf = It is easy to establish also the following symmetric expressions for the stresses: We take into account that the potential f is harmonic: ∆ f = 4∂ 2 f /∂w∂w = 0. Then Equation (7) leads to the following equalities for complex potentials p, τ: Furthermore, we take into account that for a plane statement the general complex solution of the Laplace equation f is expressed across two analytical functions f = F 1 (w) + F 2 (w). 
On the other hand, the scalar potential φ in (3) is real and can be written through one analytical function. The following sequence of equalities shows the possibility of writing Equations (6) and (8) through one potential, in which the antiderivative of the function F_2 appears. We took into account that the derivative of an analytic function with respect to the conjugate complex variable is equal to zero. The stress potentials are converted similarly. 2. There is a relation between the functions f and φ under which the Papkovich-Neuber relation can be written through one harmonic potential, assuming that φ = 0. 3. For problems of crack mechanics, the Papkovich-Neuber representation can always be written through one harmonic vector potential. Proof. 1. Comparing the right and left parts of Equations (5) and (9), it is easy to see that they are completely consistent with each other if we accept the identifications given in Equation (11). Substitution of the functions F(w) and Φ(w) from Equation (11) into Equation (5) for the complex displacements u = u_x + iu_y shows that the resulting equation exactly coincides with the expression for the displacements. The first part of the theorem is proved. 2. To prove the second part of the theorem, we consider again the representation (3) and show that it is somewhat overdetermined, since between the functions f and φ a relationship can be established which does not change the stress-strain state. Indeed, we can introduce a harmonic scalar function h satisfying Equality (12). As a result, if Equality (12) holds, then for any harmonic function φ it can be resolved with respect to the function h, and the Papkovich-Neuber representation (3) can be rewritten by redefining the vector potential f with the condition φ = 0. 3. Let us briefly consider the cases when relation (12) is not satisfied. We represent the function φ in the form r^α χ(ϕ, θ), where ϕ and θ are the angular coordinates, which are independent of the radial coordinate r. Equality (12) is not satisfied if α = 4(1 − ν). For negative values of the exponent, Equation (12) always has a solution, and without loss of generality the relation (3) can be written assuming the condition φ = 0. Therefore, for problems of crack mechanics, the Papkovich-Neuber representation can always be written in terms of a single harmonic vector potential. The theorem is proved. Thus, the general solution of problems of the plane theory of elasticity can be expressed not only using two analytical functions in the form of Equation (5), but also in equivalent form using one harmonic function f = F_1(w) + F_2(w̄), i.e., through the Papkovich-Neuber complex potential, Equations (13) and (14). Let us consider the canonical singular problem of the theory of elasticity, which defines the problem of fracture mechanics for the plane case and leads to the determination of the correct solution in stresses, tending to zero when w → ∞ as O(|w|^{−1/2}), w = x + iy, i = √−1. It is known that the correct solutions to this problem, for a normal separation crack and for transversal shear respectively, are given through two Kolosov-Muskhelishvili potentials. The solution for a crack opening displacement is written in the form of Equations (15)-(17) (see [1]). Here u_x, u_y, σ_x, σ_y, τ_xy are components of the displacement vector and the stress tensor, respectively, r, θ are polar coordinates in the vicinity of the crack tip, and K_I is the stress intensity factor.
Consider again the Papkovich-Neuber representation (see [27]), which allows us to present a general solution to the problem of the theory of elasticity; that is, both displacement vector u and their corresponding stresses σ(u) on a surface element with a normal vector n, through only one harmonic vector potential f (in accordance with Theorem 1): Here, the differential combination n∇f means the expression n j (∂ f j /∂x i ) , σ(u) = σ ij n j , where σ ij is the stress tensor. Let us prove that the solution of the classical singular problem (for the example of a crack open displacement (15)-(17)) can be represented also through one harmonic complex-valued potential. Theorem 2. An explicit expression exists for the harmonic potential for which the general representation of the solution of the plane Papkovich-Neuber problem (13), (14) is completely equivalent to the classical representation for the singular solution for a crack Mode I given by Equations (15)- (17). Proof. Let us find an explicit expression of the harmonic potential f for a crack Mode I. We first consider the classical solution (13) for the displacement components u x , u y and transform them, distinguishing the coordinate y = r sin θ = i(w − w)/2. We obtain: Using the written equalities, we can get a complex-valued value of displacements u = u x + iu y : In the resulting expression, we select the derivative with respect to the coordinate w: Assuming in (19) that w 1/2 = f , we rewrite the complex value of displacements u = u x + iu y in the form: In order to reduce Equation (20) to the form (13) we must show that the last term in Equation (20) can always be compensated by the corresponding choice of the harmonic function. Instead of the harmonic function f = w 1/2 in Equation (20), we propose to consider another harmonic function f I obtained as the sum of the function f = w 1/2 and the harmonic function Aw 1/2 with some unknown coefficient A, so far f I = w 1/2 + Aw 1/2 . It is easy to verify that one can find a constant A such that Expression (20), in which instead of the function f = w 1/2 stands f I , coincides completely in appearance with Formula (13) for complex displacements, found in accordance with the Papkovich-Neuber representation. Indeed, after some transformations we find A = −1/(5 − 8ν). Now the statement of the theorem is verified by direct substitution. Thus, for a crack open displacement in the classical formulation, the field of elastic displacements and stresses in the vicinity of the crack tip is described by the Papkovich-Neuber representation (13) and (14) in complex form with a potential f I having the following expression through complex variables w and w: We have thus proved that the mathematical description of the crack is based on harmonic functions with a half-integer value of the exponent for the complex variable w. The form obtained for representing the solution is based on one complex potential, which represents the solution in a form convenient for construction of the gradient solutions. The theorem is proved. The statement proved above is also valid for crack of mode II and crack of mode III. For example, for the classical singular solution mode II: the following harmonic function of complex variables w, w can be found: This harmonic function establishes a correspondence of the classical singular solution (22)-(24) with the solution given by the Papkovich-Neuber representation (13) and (14). 
Finally, we consider the longitudinal shear crack (Mode III), which in the spatial theory of elasticity is described by the relations (26) for displacements and stresses. It is easy to check that the stress-strain state for this case (26) is also completely described by the spatial version (3) of the Papkovich-Neuber representation with a non-zero plane harmonic potential f_z(x, y) and its compensating potential φ = z f_z(x, y), where f_z is given by Equation (27). Thus, the stress-strain state for Mode I, II, and III singular cracks can be described on the basis of the Papkovich-Neuber formulas with the aid of one complex potential with a fractional degree. These forms are convenient for constructing a generalized gradient regular solution. Remark 1. Note that the harmonic functions establishing a correspondence between singular classical solutions of crack mechanics and solutions constructed using the Papkovich-Neuber representation are not analytic because, as is easily verifiable, the Cauchy-Riemann relations are not satisfied for them. In what follows, our goal is to construct the local stresses and displacements associated with the classical stresses and displacements through the Helmholtz equation using generalized elasticity. Consequently, the field of local stresses will describe the non-singular crack solutions. Regular Gradient Solutions in Crack Mechanics Let us construct a regular solution for a Mode I crack using an applied version of the non-local theory of elasticity. Using the algorithm discussed earlier in the Introduction, we can determine the local displacement and stress fields as general solutions of the inhomogeneous Helmholtz Equation (2), on the right-hand sides of which the well-known classical solutions are taken. We rewrite relation (2), introducing on the right-hand sides of the Helmholtz equations the classical solutions written in terms of the complex displacements u(w, w̄) and the complex potentials p = p(w, w̄), τ = τ(w, w̄) (see Equation (4): p = (σ_x + σ_y)/2, τ = (σ_x − σ_y)/2 + iτ_xy). The generalized theory of elasticity allows finding the local stresses and displacements associated with the classical stresses and displacements through the Helmholtz equation. The fields of local stresses describe the regular crack solutions in the form of Equation (28). Here u, p, τ are the potentials of displacements and stresses expressed through the Papkovich-Neuber potential (21) by Formulas (13) and (14). The general solution of Equation (28) is constructed as the sum of the general solution of the homogeneous Helmholtz equation and a particular solution of the inhomogeneous Helmholtz equation. The following lemma holds, indicating the structure of a particular solution of the Helmholtz equation. Lemma 2. Assume that the function φ is the right-hand side of the Helmholtz equation written with respect to a function Φ, and that φ is biharmonic, i.e., Δ²φ = 0. Then a particular solution of the inhomogeneous Helmholtz equation has the form of Expression (30). Proof. The lemma is proved by directly substituting Expression (30) into Equation (29). The regularity of the generalized solution of Equation (29) is ensured by the structure of the homogeneous solution and the special choice of arbitrary constants in this solution.
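Lemma 2 can be checked symbolically. One natural candidate consistent with the lemma is Φ = φ + s²Δφ (an assumption on our part, since Expression (30) itself is not reproduced here); for a biharmonic right-hand side this candidate satisfies Φ − s²ΔΦ = φ, as the sketch below confirms for an arbitrary illustrative biharmonic polynomial.

```python
import sympy as sp

# Check: if phi is biharmonic, then Phi = phi + s^2 * Laplace(phi) solves
# the inhomogeneous Helmholtz equation  Phi - s^2 * Laplace(Phi) = phi.
x, y, s = sp.symbols('x y s', real=True)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

phi = x**4 - 3 * x**2 * y**2            # illustrative biharmonic choice: lap(lap(phi)) == 0
assert sp.simplify(lap(lap(phi))) == 0

Phi = phi + s**2 * lap(phi)             # candidate particular solution
residual = sp.simplify(Phi - s**2 * lap(Phi) - phi)
print(residual)                          # prints 0
```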
Since the functions u, p, τ in Equation (28) and their particular solutions (see Equation (30)) contain singularities with only a half-integer value of the exponent in the degree of the complex variable w, among the solutions of the homogeneous Helmholtz equation we are interested only in singular functions with the same value of the exponent in the degree of the complex variable w. It was shown in [29,30] that a system of such singular solutions can be constructed in an explicit analytical form using radial multipliers satisfying a special recurrence relation, Expressions (31) and (32). Thus, solutions of Equation (28) in the gradient theory of cracks can be constructed by explicitly taking into account Expressions (31) and (32). A complete representation of the generalized solution (classical solutions) for a Mode I crack, for the generalized displacements u_i and the generalized complex stress potentials p, τ, is given by Formulas (13), (14) and (21). As a result, taking into account the statement of Lemma 2 and Relations (31) and (32), we find the representations (33)-(35). Here A_n, V_n, S_n are unknown coefficients that are selected from the compensation condition for singular terms in the gradient potentials U, P, T. The specific structure of the solutions in the last terms of the above equations, which are general solutions of the homogeneous Helmholtz equations, is found by taking into account the order of the singularities in the corresponding particular solutions. So, in Expression (33), the first three terms determine a particular solution of the Helmholtz equation for the generalized displacement U. They include derivatives with respect to the argument w up to the second order, which give rise to singularities of the form w^{−3/2}. Therefore, in order to compensate for the singularities in (33), we should use A_0 = 0, A_1 ≠ 0, A_n = 0 for n ≥ 2. Furthermore, in Equality (34), the first term, which is a particular solution of the Helmholtz equation for the harmonic potential of stresses P, contains only the first derivative with respect to the variables w, w̄. Therefore, in the last term we should put V_0 ≠ 0, V_n = 0 for n ≥ 1. Finally, since in Equation (35) the particular solution includes derivatives with respect to the argument w up to the third order and contains singularities w^{−1/2}, w̄ w^{−3/2}, and w^{−5/2}, then in the term representing the general solution of the homogeneous equation we should put S_0 ≠ 0, S_1 = 0, S_2 ≠ 0, S_n = 0 for n ≥ 3. The constants A_1, V_0, S_0, S_2 in the solution obtained in this way are found explicitly, in such a way as to compensate for the singularities in the general solutions (33)-(35). We get Equality (36). Thus, Equalities (33)-(36) allow us to construct regular generalized solutions for the complex displacements U and the complex stress potentials P and T. The components of the generalized displacements U_x, U_y and the generalized stresses Ξ_x, Ξ_y, T_xy in the gradient theory of elasticity are calculated through the real and imaginary parts of the complex potentials (33)-(35) according to Formula (37). As a result, we obtain the regular generalized solutions (38)-(40) describing the non-singular stress-strain state of a normal separation crack in a gradient formulation, where ĥ_0(r) = e^{−r/s}, ĥ_1(r) = −((r/s + 1)/2) e^{−r/s}, and ĥ_2(r) = ((r/s)² + 3(r/s + 1))/4 · e^{−r/s}. When deriving (38)-(40), we take into account the relations w̄ w^{−1/2} = r² w^{−3/2} and w̄ w^{−3/2} = r² w^{−5/2} in Expressions (39) and (40), written down taking into account (21). We also take into account that the functions ĥ_0, ĥ_1 and ĥ_2 are real functions.
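The radial multipliers written out above can be evaluated directly; the normalisations used below follow the expressions as reconstructed here and should be checked against the original source. The point of the computation is simply that ĥ₀, ĥ₁ and ĥ₂ are finite at r = 0 and decay exponentially for r ≫ s, so the homogeneous terms modify the field only inside a boundary layer of width of order s.

```python
import numpy as np

# Radial multipliers as reconstructed above (normalisation is an assumption):
#   h0(r) = exp(-r/s)
#   h1(r) = -((r/s + 1)/2) * exp(-r/s)
#   h2(r) = ((r/s)**2 + 3*(r/s + 1))/4 * exp(-r/s)
def h0(r, s): return np.exp(-r / s)
def h1(r, s): return -((r / s + 1) / 2) * np.exp(-r / s)
def h2(r, s): return ((r / s) ** 2 + 3 * (r / s + 1)) / 4 * np.exp(-r / s)

s = 1.0
for r in (0.0, 0.5 * s, s, 3 * s, 10 * s):
    print(f"r/s = {r/s:4.1f}:  h0 = {h0(r,s):+.4f}  h1 = {h1(r,s):+.4f}  h2 = {h2(r,s):+.4f}")
# Finite values at r = 0 and exponential decay for r >> s: the homogeneous
# terms act only within a layer of width ~ s around the crack tip.
```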
The coefficients A_i, V_i, S_i for the radial multipliers ĥ_0, ĥ_1 and ĥ_2 are chosen so that the expressions (38)-(40) contain no singular components. We can verify this if we take into account the specific form of the functions Φ̂_n(w, w̄) and the asymptotic behavior of the exponential in the functions ĥ_0, ĥ_1 and ĥ_2 in Equations (38)-(40). Using the polar coordinates w = r(cos θ + i sin θ) in Expressions (38)-(40), we obtain an explicit, everywhere-regular generalized gradient solution for the normal separation crack, written for the displacement components (Equations (41) and (42), which contain regularising factors of the form [1 − (r/s + 1) e^{−r/s}] cos(3θ/2)) and for the stress components (Equations (43)-(45)). It follows that the real value Ξ_y^0(r, λ) is a positive, bounded, continuous function of r over the entire interval 0 ≤ r < ∞ that becomes zero when r = 0. The maximum of this function is attained in some neighborhood of a point r = r_0 > 0. Figure 1 shows the distribution of the normalized stresses Ξ_y^0(r) in the vicinity of the crack tip along the normalized distance from the crack tip (θ = 0). Obviously, the distribution of this function over the parameter r is a typical picture of stress concentration. Therefore, to evaluate the strength, it is proposed to use the following procedure. First, the dependence of the maxima of the relative stresses Ξ_y^0(r, λ) on the parameter λ is found. Then, using the known explicit Equation (45), the dependence of the maximum value Ξ_y^0,max(λ) on the parameter λ is constructed. This dependence is essentially the dependence of the stress concentration coefficient on the parameter λ. In the case when the ultimate stress level characterizing failure is determined by the ultimate stress σ_b for brittle cracks, the constructed dependence allows one to determine the relative scale parameter λ = c/s, which, along with the tensile strength, is characteristic of fracture for brittle cracks. Conclusions A new representation is proposed, for the first time, for the classical asymptotic solution for singular cracks through harmonic potentials (Equations (21), (25) and (27)), which allows the structure of particular solutions of the equation to be indicated explicitly. On the other hand, the use of the radial factor method made it possible to establish the explicit form of the corresponding general solutions of the homogeneous Helmholtz equation. Together, the representations obtained make it possible to indicate, for the first time, a clear procedure for determining the exact solutions for non-singular local stress fields and the corresponding displacement fields. The complex form of representation of the solutions leads not only to compact solutions, but also ensures their completeness, eliminating the possibility of losing some components of the solutions. We believe that the solutions obtained can be considered as a reference for similar problems in the gradient mechanics of cracks.
7,084
2019-10-26T00:00:00.000
[ "Mathematics" ]
Controllability of second order neutral impulsive fuzzy functional differential equations with Non-Local conditions In this paper, the controllability of fuzzy solutions for a second-order nonlocal impulsive neutral functional differential equation with both nonlocal and impulsive conditions in terms of fuzzy are considered. The sufficient condition of controllability is developed using the Banach fixed point theorem and a fuzzy number whose values are normal, convex, upper semi-continuous, and compactly supported fuzzy sets with the Hausdorff distance between α -cuts at its maximum. The α -cut approaches allow to translate a system of fuzzy differential equations into a system of ordinary differential equations to the endpoints of the states. An example of the application is given at the end to demonstrate the results. These kinds of systems come in use for designing landing systems for planes and spacecraft, as well as car suspension systems. Introduction Control theory is a fascinating part of application-oriented mathematics that deals with the fundamental ideas that support control framework analysis and design.The main objective of the control theory is to perform specific tasks by the system applying appropriate control.Controllability is well recognized in the context of control systems and comprises a central location.In controllability, one analyses the possibility of changing a system from a given state (initial state) to a certain required final state by using a set of permissible controls.One main presumption in the control system is that all its components are involved with complete precision.Moreover, control systems related to reasonable circumstances are characterized by fuzziness.Fuzzy set theory, introduced by [21] is competent to take care of such kind of fuzziness.Since a fuzzy differential equation describes a fuzzy control system with some initial conditions (fuzzy or non-fuzzy).So, first, we study some results pertaining to fuzzy differential equations, see ( [10], [16], [11]), and references therein. Neutral functional differential equations emerge in various disciplines of applied mathematics and so these equations have become more important in a few decades.For more detail on neutral functional differential equations, refer [13], and the references therein.Different kinds of mathematical models in the study of population dynamics, biology, ecology, and epidemics can be represented as impulsive neutral differential equations.The theory of these equations has been examined by many authors ( [9], [14], [20]).The vehicle industry has closely examined and is still curious about, the vehicle suspension system since it is the component that physically isolates the vehicle body from the wheels of the car to move forward the ride stability, comfort, and street dealing with of vehicles. The issues which can be modeled in form of impulsive control systems experience sudden changes at certain focuses of time.Generally, impulses are not defined in a precise manner.So fuzzy impulsive condition may be better than to simple impulsive condition.For more details on fuzzy and non-fuzzy impulsive differential equations, we refer to see ( [5], [18]) and references therein.[19] studied the periodic boundary value problems for second-order impulsive integrodifferential equations.[7] studied the controllability for the following impulsive fuzzy neutral functional integrodifferential equations using Banach fixed point theorem. 
d dt x t g t x Ax t f t x q t s x ds u t t J The controllability of impulsive second-order semilinear fuzzy integrodifferential control systems with nonlocal initial conditions has been studied by [17].Controllability of second-order impulsive neutral integrodifferential systems with an infinite delay has been studied by [3]. Recently, [1] studied the controllability results of fuzzy solutions for the following first-order nonlocal impulsive neutral functional differential equation using the Banach fixed point theorem 3) [8] studied the existence of fuzzy solutions for nonlocal impulsive neutral functional differential equations.In this paper, we attempt to establish controllability results for a class of fuzzy control systems governed by a fuzzy differential equation of second order coupled with nonlocal and impulsive conditions using the α-cut technique.In fact, nonlocal conditions are more viable for depicting the physical measurement rather than classical conditions (see for instance ( [12], [4]), and references therein).We have discussed the controllability of fuzzy solutions for the following second-order non-local functional differential equations with an impulse which is an extension to the work done in [15].Here both nonlocal as well as impulsive conditions are considered fuzzy. where A B J E n , : → is the fuzzy coefficient, E n is the set of all upper semi-continuous, convex, normal fuzzy numbers with bounded α levels. The functions We define c c c o c : : M f o such that for any r > 0, (0) ϕ is bounded and measurable function on [ ,0] −r and , where B h is endowed with the norm The paper is organized as follows: Section 2 summarizes the fundamental heuristics.The controllability results of the fuzzy solutions to non-local second-order neutral functional differential equations with impulse are proved using the Banach fixed point theorem in Section 3.An example has been provided to support the theory in Section 4. Section 5 contains an application with a graphical representation of the solution and finally, the conclusion is given in Section 6. Definition: Fuzzy Set A fuzzy set A X zM is characterized by its membership function A X : [0,1] → and A x ( ) is interpreted as the degree of membership of element x in fuzzy set A for each x X ∈ . Definition Let CC n ( ) R denote the family of all nonempty, compact, and convex subsets of R n .Define addition and scalar multiplication in CC n ( ) R by such that satisfies a s below : 1. w is normal, that is, there exists an R is a complete and separable metric space [7]. Definition The complete metric d ∞ on E n is defined by ∞ is a complete metric space [22]. Definition The supremum metric H 1 on C J E n ( , ) is defined by Hence, ( ( , ), ) 1 C J E H n is a complete metric space [24]. Definition The derivative ′ x t ( ) of a fuzzy process x E n ∈ is defined by provided that the equation defines a fuzzy set c x t E n ( ) [24]. Definition The fuzzy integral provided that the Lebesgue integrals on the right-hand side exist [24]. Definition A Definition A strongly measurable and integrably bounded map is strongly measurable and integrably bounded, then f is integrable [23]. Definition A system is said to be controllable in E n , if there exists an admissible control function u(t) using which it is possible to steer a system from any arbitrary state to desired final state. Definition The α-cut or α-level set of A is denoted by A α or [A] α and is defined as Assumptions Assume the following hypothesis. H1. [16] H2. [1] The H3. 
[1] If g is continuous and there exists constants G k p k , =1,2, , , such that ) H4. [16] There exists a non-negative d k and d k′ such that where, where, for all x y E n , ∈ and t J ∈ . H5. [1] The nonlinear function f J E E n n : u o is continuous and there exists a constant d 2 > 0 , satisfying the global Lipschitz condition, such that satisfies the following conditions: • For each positive number l ∈ », there exists a positive function w(l) dependent on l such that and lim inf w here sup s • f is completely continuous. Definition [7] The nonlocal problem (1.4) is said to be controllable on the interval J if there exists a control u(t), such that the fuzzy solution x(t) of (3.1) is controllable and satisfies Before proving the controllability of system (1.4), we define the fuzzy mapping where P( ) R is the set of all closed compact control functions in R and Γ u is the closure of support u.In [2], the support Γ u of a fuzzy number u is defined as a special case of the level set by Γ u = { : ( ) > 0} x x u µ . Then, there exists W j l r , are bijective mappings.Now, the α-level set of u(s) is ( Substituting this in equation (3.1), we get an α -level set of x(T) as Hence, the fuzzy solution x(t) for equation (3.1) satisfies [ ( )] = [ ] . 1 x T x α α Now for each x(t) and t J ∈ , define ( )[ (0, )] ( , ) ( where ( ) 1 W − satisfies the previous statements.Observe Φ( ( )) x t = [ ] 1 x , which represents that the control u(t) steers the system (3.1) from the arbitrary stage to x 1 in time T, given that there must exist a fixed point of the nonlinear operator Φ. Similarly, ³ g y y y S T y h h T y AT s The controllability of fuzzy solutions for the neutral impulsive functional differential equation with nonlocal conditions is discussed in the following theorem. Proof: For x y , c : ³ ³ S T y h h T y A C T s h s y ds S T s T T s T C t g y y y S t y h H p d ] , S t t I y t S t t I y t ] S T s v s ds CT t I y t ST t I y t t s ds S t s W x C T g y y ], [( ( )) ( )) ( ( ), ( )) (( ), ( )) T y A T s h s y ds S T s v s ds CT t I . By condition (H6), Φ is a contraction mapping.Using the Banach fixed point theorem, equation (3.3) has a unique fixed point x c : .Hence the System (1.4) is controllable on J. Similarly, we proceed for ′ x , 1 Substituting this in equation (3.1), we get an α -level set of x′(T) as Hence, the fuzzy solution x(t) for equation ( where ( ) 1 W − satisfies the Equation (3.5).Observe )( ( )) [ ] x , which represents that the control u(t) steers the system (3.1) from the arbitrary stage to x 1 in time T, given that there must exist a fixed point of the nonlinear operator Φ. Similarly, For c c c x y , , : ) Proceeding in the similar fashion as in x t ( ), we get, )] ( , ). Example In this section, we apply the results proved in the previous section to study the controllability of the following partial differential equation: The α-level set of number 0 is given by, And the α-level set of number 2 is given by, satisfies the inequality which is given in condition (H5).Let the target state be 2. Now, from the definition of fuzzy solution Then the α-level set of x T ( ) is given by, x T x α α .Thus all the conditions stated in Theorem 3.1 are satisfied.So the system (4.1) is controllable on J. Application Consider a coil spring suspended from the ceiling with 8-lb weight placed upon the lower end of the spring.Stretching From the graph, we conclude that as x 1 increases x 2 is decreasing function.Hence, the solution is a strong solution. 
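The two ingredients of the argument above — representing a fuzzy state by the endpoints of its α-cuts and obtaining the controlled solution as the unique fixed point of the contraction Φ via the Banach fixed point theorem — can be illustrated with a toy computation. The triangular fuzzy number, the scalar contraction and all numbers below are our own illustrative assumptions and have nothing to do with the actual operator Φ of Theorem 3.1.

```python
# Toy sketch: (i) the fuzzy state is represented by the endpoints of its
# alpha-cuts, and (ii) a contraction acting on those endpoints has a unique
# fixed point, obtained by simple iteration (Banach fixed point theorem).
def alpha_cut(a, b, c, alpha):
    """Alpha-cut [lo, hi] of the triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def fixed_point(phi, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if max(abs(x_new[0] - x[0]), abs(x_new[1] - x[1])) < tol:
            return x_new
        x = x_new
    return x

# A contraction on the endpoint pair (Lipschitz constant 0.5 < 1).
phi = lambda x: (0.5 * x[0] + 1.0, 0.5 * x[1] + 2.0)

for alpha in (0.0, 0.5, 1.0):
    x0 = alpha_cut(1.0, 2.0, 4.0, alpha)       # fuzzy initial state "about 2"
    lo, hi = fixed_point(phi, x0)
    print(f"alpha = {alpha:.1f}: fixed point endpoints = [{lo:.4f}, {hi:.4f}]")
# Every alpha-level converges to the same interval [2, 4]: the endpoints
# inherit the crisp fixed points of the two scalar contractions.
```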
Conclusions In this paper, we have proved the controllability of the fuzzy solutions for the second-order impulsive neutral functional differential equation by applying the contraction mapping principle. Further, the controllability results can be extended to fuzzy inclusions. The numerical solution of the system is also useful for the study of real-life phenomena. For instance, we can consider a friction pendulum bearing specially designed for base isolators used in many heavy structures like bridges, buildings, and towers to reduce the impact of earthquakes. Using the above system, we can also find critical points where the structure becomes unstable or gets damaged. Figure 1: The coil spring suspended with the 8-lb mass.
2,833.6
2023-04-04T00:00:00.000
[ "Mathematics" ]
Why ASEAN is Cooperating in the Education Sector? Over the last few years ASEAN member states have begun collaborating more tightly in the tertiary education sector, which has led to a cooperation agreement with the European Union to help harmonize and lift the overall standard of tertiary education in the region. However, the broader question is why is that the case? Education is not considered a classical field of regional integration, and this chapter seeks to analyze various sources which include references from elitist circles, as well as the public sphere in order to identify the motivation for cooperation in the education sector through qualitative content analysis. The analysis is based on a theoretical framework, which incorporates both a neofunctionalist approach and a norm diffusion approach which show that the predominant factors behind this cooperation process are economic. Introduction "'Looking at all the challenges that our education system has faced, I don't think we're going anywhere soon if we don't take action right now,' Dr Van Chanpheng, deputy director general of higher education at the Ministry of Education, told University World News" (Keo, 2012). In January 2015 the Association of Southeast Asian Nations (ASEAN) and the European Union (EU) agreed on a cooperation in the field of tertiary education.It aims on sharing experiences of the European harmonization process in order to help propel tertiary education further in ASEAN (Delegation of the EU to Indonesia, Brunei Darussalam and ASEAN, 2015;ASEAN University Network, 2015).Besides this recently stated cooperation, various steps towards integration in the education sector have already been taken: the establishment of the ASEAN University Network, ASEAN Education Ministers Meeting on a regular basis since 2006, the establishment of a quality assurance mechanism (AUN-QU) or the AUN-ASEAN Credit Transfer System, to name only a few (ASEAN Work Plan on Education, 2013).This development, combined with the fact that even cooperation with the EU is pursued, allows for the assumption that a shared interest for further and deeper collaboration in the education sector is present.This appears especially interesting when taking into account that education policy is not a classical field of regional cooperation. Additionally, not much work has been done looking into this rather new phenomenon. Hence, this chapter aims on investigating these circumstances and eventually pointing out key motivations and justifications for cooperation in the field of education among ASEAN members. 
Therefore, a sample of documents from different sources will be analyzed along three hypotheses, carved out using both deductive and inductive approaches in order to find motivations and justifications for regional cooperation among ASEAN members.In doing so, H1: "Education Integration initiatives are spillovers from the economic sector" can be confirmed, whereas H2a: "Integration in the education sector is a result of political learning" and H2b: "Integration in the education sector is a result of appropriate acting" cannot be verified.The findings allow to confirm the central research question "Does economic integration create functional needs for education Why ASEAN is Cooperating in the Education Sector?integration?".60% of the text passages which were allocated to the underlying category system fit into categories which support hypothesis H1 and subsequently confirm the central research question.That is, I argue that ASEAN member states strive to cooperate in the sector of education for mainly economic reasons in moving closer to meeting the central requirements of a single market; "In particular, the Leaders agreed to hasten the establishment of the ASEAN Economic Community by 2015 and to transform ASEAN into a region with free movement of goods, services, investment, skilled labour, and freer flow of capital" (ASEAN, 2008, p.5).The motivation for heightened cooperation in the region serves more to contribute toward a freer flow of skilled labor to foster economic performance than anything else.It isas James Carville, campaign strategist of Bill Clinton's successful presidential campaign in 1992 famously put it -about "the economy, stupid." Historical Review When looking at the early stages of regional integration in Southeast Asia, which was founded in 1967 following Indonesia's konfrontasi against Malaysia and was primarily meant "to alleviate intra-ASEAN tensions, to reduce the regional influence of external actors, and to promote the socio-economic development of its members" (Narine, 2008, p. 6), integration in the education sector cannot be considered a logical or even necessary development.Now, ASEAN consists of ten member states and aims on bringing peace and stability to the region (Narine, 2008, p. 6).Additionally, economic growth, social welfare enhancement and tighter collaboration in sectors of shared interest are aspired (ASEAN Secretariat, 1967).Likely due to the great heterogeneity of the member states, the integration process has not always been smooth and linear. In order to face and eventually overcome this complexity, the Treaty of Amity and Cooperation (1976) was implemented (ASEAN Secretariat, 1976).It describes a certain way of behavior and communication when interacting with each other.It centers around the strict compliance with the norms of non-interference with domestic politics of other member states, informal conflict management and respect for territorial integrity of all member states, the abstinence of direct confrontation with other member states and also the pursuit of unity and harmony (Busse, 1999, p. 39;Narine, 2008, p. 8;Rother, 2004, p. 29).During the course of the Asian Financial Crisis 1997/98, however, the so far developed cooperation system, which was founded on these norms, turned out to not be efficient enough.As a response to this obvious shortage of room for maneuver (Narine, 2008, p. 18;Rüland, 2012, p. 
251) 2003).With the introduction of ASEAN Community 2015 regional cooperation in the economic sector was significantly broadened and was expanded to the social-cultural sector.Originally the start of the ASEAN Community 2015 was set to January 1st, 2015, but was then postponed in 2012 to the end of 2015 (Ashayagachata, 2012).As the AEC -the framework for economic integration measurements -is included in the ASEAN Community, it was subsequently postponed as well.That contributed to the rising critical voices towards the overall well-being of the new common market, which had been ever present from the early stages of planning until the finalization of the implementation process (Frenquest, 2015).Above all, the member states' disparate education situation and the subsequent performance level of the AEC were subject to criticism. Theoretical Framework The connection of economic integration as one thematic complex and education integration as another, appears to be a valid starting point for the investigation on justifications and motivations for joint efforts to further integrate in the field of education.This is emphasized by the fact that this discussion is not only present in the public sphere but also in the scientific community; Chia et al., (2009, p. 53) state in their edited analysis of the AEC the necessity of free movement of skilled workers and the therefore needed regional education standards.Still, this is a very new and ongoing phenomenon and subsequently not much research on the matter has yet been produced. 2That is, no commonly accepted baseline for a theoretical approach can be identified and therefore has to be developed independently.For this reason, a deductive approach based on established theories of regional integration, and ASEAN research, respectively, is chosen.Here, Neofunctionalism (NF) as a classical theory of regional integration is suggested.Deducted from this theoretical approach, the central research question is derived: "Does economic integration create functional needs for education integration?"Taking into account that different variables might also be in play, the results of this analysis will additionally be contemplated through the perspective of norm diffusion and later contrasted with the neofunctionalist perspective. Neofunctionalism Deriving from idealist thinking and inspired from the belief that state's aggressive egoistic actions can be overcome, Functionalism was developed (Conzelmann, 2006, p. 157).Looking at the shipwrecking of the League of Nations, Functionalism postulates cooperation "from below", which is to decrease relevance of military power and enhances the possibility for peaceful relations at the same time. This means cross-border cooperation mostly in the low politics sector -in contrast to elite-driven cooperation "from above" (Conzelmann, 2006, p. 158).The appeal to cooperate comes from interdependency, meaning reciprocal dependencies between nation states (Keohane & Nye, 1977).Military power then loses relevance in the light of interdependency and the "long shadow of the future", and additional trust in cooperation can be achieved through iteration.Therewith the game theory cooperation dilemma can be overcome and a way to strive for absolute gains can be paved (Schimmelfennig, 2008, p. 95).This cooperation then enables further cooperation on other issues (ramification).That is, the institutional design follows functional appeals; "form follows function" (Conzelmann, 2006, p. 158). 
Neofunctionalism, an evolution of Functionalism, shifts its focus from "formulating recommendation for actions" to "intersubjectively comprehensible analysis of real world integration processes" (Conzelmann, 2006, p. 163).Integration is defined as a process, which leads to a certain feeling of community, common institutions and actions, as well as a long term expectation of peaceful change for a group of individuals within a specific territory (Deutsch et al., 1957, p. 5;Dougherty & Pfaltzgraff, 2001, p. 510).Neofunctionalism, most notably coined by Ernst Haas, asks how economic cooperation could turn into political cooperation and is more a "social scientific analysis" compared to Functionalism (Conzelmann, 2006, p. 163). Neofunctionalism is largely developed around the empirical example of the European integration project.That becomes obvious through the emphasis of development of a "political community" and supranational organs (Conzelmann, 2006, p. 164).Central to this theoretical strand is the "spillover" concept (Haas, 1958, p. 238;Lindberg, 1963, p. 10) as a dynamic variable.The idea here is, that technical cooperation in one sector spills over to neighboring sectors, as this is likely to reduce costs (Conzelmann, 2006, p. 166).That is, political integration follows economic cooperation immediately and subsequently Haas points to the "expansive logic of sectoral integration" (Haas, 1958, p. 311).This sectoral integration eventually extends to higher political integration (Conzelmann, 2006, p. 166;Haas, 1958, p. 292;Rosamond, 2005, p. 244). Furthermore, the distinction between integration as a status quo and integration as a process is important.Haas describes integration as a process and subsequently incorporates the dynamic spillover.To sum up: "Without inclusion of neighboring sectors, expected welfare gains through cross-border cooperation in the original sector cannot be achieved permanently or completely" (Conzelmann, 2006, p. 167). It is also to be noted that Neofunctionalism also takes social groups and supranational bureaucracies into account (Dougherty & Pfaltzgraff, 2001, p. 511).In Haas' eyes this automatically leads to a steady integration process (Haas, 1961, p. 268).This automatism, however, was subject to major criticism and was later taken back (Conzelmann, 2006;Lindberg & Scheingold, 1970;Schmitter, 2004).Also, the empirical focus on the European integration project has been criticized (Mattli, 2005), as well as overvaluing functional needs and the neglect of national interests.In sum, Neofunctionalism has been highly criticized for being too "optimistic" towards linear integration processes (Conzelmann, 2006;Lindberg & Scheingold, 1970;Schmitter, 2004). Neofunctionalism and ASEAN When looking at the ASEAN area through a neofunctionalist perspective, a few shortages and limitations can be revealed with regard to its application.Built around the European integration project, NF perceives democratic pluralism during regional decision-making processes (Kim, 2014, p. 379).However, most of ASEAN member states are not democracies.Additionally, no member state is considered "free" according to the Freedom House Index.Six states are listed as "not free" and four as "partly free". 3Another limitation is NF's emphasis on the role of civil society groups which pressure the government.In Europe those are mainly economic interest groups (Kim, 2014, p. 
383).Those type of groups, however, do not play a significant role in ASEAN's decision making process.The integration process in ASEAN is much more an elite-driven project, which is only hardly under institutional influence of economic interest groups (Ravenhill, 2008, p. 483).Furthermore, that applicability of the concept "form follows function" needs to be questioned here.Kim concludes that very often integration steps in ASEAN follow the very opposite logic.Kim argues that decisive steps are taken during meetings of state leaders in order to support their own interests and not because economical appeals in one sector made deeper cooperation necessary in another sector (2014, p. 381). Even in light of these limitation NF still holds a certain value when analyzing integration processes in ASEAN.NF highlights the importance of socialization among the elites (Dougherty & Pfaltzgraff, 2001, p. 516;Kim, 201, p. 378).Not only the European Union but also ASEAN can be described as an elite project (Kim, 2014, p. 378).This socialization occurs during common decision-making procedures in ASEAN, which is largely driven by expansion of regional cooperation with respect to sovereignty, strong national interests and the explicit refusal of supranational bodies. At this point NF is able to explain how and under which circumstances the integration process is developing using its argument of elite socialization (Kim, 2014, p. 378).It is also worth taking a look at the heart of neofunctionalist thinking: the spillover. Generally, NF concentrates on political integration that is derived from economic cooperation.Although this logic might not be fully applicable to every step of ASEAN's integration process, the idea of the spillover should not be overlooked completelyespecially with regard to the central research question and the relationship between economic entanglements and education integration.In summary, it can be said that NF, which was clearly built around the European integration project, has its limitations when applying it to the case of ASEAN. Nevertheless, NF contains several components -first and foremost the functional logic of the spillover -which justify an analysis of aspects of ASEAN's integration efforts through this perspective.However, it is acknowledged that also different, nonfunctional variables might be essential to the integration efforts in the education sector.In order to take this possibility into account and to strengthen the following discussion, norm-diffusion processes will also be considered. Norm Diffusion Another potential problem of Neofunctionalism when connected to qualitative content analysis could be the so-called "rhetoric-action-gap" (Jetschke & Rüland, 2009).It describes the discrepancy between speech and resulting action. 4A second theoretical approach, namely norm diffusion research, will be introduced to expand the theoretical frame work in order to tackle this potential problem.Especially third generation norm diffusion approaches operate on the rhetoric-level and subsequently present a good opportunity to review hypotheses deriving from neofunctionalist argumentative logic from a reflexivist's perspective.Therewith it contributes to a more profound answer to the central research question. 
The empirical starting point is the observation of processes of adaption, imitation and reproduction of norms within the international system.It was introduced to the field of international relations through the research on Europeanization at the beginning of the 21st century (Börzel & Risse, 2000;Radaelli, 2000).Within norm diffusion research, three generations can be identified (Archaya, 2009).Influential concepts for the first generation are the Life Cycle Model (Finnemore & Sikking, 1998), the Boomerang Model (Keck & Sikking, 1998), as well as the Spiral Model (Risse, Ropp & Sikking, 1999).These approaches have later been criticized for their Western-based perspective and the passiveness of the norm recipients (Acharya, 2009, p. 14).The second generation takes local, norm-receiving structures into account in order to deduce the hypothesis of cultural fit (Acharya, 2009).This generation mostly focuses on Europeanization (Börzel & Risse, 2000, 2009;Radaelli, 2000). The third generation decouples itself from the concentration on Western-based norm agents and reacts to the lasting criticism regarding the focus on the West and the passive local actors.The introduction of the premises that local actors react differently to incoming norms enhances the analytical framework.Thus, norm recipients are treated as active actors and their room for maneuver is put in the spotlight (Acharya, 2009, p. 14).Additionally, a new form of flexibility is created, which allows for more detailed analyzation of norm diffusion processes between the two poles of outright rejection and full transformation.Four different types of norm diffusion processes can be observed (Rüland, 2012).According to that, norms can firstly be fully rejected (see "Asian Value Debate" Rüland, 2012, p. 250).Norms can secondly be adopted rhetorically (isomorphic adaption).That means a formal adoption of institutional or organizational structures or terminology while local identities remain unchanged. These strategies usually serve to secure legitimacy or pacification of normative pressure deriving from the international community (Di Maggio & Powell, 1982;Rüland, 2012).Thirdly, external norms can combine themselves with already existing norms and create a fusion (localization).The prevailing set of rules, which are deeply rooted in the society -the so called cognitive prior -are not meant to be substituted completely in this case.Through the participation of local norm entrepreneurs in the norm diffusion process and the combination of local and external norms, local identities can partially change (Archaya, 2009).Fourthly, norms can be fully adopted and internalized (Radaelli, 2000;Rüland, 2012). Not only has the degree of identical change had to be considered but also the activator for such a change.Normative change can arise step by step, in a discursive interplay of affected actors, or as a reaction to an external shock (Rüland, 2012, p. 250).Furthermore, two types of diffusion mechanisms can be observed.On the one hand diffusion through coercion; for example by a hegemonic power or an international organization.On the other hand, voluntary diffusion can be contemplated at this point.This voluntary diffusion can utilize different mechanisms, depending on the theoretical perspective.Rational-choice Institutionalism follows a rationalist approach and the sociological Institutionalism follows a reflexive, or cognitive approach, respectively. 
The rational approach follows the logic of rational acting, meaning a cost-benefit calculation as a reaction to either positive or negative appeals through diffusion.A positive appeal could be the prospect of financial or technical aid.Negative would be potential sanctions (Börzel & Risse, 2009, p. 10).Diffusion is then related to political learning (Braun & Gilardi, 2006, p. 306) Learning can then be a result of either functional pressure or competition; institutional arrangements which make others better are adopted. Reflexive, or cognitive approaches follow the logic of appropriate acting.Actors aim to meet social standards and principles.Hence, norms do not diffuse as result of competition but because an external norm satisfies standards of appropriateness. Essential for local actors, meaning norm recipients, is, to secure legitimacy and to ensure socialization within the international system here.These two approaches are usually separated.According to Jetschke & Lenz (2011), however, legitimacy can be generated through learning and the search for legitimacy and appropriateness can, in turn, contain learning (bounded learning). Norm Diffusion and ASEAN It has already come to light that initial approaches of norm diffusion offer explanations for integration processes in ASEAN.After the Asian Financial Crisis 1997/98, forms of rational learning could be observed.In that case, the deeper cooperation in the economic sector facilitated through the founding of the ASEAN Economic Community (AEC) can be seen as a result of functional needs for institutional adjustments modeled after the European common market.This alteration of economic cooperation is an obvious product of rational learning (Jetschke & Murray, 2012).Also, processes of appropriate acting can be found with regard to central parameters which are decisive for the international acknowledgement of states.That is, an at least rhetorical shift towards central norms such as democratizing, good governance and the recognition of human rights can be observed in the ASEAN Charter ("people-oriented regionalism", Rüland, 2012, p. 238) -which was drafted in ASEAN Charter 2007 (ASEAN Secretariat, 2007).This shift cannot be fully explained with functional needs to adopt institutional arrangements which are shaped after EUinstitutions and ideas (Jetschke & Murray, 2012, p. 181). Research Design Based on neofunctionalist assumptions the first hypothesis is derived: H1: "Education Integration initiatives are spillovers from the economic sector".In that sense, economic integration efforts would lead to functional appeals for political integration.This question -and also the general suitability of NF in the context of ASEAN -will be analyzed on the basis of the evaluated material.In order to test H1 and also to offer an additional theoretical framework to the neofunctional logic, two hypotheses -one following rationalist thinking and one following reflexive thinkingwill be deduced from norm diffusion research. H2a: "Integration in the education sector is a result of political learning." H2b: "Integration in the education sector is a result of appropriate acting." 
The examination of the material and the following discussion about these two hypotheses will be limited to references of the European integration project.The EU is presumed to be the state-of-the-art integration project and is subsequently very well suited as a reference point for processes of learning and diffusion.As it has already been mentioned above, this chapter aims on pointing out justifications and motivations for the ongoing integration process in the education sector.The central research question on functional needs from the economic sector shall then be answered within these parameters.A prerequisite for this undertaking is the evaluation and structuring of articulated motivations and justifications available in the material.Therefore, a content-structured content analysis is proposed (see Kuckartz, 2012;Mayring, 2010;Schreier, 2012).The main aim of this method is the "analysis of material, which derives from any form of communication" (Mayring, 2010, p. 11).The material is analyzed along a theory-based question or problem; "the results are interpreted based on the underlying theoretical framework and also each analytical step is guided by theoretical considerations" (Mayring, 2010, p. 13). Additionally, the material will be processed according to Mayring's (2010, p.13) frequency analysis.That means the counting of the previously structured elements in order to compare them to each other and generate deeper insight into the material. The connection of content-structured content analysis and frequency analysis enables the development of not only a distinct and comprehensible overview of the material in comparison to a strictly qualitative approach but at the same time also a deeper understanding of the material, as opposed to a purely quantitative approach such as simply counting words.The concrete modus operandi is oriented after Schreier's suggestions for a content-structured content analysis (Schreier, 2014, p. 24), as well as Mayring's approach to a frequency analysis (Mayring, 2010, p. 15).The central research question is derived from the prevailing context of ASEAN's integration efforts, as well as from Neofunctionalism; does integration in the economic sector lead to functional appeals for integration in the education sector? In order to find answers to this question, material from three thematic clusters will be analyzed using a content-structured content analysis in combination with a frequency analysis.The first cluster contains documents coming from the respective national states and is therefore referred to as "national".The second cluster consists of documents coming from sources directly related to ASEAN and is subsequently referred to as "regional".The third cluster persists of press articles from newspapers which operate in the ASEAN area and is therefore referred to as "press".The analyzed material covers the time span from 2003 -the ratification of Bali Concord II, which confirmed the establishment of the ASEAN Community 2015 and subsequently the AEC -to 2015.Deductive reasoning, however, leaves us with limited options regarding the sample.Its size is thus limited.The analyzed units will not be restricted or shortened artificially.Thus, every message and text message from the material which are compatible with the categories can be captured. 
In light of the research question's strong theoretical grounding, it appears fruitful not only to develop inductive categories along the material, but also to derive deductive categories from the theoretical framework. Two categories are established based on the neofunctionalist spillover logic and along central targets anchored in the AEC. Additionally, two categories following the logic of norm diffusion are presented to counter the first two categories. Five further categories are developed inductively according to the pre-analyzed material. Analysis The documents under consideration will be closely examined in order to find justifications and motivations for deeper integration in the education sector using the category system. The category system can additionally be summarized into three umbrella categories (UC) under which the nine categories can be subsumed after two rounds of pre-coding and reviewing the material.
1. Umbrella Category: Enhancing the economic performance by integrating in the education sector. Subcategories: K1: Labor Migration, K2: Development Gap, K3: Human Capital, K6: Knowledge-based Society, K7: Economic Performance.
2. Umbrella Category: Regional Awareness. Subcategories: K4: Cultural Awareness, K5: Regional Identity.
3. Umbrella Category: References to political learning or appropriate acting referring to the European Union. Subcategories: K8: Political Learning, K9: Appropriate Acting.
The analyzed material is constituted by sources from the above-mentioned clusters. The national and regional clusters mirror arguments and motivations from the elites, whereas the press cluster adds arguments and dispositions from the public sphere to ensure an acceptable degree of representativeness. The following chapter shows several considered text passages to illustrate the analysis. The National Cluster The state of source material in this cluster is not ideal. Education is in the process of conducting education reform to develop its education quality to the ASEAN standards. Cultural Awareness + Regional Identity The categories K1 and K2 are not mentioned in the analyzed material. However, category K3 - part of umbrella category 1 related to economic performance - is the category which contains the most mentions in the material. 44% of the mentions are allotted to UC 2, which covers regional awareness. Messages concerning UC 3 cannot be found in the material. Figure 1. Frequency Distribution National Cluster The fact that 44% of the allotted text passages correspond with UC2, which is not supported by any hypothesis presented here, should not be overlooked. However, the evidence here also shows that most text passages analyzed are assigned to UC1 (55%), which supports neofunctionalist reasoning and H1. Norm diffusion arguments do not seem to be in play, as no text passage corresponds with categories K8 and K9, which were deduced from norm diffusion logic. Support for hypotheses H2a and H2b cannot be observed in this segment.
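For readers who want to reproduce the mechanics of the combined content-structured content analysis and frequency analysis, the following sketch tallies coded passages per category and aggregates them into the three umbrella categories defined above. The coded passages are invented toy data, not the study's corpus.

```python
from collections import Counter

# Hypothetical sketch of the frequency analysis: coded passages are tallied
# per category and aggregated into the three umbrella categories.
umbrella = {
    "UC1 (economic performance)": ["K1", "K2", "K3", "K6", "K7"],
    "UC2 (regional awareness)":   ["K4", "K5"],
    "UC3 (EU references)":        ["K8", "K9"],
}
coded_passages = ["K1", "K3", "K3", "K7", "K5", "K4", "K7", "K1", "K8", "K3"]  # toy data

counts = Counter(coded_passages)
total = sum(counts.values())
for uc, cats in umbrella.items():
    share = 100 * sum(counts[c] for c in cats) / total
    print(f"{uc}: {share:.0f}%")
```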
The Regional Cluster

In order to be applicable to this cluster, the sources must have a direct link to the regional organization. It is worth noting that a high percentage of messages is assigned to UC2 (K4: 18% and K5: 18%). Categories K1, K3 and K7 - which relate to economic performance - are also mentioned very frequently. Categories K8 and K9, which are deduced from norm diffusion research, do not seem to play an important role in the regional cluster.

The Press Cluster

In general, the same tendencies as in the two previous segments can be observed in the press cluster as well. The frequent mentioning of K5, a member category of UC2, is not explainable with the hypotheses presented here. Although mentioned in this cluster, K8 and K9 do not seem to be significant categories. Moreover, the categories subsumed under UC1 are mentioned the most, a fact which supports H1.

Across the whole sample, the most passages can be allotted to category K5: Regional Identity - 19% of the analyzed sample fits into this category. K7: Economic Performance and K1: Labor Migration follow second and third, respectively. K2: Development Gap (5%), K3: Human Capital (16%) and K6: Knowledge-based Society (3%) complete UC1. K4: Cultural Awareness, K5's counterpart, contains 15% of the analyzed passages. Not a single passage fits into K9: Appropriate Acting, and only 6% of the passages go into its co-category under UC3 (K8: Political Learning). (Exact distribution in percent: K1: 17.14 / K2: 5.71 / K3: 16.19 / K4: 15.24 / K5: 19.05 / K6: 2.86 / K7: 18.10 / K8: 5.71 / K9: 0.0.)

The results translate into the following UC percentages: 60% of the passages fall upon UC1, which represents economic performance; 34% are allotted to UC2 (Regional Awareness); and UC3, standing for references to political learning and appropriate acting, contains 6% of the sample. UC1 is the most mentioned category group in each of the three clusters. Despite this clear distribution, it needs to be acknowledged that K4 and K5 are strongly represented in every cluster (except for the press cluster, where only 8% of the passages fall upon K4).

Discussion

Looking at the analyzed material and the distribution of the assigned passages, it becomes apparent that 60% of the justifications and motivations for education integration are connected to the migration of skilled labor, the narrowing of the development gap between the ASEAN-6 and the CLMV states, the promotion of a knowledge-based society, and the general enhancement of economic performance. Hence, it can be deduced that functional appeals from the economic sector play a vital role in justifying education integration. It becomes clear that arrangements need to be made in order to meet the economic targets expressed in the AEC blueprint (see ASEAN Economic Community Blueprint in ASEAN Secretariat, 2008). The ASEAN State of Education Report, for example, states several measures that serve economic integration: "To strengthen the economic pillar, it was agreed that there should be: (i) a national skills framework in each of the ASEAN Member States, as an incremental step towards the establishment of an ASEAN skills recognition framework; (ii) conditions supportive of greater cross-border mobility for students and skilled workers; (iii) an ASEAN competency-based occupational standard; and (iv) a common set of competency standards especially for technical and vocational education and training (TVET) as a basis for benchmarking with a view to promoting mutual recognition" (ASEAN State of Education Report, in ASEAN Secretariat, 2013, p. 14).
This shows that education integration also plays a role in achieving economic targets. This process can be called spillover. Therefore, hypothesis H1, "Education integration initiatives are spillovers from the economic sector", can be confirmed. This argumentation is additionally supported by the fact that in all three clusters the most passages fall upon umbrella category 1. Moreover, the central research question can be explained and answered with the help of Neofunctionalism's spillover component: functional appeals for integration in the education sector are derived from economic integration measures. This assumption is strengthened when taking hypotheses H2a, "Integration in the education sector is a result of political learning", and H2b, "Integration in the education sector is a result of appropriate acting", into account. Neither H2a nor H2b can be confirmed. Category K8: Political Learning accounts for 6% of the mentions found in the material. K9: Appropriate Acting cannot be found in the material at all. If spillover is accepted as the applicable concept for this issue, this point serves well as a starting point for a reflection on the eligibility of Neofunctionalism for this part of the integration process in ASEAN, as well as for answering the central research question.

Neofunctionalism postulates that integration depends on cooperation initiated by a high degree of interdependency in one sector, which then spills over to others, rather than on specific national policies. This aspect cannot be detected easily in ASEAN's integration process. Decisive steps are usually decided in official meetings of the heads of state in order to boost national interests (Kim, 2014, p. 382). At this point, however, it can also be argued that the tables have turned with the implementation of the AEC. National economic interest can absolutely be driven by functional appeals, especially with regard to the AEC. Neofunctionalism generally asks how economic integration turns into political integration, or, as Haas puts it, "Political integration follows economic integration immediately" (Conzelmann, 2006, p. 163; Haas, 1968, p. 311).

This development is accompanied by the dynamic spillover component. When looking at the entire course of the integration process in ASEAN since its foundation in 1967, the clear order "form follows function" does not hold true. Turning to the part of the process considered here, however, at least the "expansive logic of sectoral integration" (Conzelmann, 2006, p. 166) can be ascertained when taking the AEC, or economic integration in general, as a motivation for education integration.

Further arguments for the application of Neofunctionalism in the context of the ASEAN integration process can be observed, for example the overall focus on political elites (Kim, 2014, p. 378). Here, however, the focus lies on the eligibility of Neofunctionalism as a theoretical framework for the question of justifications and motivations for deeper education integration. The principal focus on the starting point of cooperation in more technical sectors is generally in accordance with Neofunctionalism. The lacking desire for supranational solutions can be explained by the degree of the elites' socialization, which is decisive for further vertical development (Dougherty & Pfaltzgraff, 2001, p. 516; Kim, 2014, p.
388). The strongly functional justification for further cross-sectoral cooperation from elite circles is also in accordance with the neofunctionalist argumentative logic (Dougherty & Pfaltzgraff, 2001, p. 513). The analysis of the material supports this assumption: most mentions fall upon UC1, both in the national and the regional cluster, which represent the justifications and motivations of the political elites. H1, and with it the central research question, can thus be explained with the help of the spillover component.

As mentioned above, H2a and H2b cannot be confirmed. Consequently, norm diffusion approaches do not offer an explanation or further understanding of integration initiatives in the education sector. If anything, this only cements the confirmation of H1. Nevertheless, the frequent mentions of categories K4 and K5, summarized under UC2, cannot be ignored. K5 is the most mentioned category of all (19%) and ranks at least second in each cluster. The creation of a regional identity with respect to all cultures of ASEAN member states is a declared goal. The ASEAN Work Plan on Education even formulates this as the top priority: "Priority 1 - Promoting ASEAN Awareness: ASEAN aims to build the ASEAN identity by promoting awareness and common values at all levels of society and in the education sector" (ASEAN 5-Year Work Plan on Education, in ASEAN Secretariat, 2012, p. 17).

None of the considered theoretical approaches offers a plausible explanation here. Only if regional identity is understood as an act of socialization within the context of the EU could this result be seen as a sign of norm diffusion. However, the emphasis on cultural awareness can rather be related to central codes of conduct: the respect for territorial sovereignty and non-interference with the domestic issues of other member states. Furthermore, no clear references to the regional identity of the EU or its advantages, which could point to strategies of socialization or legitimation, can be detected in the material. The emphasis on respect for other states' cultures and a shared identity of the member states could also be seen as a low-cost alternative to actual, costly measures for the development of integration in the education sector.

Conclusion

This chapter has sought to detect justifications and motivations for integration in the education sector in the ASEAN area. This relatively new phenomenon in the course of the ASEAN integration process is an interesting case to study because no extensive scientific analysis has yet been written on the topic. Through the visible connection of the public sphere, facilitated through the press cluster as well as official ASEAN statements, a connection between education integration and economic integration can be observed. Therewith, the central research question on functional appeals for education integration deriving from economic integration could be answered.
Justifications and motivations for further education integration have been presented on the basis of a content-structured content analysis with a subsequent frequency analysis. Two major theoretical perspectives were presented to form a theoretical framework, from which three hypotheses were deduced. On the one hand stands Neofunctionalism, a classical theory of regional integration, which seeks to explain how cooperation spills over from one sector into another and propels regional integration further; H1, "Education integration initiatives are spillovers from the economic sector", was derived from this argumentative logic. On the other hand, a rational as well as a reflexive understanding of third-generation norm diffusion research was introduced as the countering approach to Neofunctionalism, from which hypotheses H2a, "Integration in the education sector is a result of political learning", and H2b, "Integration in the education sector is a result of appropriate acting", were deduced. By classifying 40 documents into a category system containing four deductive categories (K1, K2, K8, K9) and five additional inductive categories (K3, K4, K5, K6, K7), justifications and motivations for education integration could be presented in a structured way. Most passages were allotted to umbrella category 1, followed by UC2 and UC3. H1 has been confirmed on the basis of this categorization, supported by the fact that 60% of the passages were assigned to UC1. H2a and H2b, however, have to be negated. Thus, the central research question can be answered: functional appeals for integration in the education sector derive from integration in the economic sector. Additionally, it can be noted that Neofunctionalism offers valuable input for understanding the Southeast Asian integration process. We can then conclude that the ASEAN members' major interest lies in enhancing economic performance through strengthening the education sector, rather than in emphasizing the education sector itself.

Furthermore, the frequent mentions of K4 and K5 should not be disregarded, as none of the theoretical perspectives presented here offers a viable explanation for them. It would be interesting to further apply Neofunctionalism to different phases of the ASEAN integration process and to develop a more specific theoretical construct for the context of ASEAN. Moreover, it would be insightful to analyze domestic debates concerning the constitution of the political elite's interests before they are taken to the regional level, in order to determine in what way and to what extent they are driven by processes of diffusion, and subsequently to better understand the focus on cultural awareness and regional identity. For this, another research design is needed, one which builds on these findings and aims at understanding these processes better, especially with regard to further developments after the official implementation of the ASEAN Economic Community, as well as the progressing ASEAN-EU cooperation in the education sector.
[...] the development of the ASEAN Vision 2020 as well as the establishment of the ASEAN Community 2015 was announced. The ASEAN Community rests upon three central pillars: the Political-Security Community (APSC), the Socio-Cultural Community (ASCC) and the Economic Community (AEC) (see Declaration of ASEAN Concord II, in ASEAN Secretariat, 2003).

Various joint statements from the ASEAN Education Ministers Meeting (ASED; see ASEAN Secretariat, 2015), the Southeast Asian Ministers of Education Organization (SEAMEO), and the ASEAN 5-Year Work Plan on Education (2011-2015) are considered here, for example:

1. Statement of the 5th ASED Meeting: "The Ministers were pleased with the progress in AUN activities, including the projected implementation of the ASEAN Credit Transfer System (ACTS) in AUN Member Universities this year. The ACTS seeks to enhance and facilitate student mobility among AUN Member Universities, which is one of the targets to be achieved under the 'Free Flow of Skilled Labour' of the ASEAN Economic Community Blueprint."

2. ASEAN Secretariat: ASEAN 5-Year Work Plan on Education: "Promoting ASEAN Awareness: ASEAN aims to build the ASEAN identity by promoting awareness and common values at all levels of society and in the education sector."

Table 1. Categories along the Theoretical Framework

"Thai students can learn more about the cultures of other ASEAN countries. [...] The Lao Deputy Minister of Education said that Laos is in the initial stage of using IT to help in education and would like to learn from Thailand, so that they move together toward the ASEAN Community in the future. The Lao Ministry of Education is in the process of conducting education reform to develop its education quality to the ASEAN standards. [...] cultural understanding, generating knowledge and promoting networking, all of which had an impact on ASEAN's ability to be competitive globally."

2. The Government Public Relations Department (2014): Thailand Steps Up Educational Cooperation with ASEAN Partners: "Dean of the Faculty of Education, Chulalongkorn University, Associate Professor Bancha Chalapirom, said that, in the exchange program, Thai-language teachers will be sent to help develop Thai language skills in other ASEAN nations, especially neighboring countries. At the same time, he said, teachers of other ASEAN languages will be accepted to teach students at Chulalongkorn University. The exchange program will create a new environment in which Thailand can become familiar with ASEAN matters and
INDUCED MEASURES ON WALLMAN SPACES

Let X be an abstract set and ℒ a lattice of subsets of X. To each lattice-regular measure μ we associate two induced measures μ̂ and μ̃ on suitable lattices of the Wallman space I_R(ℒ), and another measure μ′ on the space I_σ(ℒ). We will investigate the reflection of smoothness properties of μ onto μ̂, μ̃ and μ′.

… [if for every] sequence {B_n} of sets of ℒ₂ with B_n ↓ ∅ there exists a sequence {A_n} of sets of ℒ₁ such that B_n ⊂ A_n and A_n ↓ ∅. If ℒ₁ ⊂ ℒ₂ and μ ∈ M(ℒ₂), then the restriction of μ to 𝒜(ℒ₁) will be denoted by ν = μ|𝒜(ℒ₁).

REMARK 2.1. We now list a few known facts found in [1] which will enable us to characterize some previously defined properties in a measure-theoretic fashion.
1. ℒ is disjunctive if and only if μ_x ∈ I_R(ℒ) for all x ∈ X.
2. ℒ is regular if and only if for any μ, ν ∈ I(ℒ) such that μ ≤ ν on ℒ we have S(μ) = S(ν).
3. ℒ is T2 if and only if S(μ) is ∅ or a singleton for any μ ∈ I(ℒ).
4. ℒ is compact if and only if S(μ) ≠ ∅ for any μ ∈ I_R(ℒ).

THEOREM 3.6. Let ℒ be a complement-generated and normal lattice of subsets of X. If μ is strongly σ-smooth on ℒ, then μ ∈ M_σ(ℒ).

Next, we generalize a result of Gardner [8]. We then give a restriction theorem which, although generally known, we prove for the reader's convenience.

We will briefly review the fundamental properties of the Wallman space associated with a regular lattice measure μ, then associate with μ two measures μ̂ and μ̃ on certain algebras in the Wallman space (see [3]). We then investigate how properties of μ reflect to those of μ̂ and μ̃, and conversely, and give a variety of applications of these results.

Let X be an abstract set and ℒ a disjunctive lattice of subsets of X such that ∅ and X are in ℒ. For any A in 𝒜(ℒ), define W(A) = {μ ∈ I_R(ℒ) : μ(A) = 1}. If A, B ∈ 𝒜(ℒ), then:
1) W(A ∪ B) = W(A) ∪ W(B);
4) W(A) ⊂ W(B) if and only if A ⊂ B;
5) W(A) = W(B) if and only if A = B;
6) W[𝒜(ℒ)] = 𝒜[W(ℒ)].

Let W(ℒ) = {W(L) : L ∈ ℒ}. Then W(ℒ) is a compact lattice of subsets of I_R(ℒ), and I_R(ℒ) with τW(ℒ) as the topology of closed sets is a compact T1 space (the Wallman space) associated with the pair X, ℒ. It is a T2 space if and only if ℒ is normal.

One is now concerned with how further properties of μ reflect over to μ̂ and μ̃ respectively. The following are known to be true (see [1]) and we list them for the reader's convenience.

THEOREM 4.1. Let ℒ be a separating and disjunctive lattice, and let μ ∈ M_R(ℒ). Then the following statements are equivalent: … μ̃(K) = 0 for all compact K ⊂ I_R(ℒ) − X …

THEOREM 4.3. Let ℒ be a separating and disjunctive lattice. If μ ∈ M_R(ℒ), then the following statements are equivalent: … if {L_n} ⊂ ℒ, W(L_n) ↓, and W(L_n) ⊂ I_R(ℒ) − X, then μ̃(W(L_n)) → 0.

THEOREM 4.4. If ℒ is a separating and disjunctive lattice of subsets of X, then μ ∈ M_σ(ℒ) if and only if μ̃ vanishes on every closed subset of I_R(ℒ) contained in I_R(ℒ) − X.

THEOREM 4.5. Let ℒ be a separating and disjunctive lattice of subsets of X; the following two statements are equivalent: … μ ∈ M_σ(ℒ); … X is μ̃*-measurable and μ̃*(X) = …

We now establish some further properties pertaining to the induced measures μ̂ and μ̃. First we show:

THEOREM 4.7. Let ℒ be a separating and disjunctive lattice, and μ ∈ M(ℒ); then μ̂ is W(ℒ)-regular on [𝒜(W(ℒ))]′.

This theorem is a generalization of the previous one, in which we used the compactness of W(ℒ) to obtain a regular restriction of the measure. This theorem also enables us to improve Corollary 3.12, namely: if X is countably paracompact and normal, then each measure μ ∈ M_σ(ℒ) extends to a measure ν which is ℒ-regular on …

THEOREM 4.10. Suppose ℒ is a separating and disjunctive lattice. … x … if and only if {x} ∈ τW_σ(ℒ′). Therefore ℒ must be replete.
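Because the extraction above is badly damaged, it may help to restate the core construction in clean notation. The display below is a reconstruction consistent with the definitions that survive in the text; the symbols (I_R(ℒ) for the ℒ-regular 0-1 measures, 𝒜(ℒ) for the algebra generated by ℒ) are our choices, not a quotation of the original typesetting.

```latex
% Wallman space of a disjunctive lattice \mathcal{L} of subsets of X:
\[
  W(A) \;=\; \{\, \mu \in I_R(\mathcal{L}) \;:\; \mu(A) = 1 \,\},
  \qquad A \in \mathcal{A}(\mathcal{L}),
\]
\[
  W(A \cup B) = W(A) \cup W(B), \qquad
  W(A) \subseteq W(B) \iff A \subseteq B, \qquad
  W(A) = W(B) \iff A = B.
\]
% Taking W(\mathcal{L}) = \{\, W(L) : L \in \mathcal{L} \,\} as the family of
% closed sets makes I_R(\mathcal{L}) a compact T_1 space, and a T_2 space
% exactly when \mathcal{L} is normal.
```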
The proof is a simple combination of the two previous theorems. This theorem is somewhat more general than the previous corollary because we ask less of the lattice ℒ; however, we get a set B ∈ σ[W(ℒ)] rather than a zero set Z ∈ Z(τW(ℒ)).

EXAMPLES 4.15. We are going to apply Corollary 4.13 to special cases of lattices.
1. Let X be a Tychonoff space and ℒ = 𝒵; then X is 𝒵-replete if and only if for every p ∈ βX − X there exists a zero set Z of βX such that p ∈ Z ⊂ βX − X.
2. Let X be a T1 countably paracompact space and ℒ = ℱ; then X is α-realcompact if and only if for every p ∈ ωX − X there exists a zero set Z of ωX such that p ∈ Z ⊂ ωX − X, where ωX is the Wallman compactification of X.
3. Let X be a T1 space and ℒ = ℬ (ℬ normal and countably paracompact, with I_R(ℬ) = I_σ(ℬ)); then X is Borel-replete if and only if for every p ∈ I_R(ℬ) − X there exists a zero set Z of I_R(ℬ) such that p ∈ Z ⊂ I_R(ℬ) − X.

Let (C1) be the following condition: for each W(L) ⊂ I_R(ℒ) − X there exists a countable sequence {L_n} such that W(L) ⊂ ⋃ W(L_n) ⊂ I_R(ℒ) − X. … so ℒ is Lindelöf if and only if for each compact K ⊂ I_R(ℒ) − X there exists a zero set Z ∈ Z(τW(ℒ)) such that K ⊂ Z ⊂ I_R(ℒ) − X.
1. Let X be a Tychonoff space and ℒ = 𝒵; then ℒ is Lindelöf if and only if for each compact K ⊂ βX − X there exists a zero set Z ∈ Z(τW(𝒵)) such that K ⊂ Z ⊂ βX − X.
2. Let X be a 0-dimensional T1 space and ℒ = 𝒞; then ℒ is Lindelöf if and only if for each compact K ⊂ β₀X − X there exists a zero set Z ∈ Z[τW(𝒞)] such that K ⊂ Z ⊂ β₀X − X.
3. If X is a T1 space and ℒ = ℬ, then ℬ is Lindelöf if and only if for each compact K ⊂ I_R(ℬ) − X there exists Z ∈ Z[τW(ℬ)] such that K ⊂ Z ⊂ I_R(ℬ) − X.

Finally we give some further applications to measure-replete lattices.
2. If X is T1 and ℒ = 𝒮, then M_σ(𝒮) = M(𝒮) and M_σ′(𝒮) = M′(𝒮) if and only if μ̃*(F) = 0 for every F ⊂ I_R(ℬ) − X, F closed in I_R(ℬ). If X is a T1 space and ℒ = ℱ, then M_σ(ℱ) = M(ℱ) if and only if μ̃′(F) = 0 for all F ⊂ ωX − X, F closed in ωX.
5. If X is T1 and ℒ = 𝒵, then 𝒵 is measure-compact if for each F ⊂ βX − X, F closed in βX, there exists a Baire set B of βX such that F ⊂ B ⊂ βX − X.

5. THE SPACE I_σ(ℒ)

DEFINITION 5.1. Let ℒ be a disjunctive lattice of subsets of X. …

PROOF. The proof of this theorem is known. Let (C2) be the following condition: for each μ ∈ I(ℒ) there exists at most one ν …

THEOREM 5.5. Let ℒ be a separating and disjunctive lattice of subsets of X. Then (I_σ(ℒ), τW_σ(ℒ)) is T2 if and only if (C2) holds.

PROOF. Restrict λ to 𝒜(W(ℒ)). The restriction is unique because W(ℒ) separates τW(ℒ), and since … it projects onto I_σ(ℒ) and is denoted by ν. … ν ∈ M(W_σ(ℒ)) and has a unique extension to M(τW_σ(ℒ)), and of course ν is that extension.

THEOREM 5.11. Suppose ℒ is a separating, disjunctive and normal lattice of subsets of X; then the following statements are equivalent: 1. … Thus F ⊂ W(L′), but since W(L) is compact, F ⊂ W(L′) ∩ W(L₁′), where L, L₁ ∈ ℒ …

THEOREM … Let ℒ₁ and ℒ₂ be two lattices of subsets of X such that ℒ₁ ⊂ ℒ₂ and ℒ₁ separates ℒ₂. If ν ∈ M(ℒ₂), then ν is regular on ℒ₂′ … PROOF. Let ν ∈ M(ℒ₂); then, since ℒ₁ separates ℒ₂, … Since ℒ₁ ⊂ ℒ₂, σ(ℒ₁) ⊂ σ(ℒ₂) …
3. If X is a 0-dimensional T1 space and ℒ = 𝒞, then M_σ(𝒞) = M(𝒞) … if and only if μ̃*(F) = 0 for F ⊂ β₀X − X, F closed in β₀X.
Merging of relaxations and step-like increase of the accompanying supercooled liquid region in metallic glasses via ultrafast nanocalorimetry

Glassy materials under external stimuli usually display multiple and complex relaxations. The relaxations and the evolution paths of glassy materials significantly affect their properties and are closely related to many key issues in glass physics, such as the glass transition and thermoplastic forming. However, until now, the relaxation dynamics in the presence of external stimuli and the microscopic atomic motion of glassy materials have been unclear due to the lack of structural information. By combining Flash and conventional differential scanning calorimetry (DSC), we applied heating rates spanning six orders of magnitude and investigated the relaxation dynamics of three typical metallic glasses. We discovered the merging of distinct relaxation events with increasing rate of heating. Most interestingly, the experiments revealed new behaviors with step-like increases in the supercooled liquid region and the excess heat capacity during the merging of multiple relaxations. A comprehensive scheme is proposed for the evolution of the thermal relaxation spectrum, the heterogeneity of the corresponding atomic motion and the potential energy landscape with rate of heating. These experimental results shed light on the mechanism of atomic rearrangement during heating and provide a new approach to regulate the physical properties of amorphous materials by controlling their intrinsic relaxation dynamics.

Glasses are non-equilibrium materials that develop by cooling of a supercooled liquid, where the rapidly increasing viscosity results in a kinetic arrest of long-range atomic rearrangements. During heating from the glass state, different relaxations are thermally activated and display different relaxation spectra. Here, we used ultrafast nanocalorimetry to examine the evolution of multiple relaxations and discovered the merging of the relaxation modes with increasing heating rate, resulting in step-like increases in both the supercooled liquid region and the excess heat capacity. Our findings provide new insights into the evolution of the relaxation spectrum and the associated heterogeneous atomic motion.

Introduction

The relaxation phenomena of metallic glasses (MGs) have attracted a great deal of attention from researchers who seek to unveil the nature of the glass transition and other key issues in glass physics, such as physical aging, the memory effect and thermoplastic forming 1-5. Compared to their crystalline counterparts, MGs display diverse relaxation modes over wide ranges of temperature and timescale, such as the primary α relaxation, the secondary β (Johari-Goldstein) relaxation and the faster relaxation modes called β′ or γ relaxations (shown in the upper part of Fig. 1a) 6-11. Different relaxation modes gradually decouple during vitrification. At sufficiently high temperature, there is only the α relaxation; when the liquid quickly quenches into the supercooled liquid region, the relaxation mode splits into α and β relaxations. In the glass state below the glass transition temperature, the α relaxation disappears, and the relaxation mode further decouples into β and β′ relaxations 5,12. However, few studies have focused on the evolution of the relaxation modes during the glass-to-liquid transition (GLT) under fast heating that bypasses crystallization 13-16.
From the viewpoint of practical applications, the GLT process is one of the critical issues in the net-shape thermoplastic forming processes of MGs, such as injection molding, blow molding, and microreplication 14,17,18. Meanwhile, researchers have recently reported that the macroscopic GLT is induced by the activation and percolation of numerous flow units that are closely correlated with local β and β′ relaxations 9,15. Thus, understanding how atomic rearrangements govern these relaxation processes, and the detailed evolution paths of the different relaxations, is of great importance for fundamental research and practical applications of MGs. For thermal activation processes in MGs, such as crystallization and the different relaxations, including the primary α relaxation, the relationship between the characteristic temperature (crystallization temperature or glass transition temperature) and the heating rate roughly follows the Kissinger equation or the Arrhenius equation 9,10,19-22. Thus, different thermal activation processes can be separated at high heating rates. For example, the hidden glass transition (α relaxation) in marginal Al-based MGs has been separated from primary crystallization, and the weak β relaxation in fragile MGs is separated from the α relaxation when the heating rate reaches a critical value 19,20. Furthermore, the activation of the different relaxations during the GLT with increasing temperature in MGs is usually accompanied by obvious thermal signals, such as endothermic peaks before the glass transition, which usually correspond to the appearance of β and β′ relaxations and the unfreezing of the corresponding local regions 9,15,16,20. Considering that the activation energies of the β and β′ relaxations are much smaller than that of the α relaxation, a coupling of the relaxation modes should appear with increasing heating rate. Moreover, recent studies have established that the secondary β relaxations in MGs are governed by string-like clusters of particles with cooperative motion 23,24 and that the fast β′ or γ relaxations result from cage-breaking motion driven by internal stress in nonequilibrium states (shown in the lower part of Fig. 1a) 9,10,25,26. Therefore, the above possible coupling of relaxation modes with increasing rate of heating may imply the coupling of various atomic rearrangement motions, which provides critical information for understanding hidden physical processes during the GLT. However, due to the limited experimental capabilities of calorimetry instruments, until now there has been no report of this kind of mode coupling for different relaxation events during heating. Recently, an advanced commercial chip-based instrument for fast differential scanning calorimetry (DSC) (Mettler Toledo, Flash DSC) has enabled thermoanalytical measurements at heating rates spanning six orders of magnitude, which provides a good opportunity to systematically study relaxation events during heating and the related relaxation dynamics in amorphous materials 27-30. In this paper, we systematically investigate the relaxation dynamics of three typical MG systems with different relaxation patterns across heating rates spanning six orders of magnitude by combining advanced Flash and conventional DSC instruments. First, we find the step-like emergence of three distinct relaxation events as the temperature and heating rate increase in a LaCe-based MG system.
Moreover, we observe a large increase in the supercooled liquid region during the merging of multiple relaxations. Additionally, the corresponding excess heat capacity exhibits a similar step-like increase with the merging of multiple relaxations. Similar relaxation merging behaviors with heating rate take place in the other two MG systems. These results reveal the novel activation path of different relaxation events during the GLT with heating rate and provide a new approach to tune the relaxation dynamics by ultrafast heating so as to increase the supercooled liquid region.

Materials

Ingots of La₅₅Ce₁₅Ni₂₀Al₁₀, La₆₀Ni₂₅Al₁₅ and Al₉₀Ca₁₀ were prepared by arc melting the elemental components several times in a Ti-gettered argon atmosphere to ensure homogeneity. Then, ribbon samples were prepared by single-roller melt spinning on a copper wheel with a tangential speed of 55 m/s. The ribbons had a cross section of approximately 2 mm × 20 μm and were several meters in length. The cooling rate for preparing the ribbon-like samples in this study was estimated as approximately 2.5 × 10⁶ K/s. The glassy nature of all ribbon-like samples was ascertained by X-ray diffraction (D8 Discover diffractometer with Cu Kα radiation, Bruker, United States) and DSC (Diamond DSC, PerkinElmer Inc., United States).

Calorimetry measurements

The samples for all the DSC tests, including conventional DSC and Flash DSC tests, were cut from the above as-cast ribbons in sizes suitable for the different DSC instruments. A high-rate differential scanning calorimeter with chip sensors (Flash DSC 2, with maximum heating and cooling rates of 4 × 10⁴ and 1 × 10⁴ K/s, respectively; Mettler Toledo, Switzerland) was used to investigate the thermal behaviors of the various MG systems at heating rates from 5 K/s to 10,000 K/s. The measurement temperature range of the Flash DSC 2 was between −90 °C and 1000 °C. The as-cast ribbon-like samples were cut into tiny rectangular pieces of approximately 150 μm × 150 μm × 20 μm (length × width × thickness) and then loaded onto the Flash DSC chip. As a reference, the heat flow curve at a low heating rate of 0.33 K/s was also measured by Diamond DSC in this study. For conventional DSC tests, the as-cast ribbon-like samples were cut into small pieces by scissors; each piece was approximately 5 mm × 2 mm × 20 μm (length × width × thickness), suitable to be loaded into an aluminum pan for conventional DSC measurements. The masses of the samples in conventional DSC measurements were weighed. For the tiny Flash DSC samples, the mass was estimated from the corresponding volume (measured by optical microscope) and density (estimated by the Archimedean technique).

Heat capacity measurement based on the Flash DSC platform and confirmation of excess heat capacity at the glass transition temperature

Based on the Flash DSC platform, the heat capacity c_p was measured 31. The detailed temperature-time program for the step-response analysis included temperature jumps of 2 K (far less than the thermal fluctuations) at a heating rate φ_h and isotherms of duration Δt. The value of Δt was set to guarantee a frequency of 10 Hz; thus, for different applied heating rates, the value of Δt was adjusted.
The complex heat capacity, c_p*(ω) = c_p′(ω) − i·c_p″(ω), was calculated from the Fourier transforms of the heat flow rate HF(t) and the instantaneous heating rate φ_h(t) as c_p*(ω) = HF(ω)/φ_h(ω), where c_p* is the complex heat capacity, c_p′ and c_p″ are its real and imaginary parts, ω is the frequency, HF is the heat flow and φ_h is the heating rate. From the obtained complex heat capacity, the reversible heat capacity c_p was calculated by c_p(ω) = (c_p′(ω)² + c_p″(ω)²)^0.5. For each applied heating rate φ_h, the temperature of the onset of the glass transition was obtained from the corresponding heat flow curve. Then, the excess heat capacity Δc_p at the glass transition temperature was calculated by subtracting the linear fits of the heat capacities of the supercooled liquid state and the glass state.

Observation of multiple relaxation events via the combined DSC platform in LaCe-based MG

Two typical heat flow curves, by conventional DSC with a heating rate of 0.33 K/s and by Flash DSC with a heating rate of 100 K/s, for the LaCe-based MG are exhibited in Fig. 1b, c, respectively. The conventional heat flow curve displays only the glass transition signal corresponding to the primary α relaxation. In contrast, the Flash DSC heat flow curve exhibits two distinct endothermic peaks before the glass transition. For the LaCe-based MG, the primary α relaxation, the secondary β relaxation and the fast β′ relaxation have been reported based on results of dynamic mechanical analysis (DMA) 9. However, the conventional calorimetry method, with limited heating rates of several K/s, cannot observe the activation of the secondary β relaxation and the fast β′ relaxation, except when the thermal signal of the secondary β relaxation is induced by a thermal annealing treatment 32,33. By comparison, in this study, based on ultrafast Flash DSC, the secondary β relaxation and the fast β′ relaxation are observed for the first time by a calorimetric method, as shown in Fig. 1c. Previous studies have reported that the secondary β relaxation and the fast β′ relaxation commonly observed in rare earth-based MGs are separately correlated with string-like motion and more local cage-breaking motion, which can be attributed to the presence of highly 'mobile' atomic pairs, such as the rare earth element-Ni pair 9,10,16. Therefore, from this perspective, ultrafast nanocalorimetry provides a new and powerful tool to investigate dynamic behaviors in glasses, such as the GLT and various relaxation behaviors during heating, and the accompanying local atomic rearrangement motions during these thermally activated processes in glassy materials.

To investigate the evolution of the various relaxation events with heating rate in the LaCe-based MG, a series of heating rates spanning six orders of magnitude was applied to measure the corresponding heat flow curves. All heat flow curves were normalized by sample mass and are shown in Fig. 2a, b. From Fig. 2a, b, when the heating rate φ_h increased to approximately 10 K/s and 50 K/s, the thermal signals of the endothermic peaks for the secondary β relaxation and the fast β′ relaxation appeared, respectively. Considering that the thermal signals for the secondary β relaxation and the fast β′ relaxation correspond to the activation of local string-like and cage-breaking motions, as shown in Fig. 1a, the thermal signal intensity was relatively weak, and it was difficult to detect these signals at low heating rates by conventional DSC instruments 20.
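As a concrete illustration of the step-response evaluation described in the Methods, the sketch below computes the single-frequency Fourier components of the heat-flow rate and the instantaneous heating rate and takes their ratio. It is a minimal sketch under stated assumptions: the function name and array inputs are ours, the data are assumed to be evenly sampled and baseline-corrected, and the calibration and windowing of a real Flash DSC analysis are omitted.

```python
import numpy as np

def complex_heat_capacity(t, heat_flow, heating_rate, freq=10.0):
    """Estimate c_p*(omega) = HF(omega) / phi_h(omega) at one frequency.

    t            : time array (s), evenly sampled
    heat_flow    : HF(t), baseline-corrected heat flow (W)
    heating_rate : phi_h(t), instantaneous heating rate (K/s)
    freq         : evaluation frequency (Hz); 10 Hz in the protocol above
    """
    omega = 2.0 * np.pi * freq
    kernel = np.exp(-1j * omega * t)            # Fourier kernel at omega
    hf_w = np.trapz(heat_flow * kernel, t)      # HF(omega)
    phi_w = np.trapz(heating_rate * kernel, t)  # phi_h(omega)
    cp_star = hf_w / phi_w                      # c_p' - i*c_p'' (J/K, per sample)
    cp_rev = abs(cp_star)                       # (c_p'^2 + c_p''^2)**0.5
    return cp_star, cp_rev
```

Dividing cp_rev by the sample mass (estimated optically for the tiny Flash DSC specimens, as described above) then gives the specific reversible heat capacity from which Δc_p is extracted.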
When the heating rates were larger than the corresponding critical heating rates for the appearance of the β and β′ relaxations, by contrast, the thermal signals corresponding to the activation of the β and β′ relaxations were strong enough to be observed by Flash DSC; they appear as two obvious endothermic peaks before the glass transition in Fig. 1c. From the perspective of the recovery enthalpy during heating: when the heating rate is very small, the decrease in the recovery enthalpy induced by structural relaxation during heating is very large, and the thermal signals from the β and β′ relaxations are relatively weak. When the applied heating rates become large enough, the decrease in the recovery enthalpy is minimized, and thus the thermal signals from the β and β′ relaxations become strong enough to be detected 16,20. Moreover, it is apparent that the thermal signals for the primary α relaxation (glass transition), the secondary β relaxation and the fast β′ relaxation evolve with increasing rate of heating. In particular, it is intriguing to observe in Fig. 2b that the secondary β relaxation started to merge with the primary α relaxation when the heating rate increased to a critical value φ_β→α of approximately 2400 K/s. The merging of the initial α relaxation (marked as α_I) and the secondary β relaxation led to the second stage of the α relaxation (marked as α_II). Furthermore, when the heating rate continued to increase to another critical value φ_β′→α of approximately 15,000 K/s, the fast β′ relaxation merged with the α_II relaxation, leading to the third stage of the α relaxation (marked as α_III). Therefore, the above results directly show that the thermal relaxation spectra of MGs in different heating rate ranges are not the same and that different relaxation modes are activated in one MG system within different heating rate ranges.

Activation energies for various relaxation events

To further illustrate the evolution paths of the different relaxation events for the LaCe-based MG, the detailed evolutions of the onset temperature T_onset and the end temperature T_end for the different relaxation events with heating rate were obtained from Fig. 2a, b and plotted in Fig. 2c. The α relaxation process within the whole experimental heating rate range clearly displayed three stages: stage I (heating rates below 2400 K/s), stage II (heating rates between 2400 K/s and 15,000 K/s) and stage III (heating rates above 15,000 K/s). When the heating rate increased to 2400 K/s, the end temperature of the secondary β relaxation reached the onset temperature of the α relaxation, which implied the merging of the secondary β relaxation and the α relaxation (shown in the upper-right plot A of Fig. 2c). When the heating rate increased to 15,000 K/s, the end temperature of the fast β′ relaxation reached the onset temperature of the α relaxation, which implied the merging of the fast β′ relaxation and the α relaxation (shown in the lower-right plot B of Fig. 2c). Notably, the shift of all three relaxations to higher temperatures with increasing heating rate in Fig. 2a and b directly implied that these relaxations were thermally activated processes.
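For reference, such thermally activated shifts are conventionally summarized by the Kissinger and Arrhenius relations. The displays below are the standard textbook forms (T_on: onset temperature, φ_h: heating rate, E_a: effective activation energy, R: gas constant), not equations quoted from this paper:

```latex
\[
  \ln\!\left(\frac{\varphi_h}{T_{\mathrm{on}}^{2}}\right)
  \;=\; -\frac{E_a}{R\,T_{\mathrm{on}}} + \text{const}
  \qquad \text{(Kissinger)},
\]
\[
  \ln \varphi_h \;=\; -\frac{E_a}{R\,T_{\mathrm{on}}} + \text{const}
  \qquad \text{(Arrhenius form)}.
\]
```

Extracting E_a then reduces to a linear regression of ln(φ_h/T_on²) against 1/T_on. A minimal sketch in Python, with illustrative (not measured) onsets:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def kissinger_ea(heating_rates, onset_temps):
    """E_a (kJ/mol) from a linear fit of ln(phi/T^2) versus 1/T."""
    T = np.asarray(onset_temps, dtype=float)      # K
    phi = np.asarray(heating_rates, dtype=float)  # K/s
    slope, _ = np.polyfit(1.0 / T, np.log(phi / T**2), 1)
    return -slope * R / 1e3

# Hypothetical onsets for a thermally activated relaxation:
print(kissinger_ea([10, 100, 1000, 10000], [420, 445, 472, 502]))  # ~140 kJ/mol
```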
Although the temperature-heating rate relationship has been represented differently in the literature 9,10,20-22, for example by the Kissinger equation or the Arrhenius equation, the values of the effective activation energy obtained by fitting the Kissinger and Arrhenius equations within a small temperature range are very close 20,34. Thus, in this study, the effective activation energy for the relaxations was obtained by simply fitting the onset values of the relaxation events in Fig. 2c with the Kissinger equation, as shown in Fig. 2d. The values of the effective activation energies for the α, β and β′ relaxations were 129.3 ± 2.6, 75.6 ± 1.5 and 41.6 ± 1.2 kJ/mol (1.34 ± 0.03, 0.78 ± 0.02 and 0.43 ± 0.01 eV), respectively, which were consistent with previous results obtained by the DMA method for LaCe-based MG 9. Moreover, two step-like decreases in the onset temperature of the α relaxation, of 10 K and 19 K, were observed when the β and α relaxations merged and when the β′ and α relaxations merged (shown in the right plots of Fig. 2c). Meanwhile, the corresponding activation energies for the α relaxation showed two step-like decreases, from 129.3 ± 2.6 kJ/mol (1.34 ± 0.03 eV) to 106.8 ± 2 kJ/mol (1.11 ± 0.02 eV) and from 106.8 ± 2 kJ/mol (1.11 ± 0.02 eV) to 84.2 ± 3.2 kJ/mol (0.87 ± 0.03 eV). Thus, the activation energies of the three stages of the α relaxation differed. These results indicate that the α relaxation is more easily activated at a large heating rate than at a slow heating rate, considering the decrease in the α relaxation activation energy with heating rate.

Supercooled liquid region evolution with heating rates

In general, for glassy materials, the glass transition behavior during heating (the GLT process) can be characterized by a single endothermic reaction, where the specific heat, c_p, increases abruptly to a maximum value and then remains constant or slightly decreases up to the crystallization onset temperature, T_x. The temperature range between the glass transition and crystallization is called the supercooled liquid region (ΔT = T_x − T_g), which is closely related to glass-forming ability and thermoplastic deformation ability. Moreover, considering the different evolution paths of the glass transition and crystallization with temperature during heating, the supercooled liquid region should change continuously with heating rate. Here, from Fig. 2c, the extent of the supercooled liquid region ΔT for the LaCe-based MG is plotted against the heating rate in Fig. 3a. The supercooled liquid region displays three stages, I, II and III, which are consistent with the three stages of the α relaxation in Fig. 2c. When the heating rate was below 2400 K/s, ΔT slowly increased with increasing heating rate; when the heating rate increased to 2400 K/s, ΔT exhibited an obvious step-like increase from 20 K to 70 K. Similarly, when the heating rate was between 2400 K/s and 15,000 K/s, ΔT slowly increased with the heating rate; when the heating rate increased to 15,000 K/s, ΔT developed a second step-like increase from 70 K to 110 K. The inset graphs in Fig. 3a give the thermal relaxation spectra corresponding to heating rates of 800 and 3000 K/s. Most interestingly, these two step-like increases in ΔT were accompanied by the merging of the secondary β relaxation with the α relaxation and the merging of the fast β′ relaxation with the α relaxation, respectively.
For MGs, the larger the supercooled liquid region is, the greater the resistance to crystallization, which is one of the prerequisites for thermoplastic forming applications 17.

Excess heat capacity evolution with heating rates

From the perspective of statistical physics, the heat capacity of matter in an equilibrium state reflects the possible motions of the basic atoms in the system 35. Compared to the absolute heat capacity of one state, the excess heat capacity Δc_p of supercooled liquids relative to their glassy state contains contributions from the influence of changes in configurational states on the thermodynamics of atomic rearrangement processes, which also give rise to secondary relaxations below the glass transition 36-39. Thus, the excess heat capacity between the glass and the liquid can reveal the kinds of atomic rearrangement motions that are involved in dynamic processes. Here, the heat capacities corresponding to different heating rates for the LaCe-based MG were measured (discussed in detail in the Methods section), and one heat capacity curve for 20,000 K/s is shown in the inset of Fig. 3b. The excess heat capacity between the glassy state and the supercooled liquid state at different heating rates is plotted in Fig. 3b. Similar to the step-like increase in the supercooled liquid region, the excess heat capacity displayed two step-like increases with the merging of multiple relaxations. For stage I, before the β relaxation merged into the α relaxation, the value of Δc_p was almost constant, 12.4 ± 0.3 J mol⁻¹ K⁻¹. For MGs with only the thermal α relaxation, previous studies reported that the values of the excess heat capacity were almost constant, 13.7 ± 2.1 J mol⁻¹ K⁻¹ 38,39, which is consistent with the above result for stage I in this study. After the merging of the β and β′ relaxations with the α relaxation in stages II and III, the values of Δc_p jumped to 15 ± 0.1 J mol⁻¹ K⁻¹ and 16.5 ± 0.1 J mol⁻¹ K⁻¹, respectively. This behavior can be used to understand the possible atomic rearrangement motions during the GLT. When the heating rate was in stage I, only the atomic rearrangement motions related to the α relaxation contributed to the GLT. When the heating rate increased into stage II, the rearrangement motions related to both the α and β relaxations contributed to the GLT. When the heating rate further increased into stage III, all the rearrangement motions related to the α, β and β′ relaxations participated in the GLT.

Two additional typical MG systems

For MGs, the relaxation patterns are closely related to their chemical compositions and structural heterogeneities. MG samples with different chemical compositions and microscopic structures display different relaxation behaviors 7-11,21,32,33. Specifically, for the LaCe-based MG in this study, three different relaxation events developed during heating 9. Thus, to verify whether the enlargements of the supercooled liquid region and the excess heat capacity accompanying multiple relaxation merging via ultrafast heating also occur in other systems, the La₆₀Ni₂₅Al₁₅ and Al₉₀Ca₁₀ MGs were examined. The thermal signal of the glass transition can be observed by conventional DSC in Fig. 4a but not for the Al₉₀Ca₁₀ MG (Fig. 5a). In contrast, the thermal signal corresponding to the α relaxation for the Al₉₀Ca₁₀ MG can be observed by Flash DSC (Fig. 5b, c). For marginal MGs, such as Al-based MGs, the weak glass transition signal is prone to being overlapped by strong primary crystallization 40. Thus, conventional DSC with a heating rate of only several K/s is not a suitable calorimetric method to detect the glass transition of Al-based MGs.
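The Δc_p values above come from the baseline-subtraction procedure described in the Methods: linear fits to c_p(T) in the glassy and supercooled-liquid windows, evaluated at the glass transition. A minimal sketch of this step (the window bounds and variable names are our assumptions; a real analysis would also propagate the fit uncertainties):

```python
import numpy as np

def excess_heat_capacity(T, cp, glass_window, liquid_window, Tg):
    """Delta c_p at Tg from linear glass/liquid baselines of cp(T)."""
    g = (T >= glass_window[0]) & (T <= glass_window[1])
    l = (T >= liquid_window[0]) & (T <= liquid_window[1])
    glass_fit = np.polyfit(T[g], cp[g], 1)    # glassy baseline
    liquid_fit = np.polyfit(T[l], cp[l], 1)   # supercooled-liquid baseline
    return np.polyval(liquid_fit, Tg) - np.polyval(glass_fit, Tg)
```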
Conventional DSC is thus of limited use here; however, considering that the glass transition and crystallization are significantly different kinetic processes, they follow different evolution paths with heating rate. At higher heating rates, the thermal signals corresponding to the glass transition and crystallization evolve along two different paths, and the glass transition and crystallization separate when the heating rate increases to a critical value 19. Thus, the glass transition of the marginal Al₉₀Ca₁₀ MG can be detected by Flash DSC at a large heating rate. Moreover, compared to the LaCe-based MG with three relaxation events in Fig. 1c, only two relaxation events, the secondary β relaxation and the α relaxation, appear in Figs. 4b, c and 5b, c. For the Al-based and La-based MGs, with increasing rate of heating, the evolution rate of the β relaxation is much faster than that of the α relaxation, and finally the β relaxation and the α relaxation merge when the heating rate increases to a critical value. In addition, based on the heat flow curves in Figs. 4b, c and 5b, c, the corresponding activation energies of the secondary β relaxation and the α relaxation for the Al- and La-based MGs were calculated by fitting the Kissinger equation; the detailed values are shown in Fig. 6a and d. Similar to the LaCe-based MG, the activation energy of the α relaxation decreases after the merging of the β relaxation and the α relaxation. Moreover, the supercooled liquid regions for the Al- and La-based MGs underwent large increases from 34 K to 73 K and from 104 K to 221 K, respectively. These results indicate that the supercooled liquid region can be enlarged by multiple relaxation merging via ultrafast heating. In addition, the excess heat capacity exhibited a similar step-like increase with the merging of the secondary β relaxation and the α relaxation for the La- and Al-based MGs in Fig. 6c, f. The excess heat capacity Δc_p increased from 12.6 ± 0.2 J mol⁻¹ K⁻¹ to 15.2 ± 0.1 J mol⁻¹ K⁻¹ and from 12.7 ± 0.2 J mol⁻¹ K⁻¹ to 15.1 ± 0.1 J mol⁻¹ K⁻¹ for the La-based and Al-based MGs, respectively. Compared to the LaCe-based MG in Fig. 3b, it is interesting to note that the increases in Δc_p induced by the merging of the β relaxation and the α relaxation were very similar for all three MG systems, approximately 2.5 ± 0.1 J mol⁻¹ K⁻¹. This result indicates that the similar magnitude of the increase in Δc_p results from the contribution of the atomic rearrangement motions corresponding to the β relaxation. To illustrate the contribution of the fast relaxation to the increase in the excess heat capacity during the α relaxation, a series of schemes for the heat capacity corresponding to different heating rate ranges is displayed in Fig. 3c. These schemes clearly show a common physical mechanism for the merging of the β relaxation and the α relaxation during heating in these MG systems, which implies that the different atomic rearrangement motions become more uniform with the merging of the different relaxations.

Discussion

One of the major questions in glass physics is whether vitrification from a viscous liquid into a nonequilibrium glass, and devitrification from a glass to a liquid by heating, are related exclusively to the primary structural relaxation process, the α relaxation, which is attributed to the cooperative motion of several structural units, or whether other atomic rearrangement motions play a role.
The conventional view, based on experimental evidence for various glassy materials, indicates that the cooling rate dependence of the glass transition temperature exhibits the same behavior as the temperature dependence of the viscosity or the structural α relaxation time 1-3. However, recent experiments on polymer glasses under geometrical confinement and on MGs directly indicate that the vitrification kinetics and the α relaxation are decoupled, and that other atomic motion mechanisms associated with fast relaxations play a key role 31. In this work, we used Flash DSC with a heating rate range of six orders of magnitude and discovered the step-like enlargement of the supercooled liquid region and of the excess heat capacity induced by the merging of multiple relaxations in three MGs. In particular, the increase in the excess heat capacity between the supercooled liquid and the glass during the GLT comes from the contribution of the atomic rearrangement motions corresponding to the fast relaxations as the relaxation events merge. Therefore, these results indicate that atomic rearrangement motions related to fast relaxation modes, in addition to the α relaxation, may be engaged in the GLT at fast heating rates. Moreover, for glassy materials, the endothermic signals before the glass transition are actually the calorimetric features of the unfreezing of atomic rearrangement in local regions 16. Here, the observed endothermic thermal signals for the different relaxation events in Fig. 2a and b imply that the atomic rearrangement motions corresponding to the activation modes of multiple local structural heterogeneities in MGs are heterogeneous 9,10,16,25,26. Therefore, from the viewpoint of atomic rearrangement, the merging of multiple relaxations implies the merging of different modes of atomic motion, and the atomic rearrangement during the GLT evolves from spatially heterogeneous modes to more homogeneous modes with increasing rate of heating.

In addition, regarding the merging of relaxations with heating rate, it is interesting to ask whether these relaxations interact. If the relaxations undergo noninteracting mixing, then they may separate again when the heating rate continues to increase to a critical value. Unfortunately, according to the evolution of the onset and end points of the different relaxations in Figs. 2d, 4d and 5d, there are no new merging or separation behaviors after the merging of the β′, β and α relaxations for these three MG systems within the experimental heating rate limit of the Flash DSC instrument. New calorimetric technology with a faster heating rate is needed to investigate this question. However, considering that the transition of atomic rearrangement motion with the merging of different relaxations is usually irreversible, the merging of the relaxations may be due to interacting mixing. This point will be the next focus of our research. From the perspective of the potential energy landscape 5,15, when the volume of the glass sample is constant, the potential energy landscape should be fixed. The energy landscape contains multiple metabasins separated by high barriers.
For the relaxation events in glasses, the β relaxation is considered to involve stochastic and reversibly activated hopping events across "subbasins" confined within an inherent "metabasin" (intrabasin hopping), and the α relaxation is considered to involve irreversible hopping events extending across different metabasins in the landscape (as shown in Fig. 1a) 5. Considering the fractal nature of the potential energy landscape in glasses, the hopping barriers corresponding to the α relaxation should be multiple rather than single, as indicated by the trimodal scheme at the top of Fig. 7. However, it is still unclear how a glass sample selects a corresponding energy landscape under different external stimuli. In this study, based on the above experimental results for the activation energy and the excess heat capacity of the α relaxation within different heating rate ranges, three different potential energy landscapes corresponding to the different heating rate ranges are displayed in Fig. 7. In the low heating rate range (stage I), the potential energy landscape displays a rough pattern with a three-level metabasin structure, as shown in Fig. 1a. In the medium heating rate range (stage II), the potential energy landscape displays a less rough pattern with a two-level metabasin structure including the β′ relaxation and the α relaxation. In the high heating rate range (stage III), the potential energy landscape displays a smooth pattern with only a one-level structure including the α relaxation. Thus, the above evolution of the relaxation pattern with heating rate reflects the change in the sampling of the potential energy landscape by MGs. These results on new relaxation behavior under ultrafast heating provide a new possibility to tune the relaxation pattern and enlarge the supercooled liquid region by selectively designing the applied heating rates: the activated relaxation pattern and the corresponding activated energy landscape are selected according to the range of heating rate. To clearly display the evolution of the thermal relaxation spectrum, the atomic mobility heterogeneity and the activated energy landscape within different heating rate ranges, a simple scheme is proposed in Fig. 7. Here, the atomic mobility heterogeneity refers to the dispersion of the atomic mobility relative to the average atomic mobility at a given heating rate, which is indicated by different colors in Fig. 7: strong heterogeneity means a larger dispersion of the atomic mobility relative to the average, and weak heterogeneity corresponds to a smaller dispersion. Thus, with the experimental strategy of probing the evolution of the relaxation dynamics over a larger heating rate range, new details about the potential energy landscape of a glass sample can be obtained. These details can be extended to the study of the effects of chemical composition and external stimuli on the evolution of the energy landscape.

Fig. 7. Scheme of the evolution of the thermal relaxation spectrum, atomic dynamic heterogeneity and activated energy landscape at different heating rates.
Long-Term Stability of New Co-Amorphous Drug Binary Systems: Study of Glass Transitions as a Function of Composition and Shelf Time

The amorphous state is of particular interest in the pharmaceutical industry due to the higher solubility that amorphous active pharmaceutical ingredients show compared to their respective crystalline forms. Due to their thermodynamic instability, drugs in the amorphous state tend to recrystallize; in order to avoid crystallization, it has been a common strategy to add a second component to hinder the crystalline state and form a thermally stable co-amorphous system, that is to say, an amorphous binary system which retains its amorphous structure. The second component can be a small-molecule excipient (such as a sugar or an amino acid) or a second drug, with the advantage that a second active pharmaceutical ingredient could be used for complementary or combined therapeutic purposes. In most cases, the compositions studied are limited to 1:1, 2:1 and 1:2 molar ratios, leaving a gap of information about phase transitions and stability of the amorphous state over a wider range of compositions. In the present work, a study of novel co-amorphous formulations is presented in which the selection of the active pharmaceutical ingredients was made according to the therapeutic effect. Resistance against crystallization and the behavior of the glass transition temperature (Tg) were studied through calorimetric measurements as a function of composition and shelf time. It was found that binary formulations with Tg values higher than those of the pure components presented long-term thermal stability. In addition, significant increments of the Tg values, of as much as 15 °C, were detected as a result of glass relaxation at room temperature during storage time; this behavior of the glass transition has not been previously reported for co-amorphous drugs. Based on these results, it can be concluded that monitoring the behavior of Tg and the relaxation processes during the first weeks of storage leads to a more objective evaluation of the thermomechanical stability of an amorphous formulation.

Introduction

It is well known that the number of active pharmaceutical ingredients (APIs) with high therapeutic potential but low water solubility is constantly growing due to sustained drug discovery efforts. Currently, around 40% of commercial drugs are sparingly soluble in water. The disadvantage of formulating drugs with poorly soluble active ingredients is that they can lead to low bioavailability [1]. Transformation of a material from the crystalline state into its amorphous state is a strategy applied to increase the solubility of pharmaceutical products; glassy or amorphous materials are therefore of great interest in the pharmaceutical field, particularly because their preparation by melt-quenching has the advantage of shorter processing times compared to other strategies [2][3][4]. The structure of an amorphous material is characterized by a long-range disordered arrangement of its molecules, leading to a higher chemical potential compared to the more stable crystalline form. This higher chemical potential is the driving force for a higher dissolution rate and saturation concentration when dissolved in water [5,6], but it is also the driving force for the crystallization process. Due to their thermodynamic instability, active pharmaceutical ingredients in the amorphous state tend to recrystallize [7][8][9].
In order to avoid crystallization, it has been a common strategy to add a second component to hinder the crystalline state and form a thermally stable co-amorphous system, that is to say, an amorphous binary system which retains its amorphous structure. The second component can be a small-molecule excipient (such as a sugar or an amino acid) or a second drug, with the advantage that a second component could be used for complementary or combined therapeutic purposes [2]. A review of the current literature shows that efforts along this line are modest, with around twenty drug-drug binary systems reported to be stable in the amorphous state and studied thermally and structurally [3,10]. If we compare this number with the large number of poorly soluble drugs [11], there is a great area of opportunity to find new stable amorphous binary systems with increased solubility. In addition, most studies on co-amorphous drugs report limited compositions of the binary systems, at most the 1:1, 1:2 and 2:1 ratios, leaving a gap of information that needs to be filled by exploring a wider composition range to fully characterize the effects of the formulation on the stability and phase transitions of the mixtures, in both the crystalline and amorphous states. The reason most studies of binary systems are performed at the 1:1 molar ratio is that this composition has been considered the most stable for the amorphous mixture; in some cases, it has been reported that an excess of either API in the sample may destabilize the system enough to return it to its crystalline state. The hypothesis for a preferable 1:1 formulation is based on the idea that heterodimers are formed through intermolecular interactions, like hydrogen bonding or ion pairing, leading to a structure that is unable to find a new crystalline order [12,13]. The lack of systematic studies reporting the long-term thermal stability of binary systems against crystallization over a wide range of compositions has motivated us to establish a strategy for selecting the components of a binary system based on the construction of phase diagrams and the behavior of glass transition temperatures as a function of composition. With this knowledge, it is possible to assess the feasibility of stable formulations of a binary system of active ingredients at a ratio relevant to complementary therapeutic doses. The aim of this study was to prepare and evaluate the thermal properties of new co-amorphous binary systems whose components (nifedipine, NIF; nimesulide, NIM; carvedilol, CAR; and cimetidine, CIM) were selected considering the potential pharmaceutical interest in formulations of these APIs for combined or complementary therapy. Cimetidine was chosen because it is one of the most prescribed drugs for the treatment of gastric ulcers and reflux diseases and has lately been reported as a possible anticancer agent [14]. In order to take advantage of two drugs in a co-amorphous system, and since a considerably high number of patients are under cimetidine treatment, there is an obvious need to study this API in combination with other drugs. In the present work, cimetidine was studied as the second component of a binary system with nifedipine, a class II drug (low solubility and high permeability) that is unstable in the amorphous state. Nifedipine is used for the treatment of angina pectoris and primary hypertension.
There are reports that cimetidine increases the pharmacokinetic exposure of nifedipine with no apparent effect on the pharmacological response [15]; keeping this information in mind, it is of interest to explore the stability of these two APIs over a wide range of compositions in order to find possible combined formulations. In relation to NIM-CAR, nimesulide is also a class II drug, and one of the most commonly prescribed drugs for its anti-inflammatory, antipyretic and analgesic activities [16]. It belongs to the sulfonanilide compound class and has a low incidence of side effects. It is indicated for patients with disorders such as arthritic conditions, musculoskeletal disorders, headache and vascular diseases [17]. Carvedilol is a class II drug used as an antihypertensive agent and in the treatment of heart failure [18]; considering the high number of patients that may present both illnesses, carvedilol was selected to be studied in combination with nimesulide. Phase transitions of the new binary systems were fully characterized over a wide range of molar ratios. Preparation and characterization of the new co-amorphous systems were performed in order to study their long-term stability in the amorphous state as a function of glass transition behavior, composition and shelf time. Results As an initial step towards the characterization of the proposed systems, samples of the pure APIs (NIF, NIM, CAR and CIM) shown in Table 1 were prepared to evaluate the thermal properties of their crystalline and amorphous states. Two new binary co-amorphous systems were prepared, NIF-CIM and NIM-CAR, whose components were selected according to their therapeutic complementarity. These mixtures were prepared over a wide range of molar fractions to construct phase diagrams and to gather enough information to evaluate the effect of composition on the glass transition temperature and the stability of the amorphous state. Measurements of the glass transition temperatures after a long storage time showed a significant increase in Tg compared to the values observed right after preparation of the glass, indicating a structural relaxation process during storage. This was monitored as a function of time with the aim of describing the effect of this relaxation on the stability of the amorphous state. Table 1 also shows properties of the pure active ingredients selected in this work. The chemical structures of these substances are shown in Figure 1. Figure 2a shows the thermograms corresponding to the heating of crystalline samples of the NIF-CIM system. For the pure components, the endothermic peak corresponds to the melting of the active ingredient (mole fraction xc = 1.0 corresponds to cimetidine and xc = 0 corresponds to nifedipine). The melting of CIM occurred at 141.5 °C, which corresponds to polymorph A or D of this drug; these two polymorphs have very close melting temperatures according to the results reported by Bauer-Brandl et al. [19]. For pure nifedipine, the endothermic peak corresponding to its melting occurred at 172 °C; three polymorphs of this drug have been reported, and according to the reported data this peak corresponds to the melting of polymorph Form I (α) [8,25,26]. In the case of the crystalline mixtures, the first endothermic peak corresponds to the melting of the eutectic composition and the second to the liquidus temperature.
The eutectic (or invariant) temperature is the lowest temperature at which a specific proportion of the components of the binary mixture (the eutectic composition) starts to melt. As the temperature is increased, the proportion of the liquid phase gradually increases until the whole sample is molten at the liquidus temperature. Figure 2b shows the thermograms resulting from the thermal analysis of the amorphous in situ quenched samples, where the blue lines correspond to the measurements performed immediately after preparation of the amorphous samples in differential scanning calorimetry pans. According to the heating curves, no crystallization was observed during heating for pure CIM (xc = 1), which showed a glass transition signal at 43.6 °C. For NIF (xc = 0), the heating scan performed immediately after preparing an amorphous sample also showed a Tg signal at around 43 °C, as well as an exothermic peak at 98 °C due to crystallization on heating. The binary system with molar fraction xc = 0.2 also showed a Tg and a crystallization process, which indicates that this particular composition is likewise unstable when the sample is heated; this can be explained by the fact that this mixture is rich in NIF. It is worth mentioning that, although the sample with composition xc = 0.2 crystallizes when heated, it was stable during storage at 25 °C. Curves with mole fractions from xc = 0.3 to xc = 0.9 showed very similar values of Tg, and none of these systems presented a crystallization process, indicating that these amorphous binary samples are stable against crystallization. These results show that stabilization of the amorphous state of NIF can be achieved by adding CIM (in the composition range xc = 0.3-0.9). The NIF-CIM System After storing the samples at room temperature in dry conditions (in a desiccator at 25 °C) for 133 days, a second measurement was performed. These thermograms are shown with thin gray lines (Figure 2b). A higher glass transition temperature and an overshoot are observed for each of the stored samples, except for pure nifedipine, which showed almost complete crystallization (close to 90% of the amorphous material was lost to spontaneous crystallization). The overshoot observed during the glass transition measurement corresponds to the recovery of the lost enthalpy as a result of a spontaneous molecular reorganization of the amorphous material, in which its structure shifts towards thermodynamic equilibrium [27]. This molecular reorganization also explains the increment of about 5 to 15 degrees in the glass transition temperatures of CIM and CIM-NIF mixtures. An increment of the glass transition temperature was reported by Pikal et al. in studies of the annealing of sugar glasses performed at several temperatures [28], but a similar observation has not previously been reported for active pharmaceutical ingredients during storage at room temperature. For stored samples with compositions from xc = 0.3 to xc = 1, the absence of crystallization and melting endotherms indicates stability in the amorphous state during storage of these binary mixtures. Figure 3a shows the thermograms corresponding to the heating of crystalline samples of the NIM-CAR system. As for the NIF-CIM system, the onsets of the endothermic peaks of the pure components correspond to the melting temperatures of each active ingredient.
For each composition of the binary mixtures, the onset of the first endothermic process corresponds to the eutectic temperature and the peak of the second endothermic process is taken as the liquidus temperature [29]. Nimesulide is unstable in the amorphous state, and it can be observed that stabilization of the amorphous state can be achieved by adding carvedilol, since, for compositions from xc = 0.3 to xc = 0.8, only a glass transition is observed and no crystallization process is present. Together with the thermograms for the samples measured immediately after preparation, Figure 3b also shows the thermograms of stored samples (85 days of storage at 25 °C). As observed for the NIF-CIM system, the thermograms of the stored NIM-CAR samples also present a clear overshoot at the glass transition due to the structural relaxation of the material. Once again, the glass transition temperatures of the stored samples are in all cases higher, by almost 10 °C, than the values measured right after quenching. Phase Transition Diagrams From the processes observed in the thermograms for the samples in the crystalline state (see Figure 2a), a phase diagram was constructed for the NIF-CIM system, presented in Figure 4a. Tg values for samples measured right after preparation (just after quenching) and after 133 days of storage are also included in the same phase diagram. The glass transition temperatures of the pure components and of several of the prepared compositions are very similar, near 43 °C. In the case of the stored samples, the Tg is observed at higher values, close to 56 °C, an increment of 13 °C. This is an important finding because, in most cases, the Tg of co-amorphous systems is evaluated only right after the amorphous material has been prepared, whereas measurements should be made after enough time has passed to reach a stable Tg corresponding to a relaxed glassy structure. This time may vary for each sample, but for organic molecules, which are typically classified as fragile liquids, three to four weeks may be needed. It is important to mention that, although relaxation has been observed as a result of subjecting amorphous samples to annealing in previous studies [30], there are no prior reports of increments in Tg during storage at room temperature for amorphous drugs. From the transition temperatures observed in the thermograms shown in Figure 3a, a phase diagram for NIM-CAR was constructed, in which the glass transition temperatures for the recently prepared amorphous samples and those measured after a storage period of 85 days are also included. It can be observed that, in this binary system, the glass transition temperatures are higher for the mixtures than for the pure components and that, for all stored samples, an increment of Tg of about 11 °C was observed. Increments in Tg during storage at room temperature have not been reported for co-amorphous systems, since, in most cases, Tg is only evaluated right after the amorphous material has been prepared. These findings suggest that measurements of the glass transition should be made after enough time has passed to reach a more relaxed glassy structure. To study this increment of Tg during storage, a detailed monitoring of this phenomenon was performed.
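The composition dependence of the mixture Tg just described is often rationalized with the empirical Gordon-Taylor mixing rule. The paper does not state that this equation was used, so the sketch below is only an illustration of the general idea, with an assumed mixing parameter K = 1; note that when the two pure-component Tg values nearly coincide (as for NIF and CIM, both near 43 °C), the predicted curve is almost flat in composition, consistent with the similar Tg values reported above:

```python
import numpy as np

def gordon_taylor(w2, Tg1, Tg2, K):
    """Gordon-Taylor estimate of the mixture Tg (temperatures in kelvin).

    w2 is the weight fraction of component 2; K is an adjustable parameter,
    often approximated by (rho1*Tg1)/(rho2*Tg2)."""
    w1 = 1.0 - w2
    return (w1 * Tg1 + K * w2 * Tg2) / (w1 + K * w2)

# Pure-component Tg values taken from the ~43 C figures quoted in the text.
Tg_nif, Tg_cim = 316.2, 316.8  # kelvin
for w_cim in np.linspace(0.0, 1.0, 6):
    tg_mix_C = gordon_taylor(w_cim, Tg_nif, Tg_cim, K=1.0) - 273.15
    print(f"w_CIM = {w_cim:.1f} -> predicted Tg ~ {tg_mix_C:.1f} C")
```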
Behavior of Glass Transition during Storage Measurements of Tg after set storage times were performed to study the behavior of the glass transition and relaxation processes for samples stored at 25 °C. Results are shown in Figure 5 for pure cimetidine and carvedilol (the nimesulide and nifedipine samples presented crystallization and are therefore not included in this analysis) and in Figure 6 for the binary mixtures NIF-CIM and NIM-CAR of composition xc = 0.5. The analysis of the behavior of Tg as a function of time, presented in Figure 7, shows a two-step relaxation process: a first step that occurs relatively fast and is generally responsible for the largest increment of Tg, and a second, slower relaxation step whose change in Tg is not as significant as that of the first. From these results, it is concluded that measurements should be made after enough time has passed to allow the fast relaxation process to occur. This time may vary for each sample, but it seems that, for organic molecules, which are typically classified as fragile liquids, ten to twenty days may be needed. It is important to mention that, although relaxation has been observed as a result of subjecting amorphous samples to annealing in previous studies [30], there are no prior reports of increments in Tg for amorphous drugs during shelf storage at room temperature. The stability of two active ingredients that are unstable in the amorphous state (NIM and NIF) was markedly extended, to almost two years, by formulating them as co-amorphous binary systems. As evidence of the long-term stability of the new co-amorphous formulations presented in this work, Figure 8 shows thermograms of binary systems that have remained stable in the amorphous state for longer than fifteen months (the samples are still stable and still being monitored). Spectroscopic Analysis Fourier transform infrared (FTIR) spectra of the pure active ingredients in the crystalline and amorphous states were acquired to gather additional information on the structural modifications caused by the loss of directional order and changes in intermolecular interactions. In addition, spectra of binary amorphous mixtures of 1:1 molar composition are presented for comparison with the spectra of the pure amorphous samples. As can be seen in Figure 9a, the signal corresponding to the -NH functional group of cimetidine broadens on going from the crystalline to the amorphous state, with a corresponding shift from 3220 to 3260 cm−1. This shift to a higher wavenumber suggests a loss of hydrogen bonding between the amino hydrogen and the nitrogen of the guanidine group. This observation correlates with the merging of the cimetidine signals at 1620 and 1580 cm−1, associated with the sp2 C=N double bond, into a single strong band in the amorphous state. Signals at 1200, 1160 and 1080 cm−1, attributable to the stretching of the sp2 C-N single bond, also show significant broadening in the amorphous state. Similarly, in the case of nifedipine, the -NH signal at 3220 cm−1 in the crystalline state shifts to 3340 cm−1, while the ester -C=O absorption occurring at 1680 cm−1 in the crystalline state shifts to 1700 cm−1, indicating a loss of hydrogen bonding interactions due to molecular disorder. The symmetric -NO2 signal at 1310 cm−1 apparently splits into two signals, at 1280 and 1300 cm−1, in the amorphous state.
After inspection of the spectrum of the amorphous CIM-NIF binary mixture, it can be deduced that, in general, all IR signals are to a good approximation a superposition of the signals present in amorphous cimetidine and nifedipine, except for the broad band observed in amorphous nifedipine from 1600 to 1700 cm−1, which is reduced to a narrower band around 1680 cm−1 in the mixture. Figure 9b also shows that the combined -NH and -OH signal of carvedilol shifts to a higher wavenumber, from 2940 cm−1 to 3400 cm−1, suggesting a weakening of the hydrogen bonding in the amorphous state. Above 1400 cm−1, crystalline and amorphous carvedilol are not significantly different, and signals below 1000 cm−1 observed in the crystal are missing in the amorphous state. Comparing crystalline and amorphous nimesulide, all signals are in general broadened and weakened. The -NH signal of the sulfonamide shifts from 3280 cm−1 to a broad signal at 3260 cm−1; in this case, the slight shift observed is consistent with an intramolecular hydrogen bond that is not greatly disturbed by the molecular disorder introduced by amorphization. In the amorphous 1:1 mixture, an apparent shift of about 50 cm−1 of the nimesulide signals at 1100 and 1120-1150 cm−1 is observed, probably merging into signals belonging to carvedilol in the same range. Materials Nifedipine (98%), cimetidine, carvedilol (USP) and nimesulide were purchased from Sigma-Aldrich (St. Louis, MO, USA) and used as received without further purification. The co-amorphous binary systems (NIF-CIM, NIM-CAR) were prepared by melt-quenching. The chemical structures of these substances are shown in Figure 1. Determination of Phase Transitions by Thermal Analysis To identify the phase transitions of the pure components and mixtures in the amorphous and crystalline samples, a Perkin-Elmer Diamond differential scanning calorimeter equipped with an intra-cooler system (Waltham, MA, USA) was used. The amount of sample for thermal analysis was ca. 3 to 5 mg, packed and sealed in aluminum cells with a volume of 50 µL. The instrument was calibrated with indium. The heating and cooling method applied during thermal analysis to identify the phase transitions of crystalline samples of the pure active ingredients was as follows: samples were heated at a rate of 10 °C/min from 30 °C until completely molten. After identification of the endothermic signal corresponding to the melting temperature, the molten samples were cooled in the DSC instrument at a cooling rate of around 70 °C/min; the purpose of this cooling step was to produce amorphous samples in situ by melt-quenching. Once amorphous samples were obtained, a second heating scan at 10 °C/min was performed to identify the glass transition temperature of the recently prepared amorphous material. For the binary systems, mixtures of different compositions were prepared by gently grinding the solids in a mortar and were then analyzed in the same fashion as the pure components. To monitor the behavior of the glass transition temperature as a function of shelf time, amorphous samples in sealed DSC cells were stored in a desiccator at room temperature (25 °C) and analyzed after set storage times. FTIR Spectroscopic Analysis For the spectroscopic analysis, a Fourier transform infrared spectrometer, a Perkin-Elmer Spectrum 400 FTIR-NIR (Shelton, CT, USA) operating in the near-infrared region, was used.
Samples were placed in contact with a horizontal attenuated total reflectance (ATR) accessory (Shelton, CT, USA) with a zinc selenide prism. All spectra were acquired using four scans at a resolution of 4 cm−1, over the range from 380 to 4000 cm−1. For the amorphous samples, 5 mg of the crystalline sample was placed on aluminum foil, melted in an oven and cooled at room temperature to vitrify; the amorphous samples that adhered to the aluminum foil were then immediately placed on the ATR prism for measurement. Conclusions Stabilization of amorphous nifedipine and nimesulide (APIs that are unstable in the amorphous state as pure ingredients) was achieved by the preparation of the new co-amorphous formulations NIF-CIM and NIM-CAR, which are potential candidates for future use in combined therapy. It was found that the Tg of binary systems can increase as a function of storage time, so it is recommended to monitor this parameter during storage (for at least three weeks after preparation) in order to obtain a more objective assessment of the thermomechanical stability of the samples. Considering that the solubility of active ingredients is enhanced in the amorphous state, the compositions of new formulations intended for therapeutic use will have to be adjusted accordingly. For this reason, in order to develop new co-amorphous drugs for combined therapy, the study of phase transitions over a wide range of compositions (in both the amorphous and crystalline states), as well as the monitoring of Tg as a function of storage time, is necessary to make sure the systems will remain stable at complementary therapeutic doses.
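The two-step increase of Tg with storage time reported above can be described phenomenologically by a sum of a fast and a slow exponential recovery toward a limiting Tg. The following sketch fits such a biexponential form to synthetic data loosely mimicking the reported ~13 °C total increase; this is one simple way to parameterize the behavior, not the authors' analysis, and all data points and initial guesses are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_step_relaxation(t, Tg_inf, A1, tau1, A2, tau2):
    """Tg(t) approaching Tg_inf via a fast (tau1) and a slow (tau2) process."""
    return Tg_inf - A1 * np.exp(-t / tau1) - A2 * np.exp(-t / tau2)

# Synthetic storage-time data (days): starts near 43 C, saturates near 56 C.
t_days = np.array([0, 3, 7, 14, 21, 35, 60, 90, 133], dtype=float)
tg_obs = two_step_relaxation(t_days, 56.0, 10.0, 5.0, 3.0, 60.0)
tg_obs += np.random.default_rng(0).normal(0.0, 0.3, t_days.size)  # noise

p0 = (55.0, 8.0, 4.0, 4.0, 50.0)  # guesses: Tg_inf, A1, tau1, A2, tau2
popt, _ = curve_fit(two_step_relaxation, t_days, tg_obs, p0=p0, maxfev=20000)
print("fitted Tg_inf = %.1f C, fast tau = %.1f d, slow tau = %.1f d"
      % (popt[0], popt[2], popt[4]))
```

A fit of this kind makes the practical recommendation concrete: once the fast time constant has elapsed several times over (here roughly two to three weeks), the measured Tg is close to its relaxed value.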
5,804.4
2016-12-01T00:00:00.000
[ "Materials Science" ]
Succinyl-proteome profiling of Pyricularia oryzae, a devastating phytopathogenic fungus that causes rice blast disease Pyricularia oryzae is the pathogen of rice blast disease, which is a devastating threat to rice production worldwide. Lysine succinylation, a newly identified post-translational modification, is associated with various cellular processes. Here, liquid chromatography tandem-mass spectrometry combined with a high-efficiency succinyl-lysine antibody was used to identify the succinylated peptides in P. oryzae. In total, 2109 lysine succinylation sites in 714 proteins were identified. Ten conserved succinylation sequence patterns were identified, among which K*******Ksuc and K**Ksuc were the two most preferred. The frequency of lysine succinylation sites, however, varied greatly among organisms, including plants, animals and microbes. Interestingly, the number of succinylation sites per protein in P. oryzae was significantly greater than in most previously published organisms. Gene ontology and KEGG analyses showed that these succinylated peptides are associated with a wide range of cellular functions, from metabolic processes to stimulus responses. Further analyses determined that lysine succinylation occurs on several key enzymes of the tricarboxylic acid cycle and the glycolysis pathway, indicating that succinylation may play important roles in the regulation of basal metabolism in P. oryzae. Furthermore, more than 40 pathogenicity-related proteins were identified as succinylated proteins, suggesting an involvement of succinylation in pathogenicity. Our results provide the first comprehensive view of the P. oryzae succinylome and may aid in finding potential pathogenicity-related proteins to control rice blast disease. Significance Plant pathogens represent a great threat to world food security, and P. oryzae infection causes an enormous reduction in the global yield of rice. Here, the succinylated proteins in P. oryzae were identified. Furthermore, succinylation sites were compared among various species, indicating that different degrees of succinylation may be involved in the regulation of basal metabolism. These data facilitate our understanding of the metabolic pathways and proteins that are associated with pathogenicity. To date, lysine succinylation has been reported in a range of organisms, including protozoa (Toxoplasma gondii), plants (Solanum lycopersicum, Taxus × media, Oryza sativa, Brachypodium distachyon and Dendrobium officinale) and mammals (Rattus norvegicus, Homo sapiens and Mus musculus) [7][8][9][10][11][12][13][14][15][16]. A large number of succinyl-lysine residues have been identified by mass spectrometry (MS) and protein sequence alignments in various organisms 17. Based on previously published data sets, proteomic lysine succinylation is evolutionarily conserved and occurs frequently on proteins involved in metabolic pathways such as glycolysis, the tricarboxylic acid (TCA) cycle and carbohydrate metabolism 10,12,18. Thus, identifying succinylated proteins may provide useful information for biological research. Rice (Oryza sativa L.) blast, a destructive disease of rice, is caused by the ascomycete Pyricularia oryzae (synonym, Magnaporthe oryzae) 19,20. Each year, this disease causes an enormous reduction (approximately 10% to 30%) in the global yield of rice. Because P. oryzae varies rapidly, it is difficult to find rice cultivars resistant to rice blast 21.
For years, large amounts of fungicides have been used to control this disease 22, and with increasing application doses a series of potentially hazardous health and environmental issues has emerged 23. Thus, it is urgent to develop an efficient and sustainable strategy for long-lasting resistance of rice to changing fungal pathogens. Studies on P. oryzae pathogenic genes, which have the potential to be targeted against fungal disease, have been carried out in recent decades, and a number of fungal genes involved in pathogenicity have been identified in P. oryzae 24,25. For example, MPG1, a gene expressed during infectious growth of the fungal pathogen in its host, is involved in the pathogenicity of P. oryzae 26. Four LIM domain proteins involved in infection-related development and pathogenicity are important regulators of infection-associated morphogenesis in the rice blast fungus 27. Various metabolic pathways have been found to be essential for understanding pathogenicity in P. oryzae. For example, the fungus can suppress host defenses via the nitro-oxidative stress response to facilitate its development within host cells 28. The cellular glucose-mediated TOR pathway plays an important role in cell cycle progression during the infection process of P. oryzae 29. Glutaminolysis has a close relationship with appressorium formation in P. oryzae 30, and lipid degradation and peroxisomal metabolism have been found to play key roles in appressorial turgor generation and host invasion [31][32][33][34][35][36]. On the other hand, avirulence genes in P. oryzae, such as AvrPiz-t, AVR-Pik, Avr-Pi54 and AVR-Pita1, have been isolated and intensively investigated for their contributions to rice resistance and their interactions with resistance genes [37][38][39][40][41]. Additionally, host-induced gene silencing of three predicted pathogenicity genes, ABC1, MAC1 and PMK1, significantly inhibited the development of rice blast disease 42. PTMs in P. oryzae have received attention only in very recent years. Lysine-acetylated proteins in vegetative hyphae have been identified 43, sirtuin-mediated deacetylation has been found to be crucial for plant defense suppression and infection by the fungus 44, and inhibition of histone deacetylase causes a reduction in appressorium formation of P. oryzae 45. Increasing numbers of studies indicate that lysine succinylation participates extensively in the regulation of various metabolic pathways in both microbial and plant cells 9,12,13,46,47. However, succinylated proteins in P. oryzae have not yet been identified. In the present work, we systematically identified the succinylated proteins in P. oryzae, which may facilitate our understanding of the metabolic pathways and proteins that are associated with pathogenicity. Methods Materials and protein extraction. The P. oryzae strain Guy11 was used in our study 48. The culture and storage of P. oryzae were performed using standard procedures on complete medium (CM) 26. The fungal strain was grown in CM solution, shaking at 150 rpm at 28 °C in darkness for 4 days before harvest. The samples were then frozen in liquid nitrogen and sonicated three times on ice using a high-intensity ultrasonic processor (type number JY92-IIN, Scientz, Ningbo, China) in lysis buffer [8 M urea, 1% Triton X-100, 10 mM dithiothreitol, 0.1% Protease Inhibitor Cocktail IV, 3 μM trichostatin A, 50 mM nicotinamide and 2 mM ethylenediaminetetraacetic acid (EDTA)]. Proteins were extracted as previously described 12.
In brief, after centrifugation at 15,000 × g for 15 min at 4 °C, the supernatant was incubated in ice-cold acetone for more than 2 h at −20 °C. The precipitated proteins were redissolved in buffer (8 M urea and 100 mM NH4CO3, pH 8.0) for further tests. A 2-D Quant kit (GE Healthcare, Uppsala, Sweden) was used to determine the protein concentrations according to the manufacturer's instructions. Trypsin digestion. Three protein samples were precipitated with 20% trichloroacetic acid overnight at 4 °C, and the resulting precipitate was washed three times with ice-cold acetone. The protein solution was then reduced with 5 mM dithiothreitol at 37 °C and alkylated with 20 mM iodoacetamide for 45 min at 25 °C in the dark; to terminate the reaction, 30 mM cysteine was added and incubated for 20 min at room temperature. The solution was then diluted in 100 mM NH4HCO3 and digested with trypsin (Promega, Beijing, China) at an enzyme/substrate ratio of 1:50 at 37 °C overnight; to ensure complete digestion, trypsin was added again at an enzyme/substrate ratio of 1:100 and incubated for a further 4 h. High performance liquid chromatography (HPLC) and affinity enrichment. Three replicate samples were fractionated by high-pH reverse-phase HPLC using an Agilent 300 Extend C18 column (Agilent, Beijing, China; 5 μm particles, 4.6 mm ID, 250 mm length). First, the samples were separated using a gradient of 2% to 60% acetonitrile in 10 mM ammonium bicarbonate for 80 min at pH 10, and the eluate was then combined into eight fractions for further analyses. Lysine-succinylated peptides were enriched by an immuno-affinity procedure. The digested samples were redissolved in NETN buffer (100 mM NaCl, 1 mM EDTA, 50 mM Tris-HCl, pH 8.0, and 0.5% Nonidet P-40) and incubated with pre-washed anti-succinyl-lysine agarose beads (PTM Biolabs, Hangzhou, China) at 4 °C overnight with gentle rotation. After incubation, the antibody beads were collected and washed carefully three times with NETN buffer, twice with NET buffer (100 mM NaCl, 1 mM EDTA and 50 mM Tris-Cl, pH 8.0), and once with ddH2O. The bound peptides were eluted from the beads with 1% trifluoroacetic acid and dried in a vacuum dryer. The obtained peptides were desalted with C18 ZipTips (Millipore, Shanghai, China) according to the manufacturer's instructions and then subjected to HPLC-MS/MS analysis. Liquid chromatography tandem-mass spectrometry (LC-MS/MS) analysis. The LC-MS/MS analysis was performed following the procedure described previously 12. Briefly, the peptides were dissolved in 2% acetonitrile with formic acid and directly loaded on an Acclaim PepMap 100 reversed-phase pre-column (Thermo Scientific, Shanghai, China). For the database search, Trypsin/P was specified as the cleavage enzyme, with a maximum of four missed cleavage sites allowed per peptide. Mass error was set to 20 and 5 ppm for the first-round and main searches, respectively, and to 0.02 Da for fragment ions. Succinylation on the protein N-terminus was included as a variable modification, and the false discovery rate threshold for modification sites on peptides was specified at 1%. The parameters used in MaxQuant were set as follows: the minimum peptide length was set to 7 and the site localization probability threshold to > 0.75. Protein annotation and enrichment analysis. The gene ontology (GO)-based annotation of the proteome was performed by querying the UniProt-GOA database (http://www.ebi.ac.uk/GOA/).
The IDs of our identified proteins were first converted to UniProt IDs. Proteins not annotated in the UniProt-GOA database were annotated with InterProScan software (http://www.ebi.ac.uk/interpro/) using the alignment method. Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways were annotated using the online KEGG Automatic Annotation Server, and the annotation results were mapped using the online KEGG Mapper tool. Subcellular localizations of the identified proteins were predicted with the software WoLF PSORT (http://psort.hgc.jp/). Enrichment analyses of GO terms, KEGG pathways and protein domains were performed using a two-tailed Fisher's exact test, with correction for multiple hypothesis testing performed using the standard false discovery rate control method. For GO and KEGG categories, P value < 0.05 was the threshold of significance. For the bioinformatics analyses, such as the GO-based, KEGG-based and domain-based enrichments, all of the sequences in the database were used as the background. Phylogenetic tree analysis. Multiple sequence alignments of the full-length protein sequences from various species were performed with ClustalW (http://www.ebi.ac.uk/Tools/msa/clustalw2/). The alignments were visualized using GeneDoc software (http://www.nrbsc.org/gfx/genedoc/), and the phylogenetic trees for each glycolytic enzyme were constructed from 10 aligned sequences from different species using MEGA 7.0 (http://www.megasoftware.net/) and the neighbor-joining method. Bootstrap values were calculated from 1,000 iterations. The sequences of all proteins used in our study were obtained from the NCBI protein database. Motif clustering analysis. First, an enrichment analysis of all lysine succinylation substrates was carried out based on their P values; categories enriched in at least one cluster with P value < 0.05 were retained. Then, the P value matrix was calculated according to our previous work 12, and a heatmap was used to visualize the cluster membership using the 'ggplots' R package (http://ggplot2.org/). Results Proteome-wide analysis of lysine succinylation sites of P. oryzae. By combining affinity enrichment and high-resolution LC-MS/MS analysis, lysine-succinylated sites and proteins were systematically identified in P. oryzae (Fig. 1a). First, western blotting with a succinyl-lysine-specific antibody was carried out to determine the extent of lysine succinylation of the total proteins of P. oryzae; a wide mass range of protein bands indicated the diversity of the succinylated proteins (Fig. 1b). Then, two quality parameters of the identified peptides, mass error and peptide length, were checked. The mass errors of the majority of succinylated peptides were lower than 0.02 Da, and the lengths of most succinylated peptides varied from 7 to 18 amino-acid residues (Fig. 1c,d), indicating that the sample preparation and LC-MS/MS data met quality standards. Altogether, 2109 lysine succinylation sites in 714 proteins with diverse biological functions and localizations were identified (Table S1). Most proteins contained only one to three succinylation sites (Fig. 1e).
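The enrichment test described in the methods above (a two-tailed Fisher's exact test with FDR correction) can be illustrated with a short Python sketch. The 2x2 counts below are hypothetical placeholders, not values from the study, and the scipy/statsmodels calls stand in for whatever pipeline the authors actually used:

```python
from scipy.stats import fisher_exact
from statsmodels.stats.multitest import multipletests

# Hypothetical 2x2 counts per category:
# [[succinylated & in term, succinylated & not in term],
#  [background & in term,   background & not in term]]
tables = {
    "TCA cycle":  [[30, 684], [60, 10000]],
    "glycolysis": [[18, 696], [55, 10005]],
    "ribosome":   [[70, 644], [300, 9760]],
}

pvals = {term: fisher_exact(tab, alternative="two-sided")[1]
         for term, tab in tables.items()}
reject, qvals, _, _ = multipletests(list(pvals.values()), method="fdr_bh")
for (term, p), q, sig in zip(pvals.items(), qvals, reject):
    print(f"{term}: p = {p:.2e}, BH-adjusted q = {q:.2e}, enriched: {sig}")
```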
Interestingly, five proteins, including two HSP70-like proteins (G4MNH8 and G4NF57), an aconitate hydratase (G4N7V9), a glucose-regulated protein (G4MK90) and HSP90 (G4MLM8), contained more than 20 succinylation sites, indicating a deep involvement of succinylation in various biological processes. The P. oryzae succinylome was then compared with several previously published succinylomes of other species; as discussed below, the average number of succinylation sites per protein in P. oryzae was higher than in most of these datasets. Characterization of the P. oryzae lysine succinylome. GO functional classification is a powerful tool for understanding the possible roles of the identified succinylated proteins in various biological processes 49. GO term classification indicated that protein succinylation is involved in a diverse range of biological and molecular processes in P. oryzae (Fig. 3a). In the 'biological process' category, the largest number of succinylated proteins were associated with 'organic substance metabolic process' (level 3). In 'molecular function', many succinylated proteins were grouped into the 'structural constituent of ribosome' (level 3) and 'ligase activity' (level 3) terms. In 'cellular component', most of the succinylated proteins were associated with 'intracellular part' and 'cytoplasm' (Table S2). In P. oryzae, the largest group of succinylated proteins comprises mitochondria-located proteins (45.7%); the mitochondrion is the major organelle from which succinyl-CoA and succinate are derived 50. The second largest group, accounting for 40.4% of the total, comprises cytoplasm-located proteins, which are essential for regulating cellular metabolism. The third largest group of succinylated proteins (11.9%) was located in the nucleus (Fig. 3b and Table S3). Additionally, the relative proportions of succinylated proteins in three common organelles, the mitochondria, cytoplasm and nucleus, were compared among 11 published organisms. Our data showed that P. oryzae possessed almost equal proportions of mitochondria-located (45.7%) and cytosol-located (40.4%) succinylated proteins, and the relative proportions of succinylated proteins in P. oryzae were similar to those in T. rubrum and S. lycopersicum (Fig. 3c). Identification of pathogenicity-related succinylated proteins. In the present study, 42 proteins previously demonstrated to be involved in pathogenicity, such as a hydrophobin-like protein (MPG1), an isocitrate lyase (ICL1), a heat shock protein (SSB1) and a subtilisin-like proteinase (SPM1), were identified as succinylated proteins (Table S7). Among them, 15 proteins contained 1 succinylation site and 7 proteins contained 2 succinylation sites, while SSB1 and a 5-methyltetrahydropteroyltriglutamate-homocysteine S-methyltransferase (MGG_06712) were each found to contain 12 succinylation sites. The identification of these proteins suggests an association of lysine succinylation with fungal pathogenicity. Comparison of succinylation sites among various species. Previous studies have undertaken succinyl-proteome profiling of different species. In our study, data extracted from two microbes (M. tuberculosis and P. oryzae), two mammals (H. sapiens and M. musculus), and six plants (T. media, D. officinale, O. sativa, T. aestivum, L. esculentum and B. distachyon) were used to evaluate potentially conserved mechanisms of succinylation in the regulation of the TCA cycle (Table S8) and glycolysis (Table S9).
The numbers of succinylation sites in the 20 glycolysis-related proteins and 30 TCA-related proteins of the ten representative species were counted and are shown in Fig. 7; for the glycolysis pathway, three enzymes, including dihydrolipoamide dehydrogenase (EC:1.8.1.4), were among the succinylated proteins identified across species. Discussion P. oryzae, an ascomycete fungus responsible for the most fatal rice disease (blast), is a model species for host-pathogen interaction studies 51. Blast disease causes enormous reductions in the yield of rice and is a major threat to food security in Asia and other regions 52. In nature, the infection and establishment of disease depend on the asexual spores of P. oryzae, which colonize leaves by producing necrotic lesions 24. Undergoing a series of developmental and metabolic events during pathogenesis, the conidia of P. oryzae spread to other parts of the host plants at any growth stage 20. Furthermore, strains of the pathogen show rapid variability during rotation of rice varieties 21. Molecular genetic analysis may help to find potential genes for controlling rice blast disease 53; however, only limited information on PTMs in P. oryzae has been revealed [43][44][45]. Lysine succinylation is a newly identified PTM involved in diverse protein functions in both prokaryotic and eukaryotic cells 11. In our study, a large number of succinylated proteins were identified in P. oryzae, and the average number of succinylation sites per protein in P. oryzae is larger than in most published species [7][8][9][10][11][12][13][14][15]. This higher degree of succinylation suggests an important regulatory role of succinylation in this fungus. Our data also showed that P. oryzae possessed the highest proportion of mitochondria-located succinylated proteins (45%), much higher than that in H. sapiens (38%). In several mammals, lysine succinylation has been identified on various mitochondrial enzymes, such as glutamate dehydrogenase, malate dehydrogenase and citrate synthase, and has positive effects on the activities of enzymes associated with mitochondrial metabolism 50,54. The bias of succinylation towards mitochondrial proteins in P. oryzae points to variability among species in the proportion of succinylated mitochondrial proteins 7. Moreover, 10 preferred sequence patterns were identified among the succinylated proteins of P. oryzae. Some of these motifs are not unique to this species, such as 'K******Ksuc' in B. distachyon, D. officinale, T. aestivum, S. lycopersicum and V. parahemolyticus 10; 'K*****Ksuc' in M. tuberculosis and B. distachyon 15,46; and 'K*******Ksuc' in S. lycopersicum and V. parahemolyticus 10,16. This suggests that some motifs may be shared by both plants and microbes. Ubiquitin-mediated protein degradation, a highly conserved process, occurs in the proteasome and plays diverse roles in cellular processes 56. A previous study revealed that ubiquitin-mediated proteolysis plays an important role in the fungal development and pathogenicity of P. oryzae 57, and there is a close relationship between protein degradation and infection structure development in P. oryzae 58. Increasing evidence shows that a large portion of lysine-succinylated proteins are involved in central metabolism, including the glycolysis pathway, the TCA cycle and photosynthesis 16. The TCA cycle and the glycolysis pathway, which mainly provide the energy for life processes, interact with sugar, fat and protein metabolism 18.
The numbers of succinylation sites in the key enzymes of the 10 representative species were counted and are shown in Fig. 7. For DLST, no succinylation sites were identified in the three mammals, while 10 sites were identified in V. parahemolyticus and 12 in M. tuberculosis 16,46. In P. oryzae, 11 succinylation sites in DLST were identified, suggesting that DLST is highly succinylated in microbes. For OGDH, the greatest number of succinylation sites was identified in P. oryzae (14 sites), and no sites were identified in most of the tested microbes. However, for IDH1, ACLY, CS and SDHA, succinylation sites were found in all of the tested species, suggesting that succinylation of TCA-related proteins has been ubiquitous throughout evolution. The average number of succinylation sites on TCA-related enzymes among the various species varied from 2.78 to 11.00. Interestingly, the average number of succinylation sites per glycolysis-related enzyme in P. oryzae was larger than in the plant species and similar to that in mammals; likewise, the average number of succinylation sites per TCA cycle-related enzyme in P. oryzae was larger than in the plant species and similar to that in mammals. The frequency of lysine succinylation sites, however, varied greatly among organisms, indicating that different degrees of succinylation may be involved in the regulation of basal metabolism. Proteinogenic amino acids are not only the building blocks of proteins but also participants in various biological processes. Rice immune responses have been reported to be induced by amino acids and their metabolites; for example, systemic disease resistance against rice blast could be induced in leaves by treating rice roots with glutamate 59. In P. oryzae, a number of enzymes involved in amino acid metabolism were identified as succinylated proteins, indicating an important role of succinylation in the amino acid biosynthesis and metabolism of P. oryzae. Notably, more than 40 well-reported pathogenicity-related proteins, such as MPG1, SSB1 and 1,3-β-GTF, were identified as succinylated proteins. MPG1 is an important hydrophobin required for full pathogenicity of the fungus 26,60; during the initial stages of host infection, high-level expression of the MPG1 gene is involved in appressorium formation. Cell wall biogenesis is essential for fungal growth and pathogenesis during the infection process, and the cell wall biogenesis protein phosphatase SSD1 is a potential alternative regulator of cell wall biogenesis 61. Additionally, 1,3-β-GTFs regulate the structure of the rice blast fungal cell wall during appressorium-mediated infection 62. The presence of succinylation sites on such a number of pathogenicity-related proteins indicates a potential role of succinylation in the pathogenicity of P. oryzae; the functions of these succinylation sites require further in-depth investigation. In conclusion, we present a comprehensive succinyl-proteome of P. oryzae, a filamentous heterothallic ascomycete highly pathogenic to rice. Our data are a basic resource for the functional validation of succinylated proteins and a starting point for investigations into the molecular basis of lysine succinylation in P. oryzae.
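The motif notation used throughout this study (e.g. K*******Ksuc and K**Ksuc) denotes an upstream lysine a fixed number of residues before the succinylated lysine. As a hedged illustration of how such patterns can be counted in sequence windows centered on the modified site, the Python sketch below uses a 21-residue window (an assumed, typical choice) and entirely invented placeholder sequences, not data from the study:

```python
# 21-residue windows centered on the succinylated lysine (index 10);
# the sequences below are invented placeholders, not data from the study.
windows = [
    "AAKAAAAAAAKGGGGGGGGGG",  # K eight residues upstream: K*******Ksuc
    "AAAAAAAKAAKGGGGGGGGGG",  # K three residues upstream: K**Ksuc
    "MNPQRSTVWYKACDEFGHIKL",  # matches neither pattern
]

# motif name -> distance between the upstream K and the modified K
motifs = {"K*******Ksuc": 8, "K**Ksuc": 3}

CENTER = 10
for name, offset in motifs.items():
    hits = sum(1 for w in windows
               if w[CENTER] == "K" and w[CENTER - offset] == "K")
    print(f"{name}: {hits}/{len(windows)} windows match")
```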
4,942
2019-03-05T00:00:00.000
[ "Biology", "Environmental Science" ]
Efficient and flexible integration of variant characteristics in rare variant association studies using integrated nested Laplace approximation Rare variants are thought to play an important role in the etiology of complex diseases and may explain a significant fraction of the missing heritability in genetic disease studies. Next-generation sequencing facilitates the association of rare variants in coding or regulatory regions with complex diseases in large cohorts at genome-wide scale. However, rare variant association studies (RVAS) still lack power when cohorts are small to medium-sized and when genetic variation explains only a small fraction of phenotypic variance. Here we present a novel Bayesian rare variant Association Test using Integrated Nested Laplace Approximation (BATI). Unlike existing RVAS tests, BATI allows integration of individual or variant-specific features as covariates, while efficiently performing inference based on full model estimation. We demonstrate that BATI outperforms established RVAS methods on realistic, semi-synthetic whole-exome sequencing cohorts, especially when meaningful biological context, such as functional annotation, is used. We show that BATI achieves power above 70% in scenarios in which competing tests fail to identify risk genes, e.g. when risk variants in sum explain less than 0.5% of phenotypic variance. We have integrated BATI, together with five existing RVAS tests, into the 'Rare Variant Genome Wide Association Study' (rvGWAS) framework for data analyzed by whole-exome or whole-genome sequencing. rvGWAS supports rare variant association for genes or any other biological unit, such as promoters, while providing essential functionalities like quality control and filtering. Applying rvGWAS to a chronic lymphocytic leukemia study, we identified eight candidate predisposition genes, including EHMT2 and COPS7A. Introduction The rapidly improving yield and cost-effectiveness of next-generation sequencing (NGS) technologies provide the opportunity to study associations of genetic variants with complex multifactorial diseases in large cohorts at a genome-wide scale. As opposed to genome-wide association studies (GWAS), which are based on counting genotypes at predefined genomic positions with alternative alleles of medium to high minor allele frequency in the population (MAF > 1%), whole-exome and whole-genome sequencing (WES, WGS) enable the study of rare genetic variants (RVs) across the whole exome or genome, respectively. Previous studies have shown that RVs play an important role in the etiology of complex genetic diseases [1][2][3][4]. Furthermore, it has been demonstrated that RVs are more likely than common variants to affect the structure, stability or function of proteins [5,6]. Therefore, statistical analysis of the combined set of rare variants across genes or regulatory elements has the potential to reveal new insights into the genetic heritability of complex diseases and the predisposition to cancer. To this end, rare variant association studies (RVAS), which facilitate the identification of novel disease loci based on the burden of rare and damaging variants of low to medium effect size within genomic units of interest, have been developed [7]. One of the major difficulties when associating rare variants with disease is the lack of power of traditional statistical methods such as those used in GWAS.
Given that few individuals carry the rare alternative allele, association studies based on single variant positions would require extremely large sample sizes. To overcome this obstacle and increase statistical power, studies of RVs simultaneously consider multiple variant positions within functional biological units, such as genes, promoters or pathways, for association with disease. Different statistical methods that address the problem of aggregated analysis of rare variants in case-control studies have been proposed. For example, score-based methods pool the minor alleles per unit into a measure of burden, which is used for association with a disease or phenotypic trait [8][9][10][11]. These burden tests are powerful when a high proportion of the RVs found in a gene affect its function and their effects on the disease are one-sided, i.e. either protective or deleterious. This is rarely the case, since usually few deleterious variants coexist with many neutral and possibly some protective variants. Hence, more advanced methods have been developed that consider heterogeneous effects of RVs on the disease (or trait), mainly based on variance component tests, e.g. SKAT and C-alpha [12,13]. These methods are more powerful than burden tests when the assumption of unidirectional effects does not hold [14]. More recently, novel methods have been introduced that allow both types of genetic architecture to coexist throughout the genome, constructed as a linear combination of burden and variance-component tests, such as SKAT-O [15]. He et al. [16] developed an alternative method, a hierarchical Bayesian multiple regression model (HBMR), which additionally accounts for the variant detection errors commonly produced in NGS data by incorporating genotype misclassification probabilities into the model. Sun et al. [17] proposed a mixed effects test (MiST) within the framework of a hierarchical model, considering biological characteristics of the variants in the statistical model. In brief, MiST assumes that individual variants are independently distributed, with the mean modeled as a function of variant characteristics, and with random effects that account for heterogeneous variant effects on phenotype caused by unknown factors. In the resulting generalized linear mixed effects model (GLMM), variant-specific effects are treated as the random part of the model, and patient and variant characteristics as the fixed part. The authors claim that, under the assumption that associated variants share common characteristics, such as a similar impact on protein function (e.g. primarily loss of function), using this prior information increases the power of the test. However, they also note that attempting to estimate the full model for inference purposes requires multiple integration, such that it becomes too computationally intensive for a genome-wide scan. Instead, a score test under the null hypothesis of no association is proposed, avoiding multiple integration. Building on the concept of MiST, but with the motivation of making inference based on full model estimation, we propose a Bayesian alternative to the GLMM, using the Integrated Nested Laplace Approximation (INLA) for efficient model estimation [18]. Calculating the marginal likelihood to estimate complex models in a fully Bayesian manner is often infeasible; therefore, approximate procedures such as the Markov Chain Monte Carlo (MCMC) method are conventionally applied [16].
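The burden-test idea described above, pooling minor alleles per gene and regressing the phenotype on that single score, can be made concrete with a short Python sketch. This is a generic minimal illustration on simulated data, not the specific Burden or KBAC implementations used in rvGWAS, and all parameter values are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 2000, 25                        # individuals, rare variants in one gene
maf = rng.uniform(0.001, 0.01, p)      # rare minor allele frequencies
G = rng.binomial(2, maf, size=(n, p))  # genotypes coded 0/1/2

burden = G.sum(axis=1)                 # pooled minor-allele count per individual
eta = -2.0 + 0.8 * burden              # hypothetical one-sided deleterious effect
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))  # simulated case/control status

X = sm.add_constant(burden.astype(float))
fit = sm.Logit(y, X).fit(disp=0)       # logistic regression on the burden score
print("burden coefficient: %.2f, p-value: %.3g" % (fit.params[1], fit.pvalues[1]))
```

The sketch also makes the weakness plain: if half the simulated effects were protective, the pooled score would average them out, which is exactly the situation variance-component tests like SKAT address.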
MCMC is a highly flexible approach that can be used to make inference for any Bayesian model. However, evaluating the convergence of MCMC sampling chains is not straightforward [19]. Another concern with MCMC is the extensive computation time, especially in large-scale analyses such as genome-wide scans. INLA is a non-sampling-based numerical approximation procedure developed to estimate hierarchical latent Gaussian Markov random field models. Being based on numerical approaches instead of simulations renders INLA substantially faster than MCMC. Furthermore, Rue and Martino [20] demonstrated for several models that INLA is also more accurate than MCMC when given the same computational resources. The flexibility of modeling within the Bayesian framework, combined with rapid inference approaches, opens new possibilities for genetic association testing. Here, we present a novel Bayesian rare variant Association Test using INLA (BATI), implemented as part of the 'Rare Variant Genome Wide Association Study' (rvGWAS) framework. rvGWAS combines quality control (QC), interactive filtering, detection of data stratification (technical or population-based), integration of functional variant annotations and four commonly used rare variant association tests (Burden, SKAT-O, KBAC and MiST), as well as the two Bayesian alternatives, HBMR and BATI. We demonstrate using realistic benchmarks that BATI substantially outperforms existing methods if prior information on the effect of variants on protein function is used. We further show that BATI successfully copes with complex population structure and other confounders. Finally, we propose how to use the difference in deviance information criterion (ΔDIC) for model selection. Bayesian rare variant Association Test based on Integrated Nested Laplace Approximation (BATI) Integrated Nested Laplace Approximation is a recent approach for Bayesian inference on latent Gaussian models, a versatile and flexible class of models ranging from (generalized) linear mixed models (GLMMs) to spatial and spatio-temporal models. A detailed definition of INLA can be found in [18,21,22]. Here we applied INLA, using the implementation of the R-INLA project (R package INLA, version 17.06.20), to build a hierarchical Bayesian approach to the GLMM for the association of rare variants with phenotypes in the context of case-control studies. Our method, termed BATI, can efficiently and flexibly integrate a large number of categorical and numeric characteristics of genetic variants as covariates, as INLA facilitates estimation of the full model even for complex structures of random effects. Model specification Assume we have N individuals, and let Y_i (i = 1,...,N) be the observed phenotype of the i-th individual, assumed to follow a distribution from an exponential family whose expected value μ_i = E(Y_i) is linked to a linear predictor η_i through a known link function g(·), so that g(μ_i) = η_i. In our case, Y_i is a binary variable distinguishing affected individuals (cases) from unaffected individuals (controls). We propose to construct the likelihood of the data based on a logistic distribution and to use the identity function for g(·). The linear predictor η_i is defined to account for potential confounding covariates at the individual level as well as for covariates at the variant level, such as a variant's functional impact: η_i = X_iᵀα + G_iᵀβ, (2) where X_i is an m×1 vector of individual-based confounding covariates, such as gender, age or ethnicity, and G_i denotes a p×1 vector of genotypes for p RVs.
Each genotype is coded as 0, 1, or 2, representing the number of minor alleles. α is the vector of regression coefficients representing the effects of an individual's covariates on phenotype, and β is the vector of regression coefficients reflecting the genetic variants' effects on phenotype. BATI can account for individual variant characteristics under the assumption that variants with similar characteristics, such as similar functional impact scores or annotation categories (missense, LoF, splice-donor/acceptor, InDel, regulatory), have a similar effect on the function of the protein, and hence on the phenotype, while still allowing for potential variant-specific heterogeneity. Thus β can be modeled in a hierarchical way as

β = Zω + δ, (3)

where Z is a p×q matrix of q variant characteristics (i.e. each row of this matrix holds the functional annotations of a single variant), ω is a q×1 vector of variant-level regression coefficients leveraging variant effects on phenotype based on variant characteristics, and δ is a p×1 random-effects vector representing unknown factors leading to heterogeneous variant effects on phenotype. The random-effects vector is assumed to follow a multivariate Gaussian distribution with mean 0 and covariance matrix τQ. If no dependency structure is defined across variants, Q is a p×p identity matrix and τ the random-effects variance. Even though correlation structure across variants is not considered in this work, it is worth mentioning that INLA has the potential to estimate the elements of Q in such a way that it would reflect the dependency structure. This is enabled by INLA because it provides Laplace approximations of the posterior distributions, potentially enabling the estimation of the full model for complex structures of random effects. Plugging Eq (3) into (2), we obtain the expression of a generalized linear mixed effects model (GLMM):

η_i = X_i^T α + G_i^T (Zω + δ), (4)

with α and ω as fixed-effects coefficients and δ as random-effects coefficients. Given the vector of parameters θ = {α, ω, δ}, the objectives of the Bayesian computation are the marginal posterior distributions for each element of the parameter vector, p(θ_s|y), and for the hyper-parameter, p(τ|y). In order to compute the marginal posteriors for the parameters, we first need to compute p(τ|y) and p(θ_s|τ, y). The INLA approach exploits the assumptions of the model to produce a numerical approximation to the posteriors of interest, based on the Laplace approximation [23].

Model selection

The classical approaches to association testing are based on hypothesis testing, where the null hypothesis assumes no genetic effects and the alternative hypothesis assumes a genetic effect on the phenotype. In the context of BATI this can be specified as H_0: ω = 0 and δ = 0 (no genetic effect) versus H_1: ω ≠ 0 or δ ≠ 0. A classic Bayesian criterion for model goodness of fit is the Deviance Information Criterion (DIC) [24]. DIC is calculated as the expectation of the deviance over the posterior distribution plus the effective number of parameters. Thus, the difference in DIC between the H_0 and H_1 models, ΔDIC = DIC_H0 − DIC_H1, can be used as the model selection criterion. As a rule of thumb, values of ΔDIC > 10 are recommended to reject the null hypothesis. However, to evaluate the ability of ΔDIC to correctly choose between the null and alternative models, we suggest the use of simulations, as proposed by Holand et al. [25].
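To make the hierarchy of Eqs (2)-(4) and the ΔDIC criterion concrete, the following is a minimal R sketch using the R-INLA package on simulated toy data; the dimensions, variable names and priors are ours, not the authors' exact implementation, and the random part G δ is expressed through INLA's generic "z" model.

```r
library(INLA)  # R-INLA; see https://www.r-inla.org for installation
set.seed(42)

## Toy data: N individuals, p rare variants, q = 2 variant characteristics
N <- 500; p <- 30
G <- matrix(rbinom(N * p, 2, 0.01), N, p)      # genotypes coded 0/1/2
Z <- cbind(cadd = runif(p, 0, 40),             # numeric damage score
           lof  = rbinom(p, 1, 0.2))           # categorical annotation (0/1)
GZ  <- G %*% Z                                 # fixed part of Eq (4): G Z omega
dat <- data.frame(y = rbinom(N, 1, 0.5),       # case/control labels
                  age = rnorm(N), sex = rbinom(N, 1, 0.5),
                  gz_cadd = GZ[, 1], gz_lof = GZ[, 2], idx = 1:N)

## H1: full model; f(idx, model = "z", Z = G) contributes G delta, delta iid
h1 <- inla(y ~ age + sex + gz_cadd + gz_lof + f(idx, model = "z", Z = G),
           family = "binomial", data = dat,
           control.compute = list(dic = TRUE))
## H0: no genetic effect (omega = 0, delta = 0)
h0 <- inla(y ~ age + sex, family = "binomial", data = dat,
           control.compute = list(dic = TRUE))

h0$dic$dic - h1$dic$dic   # ΔDIC = DIC_H0 - DIC_H1; > 10 favours H1
```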
To find an estimate of the probability of type I error, i.e. of concluding that there are genetic effects when in truth there are none, we randomly assign individuals to either cases or controls. We then fit the models under the null and the alternative hypothesis for each gene or biological unit included in the genome-wide study, obtaining the empirical distribution of ΔDIC. Finally, we rank the genes by ΔDIC values in ascending order and select a ΔDIC threshold from the quantile corresponding to the desired significance level. For instance, if a significance level of 0.1% is desired, we pick the ΔDIC value of the gene ranked at the top 0.1% position as the significance threshold. For more robust threshold estimation, we propose to generate S datasets by randomly shuffling cases and controls, such that S ΔDIC thresholds are obtained and the median of the thresholds can be used. We used S = 10 for model selection in our benchmark study.

A comprehensive framework for rare variant association analysis (RVAS)

We developed the 'Rare Variant Genome Wide Association Study' (rvGWAS) framework (Fig 1A and S1 Fig), an all-in-one tool designed for RVAS tests on case-control cohorts; among other features, it integrates six conceptually different rare-variant association methods. It is implemented in a modular way and provides great flexibility, allowing for the analysis of a wide range of association study designs. BATI and five other RVAS methods are integrated in the rvGWAS framework. KBAC, SKAT-O and MiST were chosen for inclusion due to their superior performance compared with eight other RVAS methods in a benchmark study by Moutsianas et al. [14]. In addition, we included the classical Burden test, representing the simplest and most intuitive form of RVAS test. Finally, we incorporated HBMR, which is conceptually the most similar to BATI in terms of its estimation approach (while MiST is more similar in terms of model specification). The six supported RVAS tests represent a broad spectrum of approaches, including classic aggregation of variants as a burden variable, bidirectional variance-component tests, mixed effects models and Bayesian inference. rvGWAS is implemented as a pipeline of R scripts, and is available online at https://github.com/hanasusak/rvGWAS. Detailed descriptions of the tool, the included methods and their parameters are provided in S1 Text.

'Semi-synthetic' simulations of whole-exome sequencing based case-control studies

To allow for benchmarking using highly realistic disease cohorts, which correctly represent all expected sources of noise, we developed a new disease cohort simulator combining thousands of real WES datasets from various studies with known risk variants for a selected disease type. The simulator randomly assigns WES samples to the case or control group and introduces predisposition variants found in ClinVar or the Human Gene Mutation Database (HGMD) [26] for a disease of choice into the VCF files of cases. We used two large datasets as the basis for the simulation: (1) WES data of the 1000 Genomes Project (1000GP), and (2) an in-house dataset combining patients diagnosed with various conditions and healthy individuals subjected to WES between 2012 and 2017. VCF files from 1000GP (phase 3) [27,28] were used; for parent-child trios we included the parents (if not consanguineous), but not the child.
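Schematically, the core of the cohort simulator described above reduces to a random case-control split plus a spike-in step; this is a sketch only, and add_variant_to_vcf() is a hypothetical helper standing in for the actual VCF manipulation.

```r
## Sketch of the simulator core: random assignment, then zero or one
## ClinVar/HGMD risk variant spiked into each case's VCF.
## add_variant_to_vcf() is a hypothetical stand-in, not a real function.
simulate_cohort <- function(sample_ids, risk_pool, n_cases, p_carrier = 0.5) {
  shuffled <- sample(sample_ids)
  cases    <- shuffled[seq_len(n_cases)]
  controls <- shuffled[-seq_len(n_cases)]
  for (s in cases) {
    if (runif(1) < p_carrier) {                       # zero or one variant per case
      v <- risk_pool[sample(nrow(risk_pool), 1), ]
      add_variant_to_vcf(sample_id = s, variant = v)  # edits the case's VCF
    }
  }
  list(cases = cases, controls = controls)
}
```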
To minimize issues with population stratification due to highly diverse populations, we only included individuals not belonging to African ancestry populations, as Africans had on average 25% more variants than individuals from other ancestry groups. Nonetheless, the remaining cohort still represents a mixed population, allowing us to benchmark the RVAS tests' performance on genetically diverse populations. The in-house 'Iberian' WES cohort includes 1189 individuals of Spanish ancestry and is highly homogeneous, as demonstrated by PCA (S2F Fig). WES libraries were prepared using three different oligo enrichment kits: (1) Agilent SureSelect 50, (2) Agilent SureSelect 71, and (3) Nimblegen SeqEz V3. Computational analysis and variant calling were performed according to GATK best-practice guidelines (https://software.broadinstitute.org/gatk/best-practices/). For simulation purposes we only considered genomic loci that were targeted and covered with at least 10 sequence reads by all oligo enrichment kits, and variants with a call rate higher than 85%. Samples identified as outliers based on the number of called variants, the transition-to-transversion (Ti/Tv) ratio, or their projection on the first two principal components from principal component analysis were removed from further analysis. The filtered datasets, named the 1000GP and Iberian cohorts, consisted of 1,810 and 1,167 samples harboring 493,314 and 285,658 unique loci with alternative alleles, respectively. From 1000GP we randomly selected half of the samples as cases and the other half as controls. An important technique for adding power to case-control studies, especially when the number of cases is limited, is to enroll more than one control per case, although it has been shown that little is gained by including more than two controls per case [29]. Therefore, we chose to use one case per two controls in the relatively small Iberian cohort.

Simulating a breast cancer risk cohort

To introduce realistic disease variants into a 'semi-synthetic' breast cancer predisposition cohort, we queried the ClinVar and HGMD databases for breast cancer risk variants annotated as exonic or splicing. We removed variants that had a MAF higher than 0.01 in any ancestry population in any of three commonly used exome databases: EVS, 1000GP or ExAC. Six genes (BRCA1, BRCA2, PALB2, BRIP1, CHEK2 and BARD1) had more than five annotated disease risk variants in ClinVar and HGMD (S1 Table), and these were used as a pool of variants to simulate risk patients by adding variants to the VCF files (zero or one variant per case). As expected, all six genes already had rare, likely benign variants in the unmodified cohorts (S2 and S3 Tables). This type of noise is expected in any case-control study using WES data, and hence makes the simulation more realistic. Using the 1000GP cohort we simulated a rare-variant etiology-by-ethnicity scenario. Samples assigned as cases and controls were separated into populations according to the 1000GP annotation, and each variant from the pool was assigned to one of the super-populations: EUR+AMR, SAS or EAS. Variants annotated in ExAC were assigned to the population with the largest observed allele frequency, and variants not observed in ExAC were assigned to one of the three populations randomly. In the case of the Iberian cohort, as we have a homogeneous population, variants were assigned without any stratification. We generated three genetic architectures per gene, with ~2% (architecture 1), ~1% (2) or ~0.5% (3) of phenotypic variance explained (VE) by the introduced ClinVar and HGMD risk variants.
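The architectures were built by an accumulation loop of the kind sketched below; calc_ve() is a hypothetical stand-in for the So et al. [30] variance-explained computation described in the next paragraph.

```r
## Add risk variants one at a time until the target variance explained (VE)
## is reached; calc_ve() is a hypothetical stand-in for the So et al. [30]
## computation from prevalence, risk-allele frequency and relative risk.
build_architecture <- function(risk_pool, target_ve, prevalence) {
  chosen <- risk_pool[0, ]                   # empty frame with the same columns
  ve <- 0
  while (ve < target_ve && nrow(risk_pool) > 0) {
    i         <- sample(nrow(risk_pool), 1)  # draw a variant from the pool
    v         <- risk_pool[i, ]
    risk_pool <- risk_pool[-i, ]
    chosen    <- rbind(chosen, v)
    ve        <- ve + calc_ve(prevalence, v$maf, v$rr)  # cumulative VE
  }
  chosen                                     # the variants defining one architecture
}
```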
To generate these architectures, we used the method of So et al. [30] to calculate the cumulative VE each time a variant was added to a gene, until the targeted VE was reached. The calculation of VE requires three parameters per variant: the prevalence of the trait, the population frequency of the risk allele, and the genotype relative risk (RR). In practice, only odds ratios (OR) are available in many case-control studies; however, the OR approximates the RR when the disease prevalence in a population is low [30]. We used an estimate of the 5-year prevalence of breast cancer among adult Spanish women (0.66%), defined as the percentage of current cases (new and pre-existing) over the specified period of time; this estimate was obtained from the Global Cancer Observatory website, https://gco.iarc.fr/today/online-analysis-table [31]. In order to generate realistic RR values, we sampled from a distribution (S3 Fig) constructed under the assumption that the likelihood of a high RR is negatively correlated with MAF [14]. For BRCA1 and BRCA2 we simulated two different types of genetic architectures, by introducing in one architecture only missense variants and in the other only loss-of-function (LoF) SNVs (i.e. stop-gain, stop-loss or splicing). This allowed us to test whether MiST and BATI benefit from features that capture the biological function and context of variants. For the four remaining genes, the variants were simulated regardless of their functionality. The simulation procedure was repeated 100 times for each of the 8 architectures in order to generate 100 datasets for the evaluation of statistical power and type I error rates (TIER).

Quality control and filtering of benchmark WES cohorts

The cohorts used for benchmarking the test methods consisted of 1,810 individuals in the 1000GP cohort and 1,167 individuals in the Iberian cohort, harboring 493,314 and 285,658 unique loci, respectively, deviating from the GRCh37 (hg19) reference genome, including only SNVs and short InDels. Both datasets were analyzed and filtered using the rvGWAS quality control modules (see Material and Methods and S1 Text). For benchmarking purposes, we only considered variants in regions targeted by all of the oligo enrichment kits used. However, in the case of the Iberian cohort we observed that a small subset of supposedly targeted regions consistently showed low coverage in a kit-specific manner, leading to strong biases identified by the data stratification module of rvGWAS. The bias disappeared when excluding regions with less than 10x average coverage in at least one kit.

Benchmarking RVAS tests using semi-synthetic breast cancer risk cohorts

We used the rvGWAS framework to benchmark the six RVAS tests (Burden, SKAT-O, KBAC, MiST, HBMR and BATI) on the 1000GP and Iberian cohorts with simulated breast cancer risk variants. In order to simulate a realistic breast cancer predisposition case-control study, we randomly split each of the original cohorts into a case group (1000GP: 905, Iberian: 389 samples) and a control group (1000GP: 905, Iberian: 778 samples) and, in the case group samples, added ClinVar and HGMD risk variants to the genes BRCA2, BRCA1, PALB2, BRIP1, CHEK2 and BARD1 using realistic variance explained (VE) rates (see Material and Methods). Before performing the RVAS we filtered out common variants (AF > 0.01 in public databases or in the randomized control group) as well as variants that were annotated as synonymous or had a CADD score below 10 (likely benign; see https://cadd.gs.washington.edu/info).
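The pre-test filter just described amounts to a few vectorised conditions; a sketch on a hypothetical per-variant annotation table (the column names are illustrative, not rvGWAS's actual schema):

```r
## Hypothetical per-variant annotation table with toy values.
variants <- data.frame(
  af_public   = c(0.20, 0.004, 0.0005, 0.008),   # max AF in EVS/1000GP/ExAC
  af_controls = c(0.15, 0.002, 0.0000, 0.009),   # AF in randomised controls
  func        = c("missense", "synonymous", "stopgain", "missense"),
  cadd        = c(25, 12, 38, 7)
)
keep <- with(variants,
             pmax(af_public, af_controls) <= 0.01 &  # rare
             func != "synonymous" &                  # potentially functional
             cadd >= 10)                             # not likely benign
variants[keep, ]   # only the third variant survives this toy filter
```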
For BATI and MiST we used prior information on variant characteristics as covariates: CADD scores as a quantitative variable and exonic function (missense, loss-of-function, InDels) as a categorical variable. We repeated the simulation and benchmarking process 10 times, including the randomized case-control assignment, in order to randomize the background noise in each benchmark cycle.

Type I error rate estimates

The six benchmarked RVAS tests use diverse criteria for statistical significance (p-value, Bayes factor or ΔDIC). To generate comparable significance thresholds, we performed RVAS tests on randomly split cohorts, but without introduced ClinVar and HGMD risk variants. Hence, significant associations should only be found by random chance and constitute false positives. This procedure allowed us to obtain comparable thresholds for the desired type I error rates for all methods. For each of the 10 random cohort splits we obtained p-value significance thresholds for Burden, KBAC, SKAT-O and MiST that translate to 5%, 0.1% and 0.01% TIER. Similarly, for HBMR and BATI we calculated the Bayes factor and ΔDIC thresholds resulting in the same TIER levels. The estimated thresholds are highly similar across all 10 randomized case-control splits (S4 Fig). At 0.01% TIER only 2 genes (out of ~20,000) are expected to be significant by chance, so the observed small fluctuation of the estimated significance thresholds is not surprising. We finally used the test-specific median over the 10 random splits as the threshold to label a gene as significant in the subsequent power analyses (S4 Fig and Tables 1 and S4). We also noticed that MiST produces an excess of zero p-values, an inflation issue we return to below.

Table 1. P-value, Bayes factor (HBMR) and ΔDIC (BATI) thresholds for type I error rates (TIER) of 0.05, 0.001 and 1e-04 estimated on 1000GP. We randomly permuted case and control labels 10 times and estimated empirical thresholds for each RVAS test; the median thresholds from the 10 random permutations are used for the benchmark comparison.

Power analysis for six RVAS test methods

We next determined the power of the competing RVAS tests to identify the 8 breast cancer risk genes (BRCA1-Missense, BRCA1-LoF, BRCA2-Missense, BRCA2-LoF, PALB2, BRIP1, CHEK2 and BARD1) at the three TIER levels of 5%, 0.1% and 0.01% and at three VE levels of 2%, 1% and 0.5% (1000GP: Fig 2, Iberian: S6 Fig). For the 1000GP cohort we found that all methods showed a power close to 100% at a TIER of 5% across all tested VE levels, except for Burden and SKAT-O, which showed decreased performance for VE = 0.5% (Fig 2A-2C left). Testing 20,000 genes (the whole exome) at a TIER of 5%, we expect around 1,000 false positive genes, which is a poor choice for most studies. Using a TIER of 0.1% (~20 false positive genes expected), differences between the tests become more pronounced, with Burden, KBAC, SKAT-O and MiST showing decreased power already for 1% VE and all methods showing decreased power at 0.5% VE (Fig 2A-2C middle); however, BATI maintains the highest median power of 70%. Using a strict TIER of 0.01% (2 false positives expected for the whole exome), all tools except Burden and MiST are able to identify risk genes at 2% VE with almost 100% power. However, the performance of all methods except BATI drops substantially for 1% VE. At 0.5% VE most methods miss the majority of risk genes in the majority of simulations (median power close to zero), except for BATI, which maintains a power of 15% (Fig 2A-2C right).
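Stepping back to the permutation-based threshold estimation in the 'Type I error rate estimates' subsection above: it reduces to tail quantiles of the per-gene statistics from label-shuffled cohorts, followed by the median over splits. A self-contained sketch with simulated statistics:

```r
## One vector of per-gene statistics per label-shuffled split (here simulated
## ΔDIC-like values); use upper-tail quantiles for ΔDIC or Bayes factors and
## lower-tail quantiles for p-values.
set.seed(1)
S <- 10; n_genes <- 20000
perm_stats <- replicate(S, rnorm(n_genes, 0, 3), simplify = FALSE)
tier <- c(0.05, 0.001, 1e-4)
thr_per_split <- sapply(perm_stats, function(g) quantile(g, probs = 1 - tier))
apply(thr_per_split, 1, median)   # one median threshold per TIER level
```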
Note that MiST performed very poorly at the strict TIER thresholds of 0.1% and 0.01%, likely due to the aforementioned zero-p-value inflation issue, which results in a large number of false positives. Results are mostly similar in the benchmark using the Iberian cohort (S6 Fig).

Risk gene-wise power analysis

Each gene has a different architecture, i.e. rate of (likely benign) rare variants in the original cohorts, functional impact estimates for the known risk variants, fraction of stop-gain or splicing variants, etc. We therefore benchmarked the performance of all RVAS tests across 100 simulations of risk variants for each gene separately (1000GP cohort: Fig 3 and Table 2; Iberian cohort: S7 Fig). In the gene-wise power plots we indicate the three TIER thresholds using red (5%), green (0.1%) and blue (0.01%) lines; note that due to different y-axis scaling these lines are not at the same height for different tests. All methods except Burden and MiST identify all risk genes at 0.01% TIER in the 2% VE setting. However, substantial differences in the power of the tests appear when VE is only 1% or 0.5%. While BATI calls most genes at 0.01% TIER with >88% power, BARD1 being the exception (Table 2), Burden, KBAC and SKAT-O recurrently fail to call BRCA2 (both missense and LoF versions), BARD1 and CHEK2 (Table 2). The performance of Burden, KBAC and SKAT-O varies considerably between genes, while MiST, HBMR and BATI show relatively small differences. Interestingly, the power plots at 0.5% VE look very similar for Burden, KBAC and SKAT-O, indicating that these methods share the same strengths and weaknesses. Only MiST and BATI are able to leverage categorical variant characteristics, here represented as functional annotations such as 'missense', 'LoF' and 'indel'. As background LoF variants are rare, we expected both methods to excel at predicting BRCA1 and BRCA2 under the LoF-architecture simulation. Indeed, for both methods we see a better performance for BRCA1-LoF and BRCA2-LoF compared with BRCA1-Missense and BRCA2-Missense, respectively. For BATI, this difference is significant for BRCA1 (p = 4.0e-13), and a tendency is found for BRCA2 (p = 0.0025), for VE = 0.1 using the Wilcoxon rank test. As a result, BATI predicts BRCA2-LoF at the highest significance level (0.01% TIER), while all other methods perform poorly. BRCA1-LoF shows the highest ΔDIC value of all 8 risk genes, demonstrating that the BATI method strongly benefits from categorical functional annotations.

Measuring the impact of categorical and numerical variant characteristics

BATI can account for individual variant characteristics under the assumption that variants with similar characteristics have a similar effect on the function of the protein, and hence on the phenotype. This is enabled by INLA, which provides Laplace approximations of the marginal posterior distributions for each element of the parameter vector in Eq (4). Hence, we can obtain estimates of the parameters and their corresponding credible intervals in order to test whether the disease risk is driven by a particular category of variants (e.g. LoF), or whether one damage score discriminates pathogenic variants. Table 3 shows an example based on the analysis of one of our target gene architectures, BRCA1-LoF, where only LoF risk variants were added to a background of many non-pathogenic missense variants in the original samples.
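Continuing the toy R-INLA sketch from the model-specification section, the kind of estimates shown in Table 3 can be read directly off the fitted object:

```r
## Posterior summaries of the fixed effects from the earlier toy fit `h1`.
fx <- h1$summary.fixed            # mean, sd and quantiles per coefficient
fx
## Coefficients whose 95% credible interval excludes zero, e.g. a LoF
## category with a credibly non-zero mean effect on disease risk:
fx[fx[, "0.025quant"] > 0 | fx[, "0.975quant"] < 0, ]
```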
Table 3. BATI output of the genetic model estimates for the BRCA1-LoF gene architecture, derived from one of the simulations. Parameter estimates: the mean effect (the mean increase of the variant effect on phenotype depending on the variant type; for the numerical CADD score, the increase of the variant effect per unit increase of the score), the standard deviation of the mean (sd), the 95% credible interval of the mean (CI), and the ΔDIC of the genetic model considered in the BATI-based RVAS test.

In this scenario we would expect LoF variants to have a significantly higher mean effect than the other variant categories, which is exactly what we observe (Table 3). For LoF SNVs, the mean effect is significantly higher than 0, as shown by the 95% credible interval (non-overlapping the 0 value), meaning that LoF SNVs show the strongest effect on disease predisposition for this gene. The impact of CADD in this scenario is weak, as all LoF SNVs have similarly high CADD scores.

RVAS of chronic lymphocytic leukemia identifies candidate risk genes

Chronic lymphocytic leukemia (CLL) is a cancer of B-lymphocytes, which expands in the bone marrow, lymph nodes, spleen and blood. With the aim of identifying the landscape of germline risk genes that can predispose an individual to CLL, we applied BATI and the five other competing RVAS methods integrated in rvGWAS. The CLL cohort of 436 cases was collected and sequenced following the guidelines of the International Cancer Genome Consortium (ICGC) [32] within the framework of the Spanish ICGC-CLL consortium [33]. In addition, 725 individuals from our Iberian cohort were used as controls. For the gene-wise RVAS test we preselected rare (MAF ≤ 0.01 in our control cohort, ExAC and 1000GP) and potentially damaging variants (CADD score > 10). All RVAS methods were adjusted for the first 10 principal components to account for population stratification and technical biases. For BATI and MiST we additionally added the exonic function of the variants (i.e. LoF, missense, indel) and the CADD damage score as covariates. We tested all genes with a variant call rate of at least 95% and removed genes flagged by Allele Balance Bias (ABB) [34] as enriched with false positive variant calls (see S5 and S6 Tables, S8 Fig and S1 Text for details). BATI identified 12 candidates that passed the significance threshold of 1e-04 (S7 Table). Among those, EHMT2 and COPS7A are promising CLL risk gene candidates. The heterodimeric methyltransferases EHMT1 and EHMT2 have recently been implicated in the prognosis of CLL and in CLL cell viability [35]. COPS7A (previous name COP9) is involved in the transcription-coupled nucleotide excision repair (TC-NER) pathway, and the COP9 signalosome complex (CSN) is involved in the phosphorylation of p53/TP53, JUN, I-kappa-B-alpha/NFKBIA, ITPK1 and IRF8/ICSBP. However, replication of these results in independent cohorts is required to evaluate these findings.

Discussion

Here we presented a comprehensive framework, rvGWAS, to facilitate user-friendly and intuitive analysis of RVAS in case-control studies using whole genome or custom-captured next generation sequencing data. rvGWAS integrates data quality control and filtering, several existing rare variant association tests, and the newly developed BATI test. We showed how BATI leverages both categorical and numerical variant characteristics and strongly benefits from their inclusion as covariates. We demonstrated BATI's significant gain in power when risk genes contain mostly LoF variants, while it still performs at least as well as other methods when testing genes containing mostly missense variants. Here we used CADD as a numerical functional impact score (representing deleteriousness); other meta-predictors, such as the more recently developed FatHMM [36], M-CAP [37] or REVEL [38], might improve results compared with CADD scores. With BATI, users can readily reanalyze cohorts with any functional impact or evolutionary conservation score of choice (or with multiple scores).
The optimal selection of functional impact scores likely depends on the incidence of the disease or on the targeted genomic regions. For instance, the analysis of non-coding regions might benefit from specialized impact scores (e.g. FunSeq2), while the impact of very rare variants is better estimated by REVEL and M-CAP. Model estimation when using complex data structures, including exome-wide genetic variants, numerical damage estimates and functional annotations, becomes computationally heavy. Therefore, existing tests resort to heuristics affecting accuracy (MiST) or are highly computationally intensive (HBMR). BATI addresses this issue by estimating the full model using Integrated Nested Laplace Approximation, which requires reasonable computational resources even when using complex data structures. INLA provides approximations to the posterior marginals of the latent variables, which are accurate and extremely fast to compute [18]. INLA was originally developed as a computationally efficient alternative to MCMC and presents two major advantages. On the one hand, INLA's speed allows it to work on models with high-dimensional latent fields and a large number of covariates at different hierarchical levels (for example, in the case of RVAS, at the patient level and at the variant level). On the other hand, INLA treats latent Gaussian models in a unified way, thus allowing for greater automation of the inference process. Thanks to these characteristics, INLA has already been used in a great variety of applications [39][40][41][42][43][44]. While MiST only constructs a score test under the null hypothesis, BATI, leveraging the efficiency of INLA, can make inference based on full model estimation and provides comprehensive information on the estimates of the model parameters. This leads to higher accuracy in terms of statistical inference and therefore higher power. Furthermore, BATI allows for the inclusion of many numerical or categorical features as covariates. Which other features, in addition to the functional impact and functional annotation of variants, could be beneficial for association testing remains to be determined; promising categories include variant call quality, tissue-specific gene expression measures, biological pathways and copy number variants. Previous benchmark studies of RVAS tests typically relied on pure simulations of variants, for instance based on HapMap statistics, resulting in completely artificial cohorts [14]. Furthermore, simulations were often restricted to small regions of the genome, limiting their power for benchmarking exome-wide association tests. Simulated variant data are well known to lack the complexity and noise level of real data, resulting in overly optimistic benchmark performances and unrealistic expectations among clinical researchers. Moreover, the use of random 'causal' variants hampers the benchmarking of methods that leverage characteristics of causal disease variants, which are enriched in high damage scores and high-impact changes such as LoF variants.
Here we combined real WES cohorts, representing realistic background noise, with real disease variants, featuring realistic functional impact profiles and variant distributions, to form semi-synthetic benchmark cohorts. We developed sampling methods allowing us to test different disease architectures featuring various levels of variance explained in multiple risk genes. Furthermore, since the RVAS tests evaluated here use diverse criteria for statistical significance, e.g. p-value, Bayes factor or difference in deviance information criterion (ΔDIC), we generated groups of cases and controls randomly from the 1000GP and Iberian cohorts without adding risk variants for the disease and benchmarked the six RVAS tests. As significant results from such tests can be considered false positives, we could establish comparable significance thresholds across the different statistical criteria, resulting in similar false positive rates. Hence, we empirically estimated the type I error rate for each RVAS test at different significance levels. In the case of ΔDIC, a rule-of-thumb value of ΔDIC > 10 is usually recommended to distinguish between the models under consideration, here the genetic model against a null model without genetic effects. This is in line with the threshold obtained from our type I error study in the 1000GP cohort at a significance level of 0.1%. In the Iberian cohort, where the sample size is smaller, the corresponding threshold was found to be slightly stricter (>12). From our simulations, we show that the methods vary substantially in power, especially for risk genes explaining a small fraction of the variance in a cohort. We found that the differences between methods when VE is low (1% and 0.5%) are substantially more pronounced than previously appreciated, with some methods showing strongly fluctuating success rates for different genes and close to zero power at a VE of 0.5%. For example, MiST showed favorable results on purely artificial benchmark sets [14], but performed poorly on our realistic WES cohorts, likely due to an issue with zero-inflated p-values caused by inappropriate handling of low variant counts. Specifically, MiST failed to identify any risk gene at low VE or low TIER thresholds. We further found that the performance patterns of Burden, KBAC and SKAT-O across the 8 risk gene architectures are highly similar to each other when compared with those of MiST, HBMR and BATI. Burden, KBAC and SKAT-O fail to predict the same genes at 0.5% VE, namely BRCA2, BARD1 and CHEK2, which are characterized by high numbers of benign background variants. In those situations, limited sample sizes are a problem, and it is therefore likely beneficial to combine Burden- and SKAT-type methods with conceptually different approaches to compensate for their specific weaknesses. The strong performance of BATI in terms of precision and recall comes at the price of a longer run time (S8 Table). Inference based on full model estimation leads to higher computational complexity, and hence a higher run time of BATI compared with all competing methods. The computational time and complexity of RVAS test methods is a concern, as exome and genome sequencing datasets have recently been increasing dramatically in sample size. However, the INLA implementation used by BATI (the R-INLA project) facilitates the use of multiple cores and scales close to linearly with the number of cores used, allowing for the analysis of large cohorts on modern servers. Moreover, lowering the allele frequency threshold of included rare variants (e.g.
from AF ≤ 1% to AF ≤ 0.1%) for very large cohorts can dramatically reduce computation times. However, in order to facilitate BATI-based RVAS tests in a 'mega biobank scenario' with sample sizes larger than 100K, we recommend performing a fast RVAS test, such as SKAT-O, to select the top 500 gene candidates, and re-analyzing these with BATI to potentially achieve higher power. In summary, leveraging variant characteristics and using the fast and accurate INLA model estimation, BATI outperforms existing RVAS test methods on realistic WES cohorts using real disease variants in 8 breast cancer risk genes, across hundreds of permutations. By facilitating the integration of large numbers of covariates, BATI represents a flexible testing approach that can be further extended and enhanced in the future.

S8 Fig. The large number of variants found in FTCD in cases or controls that show a deviation from the 50:50 allele ratio expected for heterozygous SNVs, and the different distributions in cases and controls, indicate a large number of false positive calls, leading to false gene-phenotype associations. This phenomenon is often caused by un-annotated segmental or tandem duplications in the reference genome, simple sequence repeats, or copy gains in the samples. Using the ABB method we identified and excluded these genes from the RVAS test with the ICGC-CLL cohort.

S1 Table. BRCA risk variants in ClinVar used for simulation as introduced causal variants.

S2 Table. Number of variants in the six BRCA risk genes in the 1000GP cohort before the introduction of risk variants from ClinVar and HGMD (counting only rare coding or splicing variants with CADD > 10).

S3 Table. Number of variants in the six BRCA risk genes in the Iberian cohort before the introduction of risk variants from ClinVar (counting only rare coding or splicing variants with CADD > 10). Due to less than 100% variant call rates at some positions, the number of possible cases and/or controls can be lower than the total number of cases (389) and controls (778) used in the Iberian cohort.

S4 Table. P-value, Bayes factor (HBMR) and ΔDIC (BATI) thresholds for type I error rates (TIER) of 0.05, 0.001 and 1e-04 estimated on the Iberian cohort. We randomly permuted case and control labels 10 times and estimated empirical thresholds for each RVAS test; the median thresholds from the 10 random permutations are used for the benchmark comparison.
Cloning and Functional Characterisation of the Duplicated RDL Subunits from the Pea Aphid, Acyrthosiphon pisum

The insect GABA receptor, RDL (resistance to dieldrin), is a cys-loop ligand-gated ion channel (cysLGIC) that plays a central role in neuronal signaling, and is the target of several classes of insecticides. Many insects studied to date possess one Rdl gene; however, there is evidence of two Rdls in aphids. To characterise further this insecticide target from pests that cause millions of dollars' worth of crop damage each year, we identified the complete cysLGIC gene superfamily of the pea aphid, Acyrthosiphon pisum, using BLAST analysis. This confirmed the presence of two Rdl-like genes (RDL1 and RDL2) that likely arose from a recent gene duplication. When expressed individually in Xenopus laevis oocytes, both subunits formed functional ion channels gated by GABA. Alternative splicing of RDL1 influenced the potency of GABA, and the potency of fipronil differed between the RDL1bd splice variant and RDL2. Imidacloprid and clothianidin showed no antagonistic activity on RDL1, whilst 100 μM thiacloprid reduced the GABA responses of RDL1 and RDL2 to 55% and 62%, respectively. It was concluded that gene duplication of Rdl may have conferred increased tolerance to natural insecticides, and played a role in the evolution of insect cysLGICs.

Introduction

The insect γ-aminobutyric acid (GABA) receptor, known as RDL (resistant to dieldrin), plays a central role in neuronal signaling, and is involved in various processes, including the regulation of sleep [1], aggression [2], and olfactory or visual learning [3,4]. The GABA receptor is a member of the cys-loop ligand-gated ion channel (cysLGIC) superfamily, which, in insects, also includes nicotinic acetylcholine receptors (nAChRs), histamine-gated chloride channels (HisCls), and glutamate-gated chloride channels (GluCls) [5]. CysLGICs consist of five subunits arranged around a central ion channel. Each subunit contains an N-terminal extracellular domain where neurotransmitter binding occurs (binding of GABA in the case of RDL), and four transmembrane (TM) domains, the second of which lines the ion channel [6]. RDL is also of interest as it is the target of several classes of highly effective insecticides, such as the cyclodienes (e.g. dieldrin), phenylpyrazoles (e.g. fipronil) and isoxazolines (e.g. fluralaner) [7]. In the genomic DNA of the model organism Drosophila melanogaster, a mutation resulting in an alanine-to-serine substitution located in TM2 of Rdl was identified, which underlies resistance to several insecticides, including dieldrin, picrotoxin and fipronil [8,9]. This alanine-to-serine mutation, also found as alanine-to-glycine or alanine-to-asparagine [10], has since been associated with insecticide resistance in various species, ranging from disease vectors (the malaria mosquito Anopheles gambiae [11,12]) to pests afflicting livestock (the horn fly Haematobia irritans [13]) or domesticated animals (the cat flea Ctenocephalides felis [14]), and crop pests (e.g. the planthopper Laodelphax striatellus [15]). Despite the emergence of insecticide resistance, RDL is still a potential target for insect control, since novel compounds have been developed that are unaffected by the TM2 resistance mutation [16]. Analyses of genome sequences have shown that insects of diverse species, such as D. melanogaster, Musca domestica, Apis mellifera, Nasonia vitripennis and Tribolium castaneum, possess a single Rdl gene [5,[17][18][19].
However, other insects, notably of the order Lepidoptera, possess more Rdl subunits. For example, Chilo suppressalis and Plutella xylostella have two Rdl-encoding genes [20,21], whilst Bombyx mori has three [22]. There is evidence that insects in other orders can also possess multiple Rdl genes. For instance, Southern blot analysis demonstrated the presence of two independent Rdl loci in the aphid Myzus persicae [23]. In accordance with this, two candidate Rdl genes were observed in the genome of the pea aphid, Acyrthosiphon pisum [24]. Since many aphid species, such as A. pisum, are important crop pests which cause hundreds of millions of dollars' worth of damage each year [24,25], it is prudent to study insecticide targets from these species in order to further understand mechanisms of resistance, as well as to facilitate the identification and development of improved insecticides that show specificity towards aphids whilst sparing non-target organisms. We report here that the two Rdl genes in A. pisum encode GABA-gated ion channels, upon which the insecticides fipronil and thiacloprid act as antagonists. We also show that A. pisum possesses an unusual cysLGIC gene superfamily, in that it lacks clear orthologues of LCCH3, GRD and CG8916; these subunits have been found in all other insect species analysed so far where their complete cysLGIC superfamilies have been identified [5,19]. It was concluded that the duplicated Rdl in A. pisum may represent diversification leading to the evolution of novel cysLGIC subunits in higher insects.

The A. pisum cysLGIC Superfamily Possesses Two Rdl Genes

Using tBLASTn, 22 candidate cysLGIC subunits were identified in the A. pisum genome. Eleven of these subunits are candidate nAChRs which have been previously described [24]; thus, in this report we focus on the remainder of the aphid cysLGIC superfamily. Alignment of their protein sequences (Figure 1) shows that the A. pisum subunits possess features common to members of the cysLGIC superfamily. These include: an extracellular N-terminal region containing distinct regions (loops A-F) [26] that form the ligand-binding site; the dicysteine loop (cys-loop), which consists of two disulphide-bond-forming cysteines separated by 13 amino acid residues; four transmembrane regions (TM1-4); and a highly variable intracellular loop between TM3 and TM4. As with other cys-loop LGIC subunits, the aphid sequences also possess potential N-glycosylation sites within the extracellular N-terminal domain, and phosphorylation sites within the TM3-TM4 intracellular loop. A comparison of sequence identities between A. pisum and T. castaneum cysLGIC subunits (Table 1), as well as a phylogenetic tree of A. pisum, T. castaneum and A. mellifera cysLGICs (Figure 2), indicates orthologous relationships between the aphid, beetle and honeybee subunits. To facilitate comparisons between species, Acyrthosiphon subunits were named after their Tribolium counterparts. For example, the aphid orthologs of Tribolium HisCl1 and Tcas 12344 were designated Apisum HisCl1 and Apisum 12344, respectively. A. pisum possesses two putative subunits belonging to Insect Group I (Figure 2), which consists of Drosophila CG7589, CG6927 and CG11340 [18]. These two subunits were denoted Apisum CLGC1 and Apisum CLGC2, similar to the equivalent subunits in T. castaneum [18], since the orthologous relationships of both aphid subunits are uncertain. Two putative Rdl subunit genes were identified in the A.
pisum genome, encoding protein products denoted Apisum RDL1 and Apisum RDL2 (Figures 1 and 2). Apisum RDL1 and Apisum RDL2 share notably high sequence identity with Tcas RDL, at 70% and 69%, respectively. However, Apisum RDL1 is considered the true ortholog of RDL in many other species, including D. melanogaster, T. castaneum and A. mellifera, since it possesses alternative splicing at exons 3 (variants a or b) and 6 (variants c or d) [27], whereas Apisum RDL2 has alternative splicing only at exon 3 (Figure 3). Also, the NATPARVA peptide sequence preceding TM2 in RDL of many species is conserved in Apisum RDL1, whilst in Apisum RDL2 it is CATPARVS (Figure 1). The A. pisum genome also contains two subunits showing highest identity to the pH-sensitive chloride channel subunit [28], and these have been denoted Apisum pHCl1 and pHCl2 (Table 1). The identity of another subunit in the A. pisum genome was more difficult to assign, as it showed similar identity, of 29%, to Tcas GluCl and Tcas HisCl2. This subunit was tentatively denoted Apisum GluCl2, based on its slightly higher identity to Apisum GluCl1 as opposed to Apisum HisCl2 (Table 1), and on the fact that, when considering the extracellular N-terminal region only, Apisum GluCl2 showed 33% identity to Tcas GluCl, as opposed to 30% identity to Tcas HisCl2. Interestingly, whilst A. pisum appears to have three duplicated subunits (Apisum RDL2, Apisum pHCl2 and Apisum GluCl2), the aphid lacks clear orthologs of LCCH3, GRD and CG8916, which have been found in the genomes of other insects analysed to date (Figure 2) [5,[17][18][19].

Figure 3. Acyrthosiphon residues that differ from those of the orthologous Drosophila exon are highlighted in bold. N-glycosylation sites are boxed, and loops C and F, which contribute to ligand binding, are indicated.

A phylogenetic tree was constructed using RDL peptide sequences from various insects (Figure 4). As previously observed [20], the RDLs segregated according to insect order, including the multiple RDL subunits found in Lepidoptera. When considering the RDL sequences of many species, both Apisum RDL subunits clustered close together. In line with this, the two aphid Rdl genes are arranged close together in the A. pisum genome, within 207 kb, indicating a recent duplication event.
Figure 4. Tree showing relationships of RDL protein sequences from insects of various species. ELIC, which is an ancestral cysLGIC from the bacterium Erwinia chrysanthemi [29], was used as an outgroup. Numbers at each node signify bootstrap values with 1000 replicates, and the scale bar represents substitutions per site. A. pisum RDLs are shown in boldface type.

Cloning and Functional Expression of Apisum Rdl1 and Apisum Rdl2

The full coding regions of Apisum Rdl1 and Apisum Rdl2 were amplified by reverse-transcriptase PCR, and cloned into the pCI plasmid. Ten clones for each subunit were sequenced. For Apisum Rdl1, one clone lacked exon 3, whilst for Apisum Rdl2, one clone lacking exon 3 and another missing both exons 2 and 3 were observed. Rdl variants lacking exon 3 were also observed in other insects such as B. mori [22]. Excision of the exons leads to a frame shift and the introduction of a premature stop codon, generating shortened open reading frames of 339 bp, 228 bp and 228 bp for Apisum Rdl1∆exon3, Apisum Rdl2∆exon3 and Apisum Rdl2∆exon2+3, respectively. The remaining nine clones of Apisum Rdl1 were full-length open reading frames consisting of 1704 bp encoding 567 amino acid residues.
One of these clones encoded the Apisum RDL1ad splice variant, whilst the remaining eight were Apisum RDL1bd, consistent with previous findings that bd is the predominant splice variant [30]. All eight full-length clones for Apisum Rdl2 encoded the exon 3b variant; however, four of these clones had open reading frames of 1674 bp, whilst the other four had 1677 bp, encoding 557 and 558 amino acids, respectively. The difference in open reading frame lengths is due to the presence of either a TVR or a TEVR peptide motif in the TM3-TM4 intracellular domain; these motifs were previously found in A. mellifera RDL, and were denoted variants 1 and 2, respectively [31]. No potential A-to-I RNA editing was observed in the twenty clones analysed. Xenopus laevis oocytes were injected with plasmids encoding Apisum Rdl1ad, Apisum Rdl1bd and Apisum Rdl2b variant 1. Two-electrode voltage-clamp electrophysiology showed that oocytes injected with each of the Rdl constructs responded to GABA in a concentration-dependent manner (Figure 5a). GABA concentration-response curves were generated (Figure 5b) for each of the Apisum RDL constructs. The EC50 values of Apisum RDL1ad and Apisum RDL1bd were significantly different from each other (Table 2) and, as is the case for Drosophila RDL [30], the bd splice variant of Apisum RDL1 has the highest EC50.

Table 2. Effects of GABA on membrane currents from X. laevis oocytes expressing A. pisum RDL, with maximum amplitude (Imax), EC50 and Hill coefficient (nH) displayed. The Imax was obtained from the initial 250 µM GABA response measured from oocytes clamped at −60 mV. Also shown are the effects of fipronil and the neonicotinoids imidacloprid (IMI), clothianidin (CLO) and thiacloprid (THI) on membrane currents induced by GABA at EC50. IC50 values are shown for fipronil, as well as the fraction of the response to GABA at EC50 after exposure to 100 µM neonicotinoid. [-] indicates that this value was not measured. All data are the mean ± SEM of 4-5 oocytes from ≥3 different frogs.
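The paper does not state which software was used for curve fitting, but EC50 and Hill coefficient (nH) values of the kind summarised in Table 2 are conventionally obtained by fitting the Hill equation to normalised concentration-response data; a sketch in R with nls(), using made-up data points:

```r
## Hill-equation fit: response = 1 / (1 + (EC50/[GABA])^nH); toy data in µM.
conc <- c(1, 3, 10, 30, 100, 300, 1000)
resp <- c(0.02, 0.08, 0.25, 0.55, 0.80, 0.95, 1.00)   # fraction of max current
fit  <- nls(resp ~ 1 / (1 + (ec50 / conc)^nh),
            start = list(ec50 = 30, nh = 1.5))
coef(fit)   # estimated EC50 (µM) and Hill coefficient nH
```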
Antagonistic Actions of Fipronil and Neonicotinoids on Apisum RDL1 and Apisum RDL2

The actions of insecticides (fipronil and neonicotinoids) on Apisum RDL1 and Apisum RDL2 expressed in Xenopus oocytes were measured. Fipronil acted as an antagonist on both aphid RDLs, and inhibition curves were generated (Figure 6). The IC50 values for Apisum RDL1bd and Apisum RDL2b were significantly different from each other (Table 2). The neonicotinoid imidacloprid has been shown to act as an antagonist of heterologously expressed RDL [12]. We investigated whether imidacloprid also acted on the A. pisum RDLs. Unlike for An. gambiae and A. mellifera RDLs [12,31], imidacloprid at 100 µM had no detectable effect on responses of Apisum RDL1ad or Apisum RDL1bd induced by GABA at EC50 concentration (Figure 7a) or at 1 mM. We therefore tested whether other neonicotinoids showed any actions on the aphid RDLs. Similar to imidacloprid, clothianidin had no antagonistic actions on responses induced by GABA, either at EC50 concentration (Figure 7b) or at 1 mM. However, thiacloprid reduced the GABA-induced responses of Apisum RDL1bd and Apisum RDL2b variant 1 to 55% and 62%, respectively (Figure 7c, Table 2).

Discussion

We report here the cloning and functional expression of two RDL subunits from the aphid A. pisum, which is a significant pest of legume crops [25]. Phylogenetic analysis and their close proximity in the aphid genome suggest that the two Rdl genes arose from a recent duplication event. Insects of the order Lepidoptera also have more than one Rdl gene. For example, P. xylostella has two Rdl genes appearing to originate from a recent duplication [21,22]. In contrast, RDL1s of B. mori and C. suppressalis co-segregate, as is also the case for RDL2s of the same species [20], perhaps reflecting more distant gene duplications, with a second duplication event giving rise to RDL3 in B. mori [22]. A. pisum possesses the most unusual cysLGIC gene superfamily characterised to date, in that, as well as having a duplicated Rdl gene, it also possesses duplicates of the pHCl and GluCl subunits (Figure 2, Table 1). However, this does not result in an expanded cysLGIC gene superfamily, as no LCCH3, GRD or CG8916 subunits were detected in the A. pisum genome. This feature appears to be particular to the aphid, since insects of the order Lepidoptera have at least the LCCH3 and GRD subunits, as shown by B. mori [22]. With the cysLGIC superfamily of A.
pisum being the most evolutionarily ancient characterised to date [32], it is tempting to speculate that the duplications of Rdl, pHCl and GluCl represent diversification leading to the generation of LCCH3, GRD and CG8916 in more highly evolved insects. A similar finding was noted when characterising the A. pisum nAChRs, where it was concluded that the α5 subunit was the newest member of the insect core group of nAChR subunits [24]. As with RDL of many other insect species, Apisum RDL1 has alternative splicing at exons 3 and 6, giving rise to four possible variants [10,27]. Functional expression of Apisum RDL variants showed that alternative splicing diversifies the functional properties of aphid RDL, as demonstrated by the significantly different GABA EC50 values for Apisum RDL1ad and Apisum RDL1bd. The use of differential splice sites can generate TM3-TM4 intracellular loops of varying length [10]. In the mirid bug Cyrtorhinus lividipennis, this can effectively create a 31-amino-acid insertion, which decreased sensitivity to fipronil [33]. For the aphid RDLs, only Apisum RDL2 was found to have variants in which the intracellular loop varied in length; here, an insertion of a single amino acid (TVR to TEVR) was identified. We did not functionally characterise these variants, as we have already shown that the equivalent variants have similar responses to GABA and fipronil in A. mellifera RDL [31]. With no potential A-to-I RNA editing isoforms detected, the extent of functional diversification of aphid RDL is less than that of other insects such as D. melanogaster and An. gambiae, which have at least 8 and 24 isoforms, respectively, arising from RNA editing [30,34]. RNA editing of An. gambiae RDL was found to influence the actions of ivermectin [34]. Without RNA editing, the aphid RDLs lack this mechanism to potentially alter target-site sensitivity to insecticides. It has been previously noted that duplicated RDLs possess an amino acid substitution at the 2′-position of TM2, which is associated with insecticide resistance. For example, RDL2 of C. suppressalis possesses a 2′ serine instead of the highly conserved alanine present in RDL1 [20], an amino acid change found in dieldrin-resistant insects [8]. For B. mori, either alanine, serine or glutamine is present at 2′ in RDL1, RDL2 and RDL3, respectively [22]. Consistent with previous findings in the aphid M. persicae [23], we found that alanine (in RDL1) or serine (in RDL2) is present at 2′ in the A. pisum RDLs. Functional expression of C. suppressalis RDLs showed that the alanine-to-serine substitution decreased sensitivity to dieldrin, but that both RDLs had similar IC50s in response to fipronil [20]. Studies of RDL from other insect species, such as Nilaparvata lugens, have also shown that the alanine-to-serine mutation does not affect the antagonistic action of fipronil [35]. In contrast, we found that Apisum RDL1bd and Apisum RDL2b had significantly different fipronil IC50s. Apisum RDL2 has the amino acid sequence CATPARVS at TM2, which differs from the NATPARVS present in C. suppressalis RDL2 [20]. Perhaps the unusual presence of the cysteine residue accounts for Apisum RDL2 showing lower sensitivity to fipronil. However, Apisum RDL1ad, which possesses NATPARVA at TM2, has a similar IC50 to Apisum RDL2b, suggesting that the cysteine residue does not underlie the differential sensitivity to fipronil. Apisum RDL1ad and Apisum RDL1bd differ by four amino acid residues located in the N-terminal extracellular domain, which is not associated with the actions of fipronil.
Further experiments, such as site-directed mutagenesis, are required to clarify the basis of the differential sensitivity of Apisum RDL1bd and Apisum RDL2b to fipronil. It will be of interest to see whether differential expression of the aphid RDLs and their splice variants is associated with resistance to insecticides such as fipronil. It is also tempting to speculate that the evolution of insect cysLGICs may have been driven, in part, by gene duplication events conferring increased tolerance to naturally occurring compounds with insecticidal properties. The neonicotinoid imidacloprid was shown to reduce GABA-induced responses in cultured honey bee Kenyon cells [36]. In line with this, more recent studies have shown that 100 µM imidacloprid acted as an antagonist of An. gambiae and A. mellifera RDL expressed in Xenopus oocytes [12,31]. We show here that imidacloprid at 100 µM has no antagonistic actions on Apisum RDL1 or Apisum RDL2, highlighting the fact that RDL can respond to neonicotinoids in a species-dependent manner. In addition, no reduction in GABA response was observed with clothianidin. We found, however, that thiacloprid was able to reduce GABA responses to a similar degree in Apisum RDL1bd and Apisum RDL2b. Both imidacloprid and clothianidin are nitro-substituted neonicotinoids, whilst thiacloprid is cyano-substituted [37]. Perhaps this structural difference underlies the differential actions of the neonicotinoids on the aphid RDLs. The concentration of thiacloprid required to antagonise Apisum RDLs is notably high, and it remains to be determined whether aphid RDL plays any role in the insecticidal effects of neonicotinoids. In conclusion, two RDL subunits in the aphid A. pisum, which appear to be the result of a recent gene duplication event, were cloned and expressed in X. laevis oocytes. The heterologous expression of both aphid RDLs may provide a useful screening tool for the discovery of novel insecticidal compounds. This, in addition to screening against RDLs that have been cloned from other species such as C. suppressalis (a crop pest) [20], C. lividipennis (a predator of crop pests) [33] and A. mellifera (a pollinator) [31], may facilitate the identification of compounds which are selective for insect pests but benign for beneficial species. Furthermore, using expressed RDL carrying the 2′ mutation [15,35] in these screens can highlight novel compounds that remain active on insects with the TM2 mutation, an important step in managing resistance.

Isolation of Rdl1 and Rdl2 from A. pisum

The sequence of Apisum RDL1 identified from the A. pisum genome has been previously reported as a predicted gamma-aminobutyric acid receptor subunit beta isoform X1 (XP_001947125) [24]. A second potential RDL subunit was also reported (PREDICTED: similar to GABA receptor, XP_001947277); however, this sequence lacks the highly variable N-terminal signal leader peptide. In order to clone the full length of the second Rdl subunit, the tBLASTn program [38] was used to search sequence data of the aphid Myzus persicae available at AphidBase (Available online: https://bipaa.genouest.org/is/aphidbase/). This identified a M. persicae sequence (MYZPE13164_G006_v1.0_000138140.1_pep) with a signal peptide, which was then used to identify the equivalent N-terminus and signal peptide in the A. pisum genome using tBLASTn.
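A genome search of this kind can be reproduced with the standard NCBI BLAST+ command-line tools. The following is a minimal sketch of such a tblastn query wrapped in Python; the file and database names are illustrative placeholders, not the files used in this study.

```python
# Hypothetical sketch: search a genome assembly for the region encoding a
# protein query with NCBI BLAST+ (tblastn), mirroring the search strategy
# described above. File and database names are illustrative placeholders.
import subprocess

def tblastn_search(query_faa: str, genome_db: str, out_tsv: str) -> None:
    """Run tblastn of a protein query against a nucleotide genome database.

    `genome_db` must first be built with:
        makeblastdb -in genome.fasta -dbtype nucl -out genome_db
    """
    subprocess.run(
        [
            "tblastn",
            "-query", query_faa,   # e.g. the M. persicae RDL2 N-terminus
            "-db", genome_db,      # e.g. an A. pisum genome database
            "-out", out_tsv,
            "-outfmt", "6 qseqid sseqid pident length evalue sstart send",
            "-evalue", "1e-5",
        ],
        check=True,
    )

if __name__ == "__main__":
    tblastn_search("mpersicae_rdl2_nterm.faa", "apisum_genome", "hits.tsv")
```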
The sequences of Apisum RDL1 and Apisum RDL2 have been submitted to NCBI (Available online: https://www.ncbi.nlm.nih.gov/), and have the accession numbers MH357526 and MH357527, respectively. Total RNA was extracted from 12 adult A. pisum (taken from a lab colony and provided by Jim Goodchild at Syngenta) using Trizol (Fisher Scientific, Loughborough, UK) following the manufacturer's protocol. First-strand cDNA was synthesized using the GoScript Reverse Transcription System (Promega, Southampton, UK). The coding sequences of Apisum Rdl1 and Apisum Rdl2 were amplified from this cDNA by a nested PCR approach using the Q5 High-Fidelity PCR Kit (New England Biolabs, Ipswich, MA, USA), where the first PCR reaction was used at a final dilution of 1 in 5000 as template for the second nested PCR reaction. For Apisum Rdl1, the first PCR reaction used the following primers: N-terminal 5′-CGCCGCCACGCCCGAGC-3′ and C-terminal 5′-GGCGCAAAGTCTGCGAATAAG-3′. The second reaction used: N-terminal 5′-GTCTAGAATGACCGGCCGCGCCGCG-3′ and C-terminal 5′-AGCGGCCGCCTACTTGTCCGCCTGGAGCA-3′. For Apisum Rdl2, the first PCR reaction used the following primers: N-terminal 5′-CGCCGGCACTCTTCTTCTTC-3′ and C-terminal 5′-TATGTAACACTGTAACCGATGAG-3′. The second reaction used: N-terminal 5′-GTCTAGAATGTCCGCGTGGCTGGTGG-3′ and C-terminal 5′-TGCGGCCGCTCAGTCCGCTCCCAGCAGTA-3′. The XbaI (TCTAGA) and NotI (GCGGCCGC) sites included at the 5′ ends of the second-round primers were used to clone the aphid Rdl coding sequences into the pCI vector (Promega). Apisum Rdl clones were sequenced at SourceBioscience (Available online: https://www.sourcebioscience.com/).

Sequence Analysis

The multiple protein sequence alignment was constructed with ClustalX [39] using the slow-accurate mode with a gap-opening penalty of 10 and a gap-extension penalty of 0.1, and applying the Gonnet 250 protein weight matrix. The protein alignments were viewed using GeneDoc (Available online: http://www.nrbsc.org/gfx/genedoc/index.html). Identity and similarity values were calculated using the GeneDoc program. Signal peptide cleavage sites were predicted using the SignalP 4.1 server [40], and membrane-spanning regions were identified using the TMpred program (Available online: http://www.ch.embnet.org/software/TMPRED_form.html). The PROSITE database [41] was used to identify potential phosphorylation sites. The phylogenetic trees were constructed using the neighbor-joining method and bootstrap resampling, available with the ClustalX program, and then displayed using the TreeView application [42].

Preparation and Expression of Apisum RDL1 and Apisum RDL2 in X. laevis Oocytes and Two-Electrode Voltage-Clamp Electrophysiology

Functional studies of Apisum RDL1ad, Apisum RDL1bd and Apisum RDL2b variant 1 were performed using the X. laevis expression system and two-electrode voltage-clamp electrophysiology. Stage V and VI X. laevis oocytes were harvested and rinsed with Ca2+-free solution (82 mM NaCl, 2 mM KCl, 2 mM MgCl2, 5 mM HEPES, pH 7.4), before defolliculating with 1 mg/mL type IA collagenase (Sigma, St. Louis, MO, USA) in Ca2+-free solution. Defolliculated oocytes were injected with 2.3 ng (23 nL) of Apisum Rdl plasmid DNA into the nucleus and stored in standard Barth's solution (supplemented with 50 µg/mL neomycin and 10 µg/mL penicillin/streptomycin) at 17.5 °C.
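For readers who prefer a scriptable alternative to the ClustalX/TreeView pipeline used here, the neighbor-joining step can be sketched with Biopython; the alignment file name is a placeholder, and bootstrap resampling is omitted for brevity.

```python
# A minimal sketch of the tree-building step using Biopython instead of the
# ClustalX/TreeView pipeline described above; the alignment file name is
# illustrative. Neighbor joining on a simple identity distance matrix.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("cyslgic_subunits.aln", "clustal")  # pre-computed MSA
calculator = DistanceCalculator("identity")        # pairwise identity distances
constructor = DistanceTreeConstructor(calculator, method="nj")  # neighbor joining
tree = constructor.build_tree(alignment)
Phylo.draw_ascii(tree)  # quick text rendering; bootstrap support not included
```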
Oocytes 1-7 days post-injection were placed in a recording chamber and clamped at −60 mV with 3 M KCl-filled borosilicate glass electrodes (resistance 0.5-5 MΩ) and an Oocyte Clamp OC-725C amplifier (Warner Instruments, CT, USA). Responses were recorded on a flatbed chart recorder (Kipp & Zonen BD-11E, Delft, The Netherlands). Oocytes were perfused with standard oocyte saline (SOS; 100 mM NaCl, 2 mM KCl, 1.8 mM CaCl2, 1 mM MgCl2, 5 mM HEPES, pH 7.6) at a flow rate of 10 mL/min. Oocytes were selected for experiments if stable after three or more consecutive challenges of GABA at EC50 concentration. The GABA EC50 concentration was determined using GABA concentration-response curves, which were generated by challenging oocytes with increasing concentrations of GABA in SOS, with 3 min between challenges. Curves were calculated by normalising the GABA current responses to maximal responses induced by GABA before and after application. Insecticides were initially diluted in dimethylsulphoxide (DMSO), before diluting to final concentrations in SOS. Final concentrations of 1% DMSO did not affect electrophysiological readings. Fipronil inhibition curves were measured by pre-incubating the oocytes with fipronil in SOS for 3 min, immediately followed by a combination of fipronil and the respective EC50 GABA concentration (20 µM for Apisum RDL1ad, 50 µM for Apisum RDL1bd and 30 µM for Apisum RDL2b variant 1), until the maximum response was observed. This was followed by a wash step for 3 min in SOS and incubation of the oocyte with 250 µM GABA, before repeating with increasing concentrations of fipronil. Inhibition curves were calculated by normalising the responses to the previous control response induced by 250 µM GABA. For measuring the antagonistic actions of neonicotinoids, oocytes were initially incubated with a perfusion of the neonicotinoid in SOS for 6 min, before challenging with a combination of the neonicotinoid and either the respective EC50 GABA concentration or 1 mM GABA.

Data Analysis

Data are presented as mean ± SEM of individual oocytes from three or more separate frogs. The concentration of GABA required to evoke 50% of the maximum response (EC50), the concentration of fipronil required to inhibit 50% of the maximal GABA response (IC50), and the Hill coefficient (nH) were determined by non-linear regression using GraphPad Prism 5 (GraphPad Software, CA, USA). Statistical significance was determined as p < 0.05, using one-way ANOVA (GraphPad Software, CA, USA).
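The non-linear regression step is conceptually simple; the sketch below reproduces it with SciPy instead of GraphPad Prism, fitting a Hill function to normalised concentration-response data. The data points are hypothetical placeholders, not measurements from this study, and an IC50 fit proceeds analogously with a descending Hill function.

```python
# A minimal sketch of the curve-fitting step described above: estimating EC50
# and the Hill coefficient (nH) from normalised GABA responses by non-linear
# regression (here with SciPy rather than GraphPad Prism).
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, nh):
    """Fraction of maximal response at agonist concentration `conc`."""
    return conc**nh / (ec50**nh + conc**nh)

gaba_uM = np.array([3, 10, 30, 100, 300, 1000], dtype=float)  # hypothetical
response = np.array([0.05, 0.2, 0.55, 0.85, 0.97, 1.0])       # normalised

(ec50, nh), _ = curve_fit(hill, gaba_uM, response, p0=[30.0, 1.5])
print(f"EC50 ~ {ec50:.1f} uM, Hill coefficient ~ {nh:.2f}")
```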
Formation of patina on a copper surface in polyacrylate gels with gold nanoparticles

The experimental research demonstrates the effective patination of copper and copper alloys in polyacrylate gels containing gold nanoparticles in order to increase their protective properties. The copper samples were examined by the cyclic voltammetry method, optical microscopy, and X-ray analysis. It has been shown that patinas obtained in the polymer gel solution have better corrosion resistance in comparison with patinas obtained by chemical methods.

Introduction

There are many architectural and culturally valuable objects made of copper and copper alloys [1] which require restoration. Such objects are covered by a layer of patina to provide a decorative appearance and to protect the surface against external influences [2-5]. However, even though patination recipes are well known, it is highly difficult to achieve a reproducible phase composition on the surface of pure copper and copper alloys. Therefore, the search for new methods of patina creation on the copper surfaces of culture and art objects is still an issue of concern. In this work, we propose to use a polymer gel, a mixture of homopolymers of polymethyl methacrylate and polymethacrylic acid with polyethylene glycol, containing gold nanoparticles.

Experimental part

The polymeric gels were based on a copolymer of methyl methacrylate and methacrylic acid filled with polyethylene glycol. The reagents were immediately put into a polyethylene polymerization mold and solidified by thermal polymerization at 60 °C; benzoyl peroxide was used as an initiator. In the experiment we obtained films with a thickness of 0.5 mm by two approaches. The first method included film formation by thermal pressing, and the second one, film formation from a solution of the polymeric gel in composite solvents [6,7]. Methyl cellosolve, butyl acetate, and toluene (in the ratio 75:15:10) were used as the composite solvents [8]. The nanoparticles in the gel solution were obtained by the method of laser ablation. Chemical patination was used to create reference resistance layers (Table 1), with the following compositions:

2. NiSO4 - 20 g, KClO3 - 3 g, distilled water - 77 ml
3. Cu(NO3)2 - 200 g, AgNO3 - 8 g, HNO3 - 6.5 ml
4. PMMA - 20 ml, MAA - 10 ml, PEG - 40 ml
5. GEMA - 20 ml, PMMA - 10 ml, PEG - 20 ml, CF3COONa - 0.5 g, NaClO4 - 0.3 g
6. PMMA - 20 ml, MAA - 10 ml, PEG - 40 ml, Au nanoparticles - 44 mg/l
7. PMMA - 20 ml, MAA - 10 ml, PEG - 40 ml, Au nanoparticles - 32 mg/l

The solutions were placed on the surface of 1×1 cm samples. The morphology analysis of the samples was carried out by means of an optical microscope, Metam PB-21-1 (Russia). Surface phase transformations after metal-gel contact were examined by X-ray diffraction (XRD-7000, Shimadzu). Corrosion resistance of the coatings was evaluated using the cyclic voltammetry method with a potentiostat-galvanostat IPC-Pro MF (Russia). The cyclic current-voltage curves (CV) were registered in a three-electrode cell: the reference electrode (saturated silver chloride electrode), the auxiliary electrode (graphite rod), and the test samples (copper plates, 5×5 mm). The patina obtained on the surface was used as the indicator electrode. Registration of the cyclic current-voltage curves was performed over the potential range of −1000 to 700 mV at a scan rate of 10 mV/s in three media: HCl, KCl, and NaOH.
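Corrosion potential and corrosion current of the kind reported below can be estimated from digitised polarisation data by Tafel extrapolation. The sketch below assumes idealised Butler-Volmer branches and uses synthetic placeholder values; it is not the procedure implemented by the potentiostat software.

```python
# Illustrative sketch: estimating Ecorr and icorr from a polarisation curve
# by intersecting linear Tafel fits; all numerical values are synthetic.
import numpy as np

def tafel_extrapolate(E, i, offset=0.12, window=0.05):
    """Intersect linear fits of E vs log10|i| on each side of Ecorr."""
    Ecorr = E[np.argmin(np.abs(i))]          # potential where net current ~ 0
    def fit(mask):
        return np.polyfit(np.log10(np.abs(i[mask])), E[mask], 1)
    an = fit((E > Ecorr + offset - window) & (E < Ecorr + offset + window))
    ca = fit((E < Ecorr - offset + window) & (E > Ecorr - offset - window))
    log_icorr = (ca[1] - an[1]) / (an[0] - ca[0])  # intersection of the lines
    return Ecorr, 10.0 ** log_icorr

# Synthetic Butler-Volmer data standing in for a digitised polarisation curve
Etrue, itrue, ba, bc = -0.20, 1e-6, 0.12, 0.12     # V, A/cm^2, V per decade
E = np.linspace(-0.55, 0.15, 500)
i = itrue * (10 ** ((E - Etrue) / ba) - 10 ** (-(E - Etrue) / bc))

Ecorr, icorr = tafel_extrapolate(E, i)
print(f"Ecorr ~ {Ecorr:.3f} V, icorr ~ {icorr:.2e} A/cm^2")
```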
Results and discussion

Figure 1 presents the results of X-ray analysis after patination of copper. All X-ray diffractograms (XRD) show the signals of elemental copper. Most of the patinas contain cuprite. The patina obtained in the polymer gel solution also contains a tenorite phase (Cu(OH)2·H2O2). Figure 1b shows the XRD patterns of patinas formed in the polymer solution with nanoparticles at different temperatures. It can be seen that the patina formed at 40 °C is the most stable, because the copper peaks are weaker; this means that the patina is homogeneous, as confirmed by the optical images (Figure 2). The patina formed at 80 °C has a stronger copper XRD signal, which indicates the thinness of the film. The areas with green color point to copper carbonates on the surface, which generally have lower corrosion resistance than cuprite. Values of the corrosion potential and the corrosion current are shown in Table 2. Among the chemically produced patinas, only one has good protective properties in all three aggressive media: the patina formed in the solution of copper and silver nitrate (solution 3). This patina has the lowest value of the current and the highest value of the corrosion potential. The patina formed in the gel solution (MMA, MAA, PEG) has stronger protective properties. When the polymeric gels include gold nanoparticles, the reproducibility of the patina increases, as shown by the error values. The most informative data were obtained after overnight exposure in the electrolyte solution. The corrosion current of the patinas increased by an order of magnitude, but patinas prepared in the polymer gel electrolyte retained a smaller corrosion current than chemically prepared patinas. The results demonstrate that the addition of gold nanoparticles into the gel solution increases the corrosion resistance of the patina. However, increasing the number of particles in the volume of the gel has little effect on the properties of the patina. Figure 3 shows the cyclic voltammetry curves of the patinas obtained by the chemical method in the solution of silver and copper nitrate, together with a patina formed in the polymer gel electrolyte. According to the data, the reduction of oxidants proceeds more actively on the surface of the patina formed in copper and silver nitrate. This testifies to the presence of uncovered (juvenile) areas of copper, which may be seen in the optical images in Figure 2. In contrast, the reduction of oxidants proceeds less actively on the surface of the patina formed in the solutions of MMA, MAA, PEG polymer gels with gold nanoparticles (Figure 3). Thus, patinas formed in polymer gel electrolyte solutions with the addition of gold nanoparticles at a temperature of 40 °C showed the greatest stability.

Conclusion

Cathodic reduction of oxidants on the surface of a patina formed by the chemical method proceeds at a higher rate than on patinas formed in the polymer gel electrolytes.
Designing Smart Shoes for Obstacle Detection: Empowering Visually Challenged Users Through ICT

The paper presents a case of Smart Shoes that use ultrasonic sensors to detect obstacles in front of the user. Additionally, the shoe signals the user by tapping at the foot arch. An evaluative study of the Smart Shoes was conducted with (n=31) users: 17 blind people, 9 with low vision, and 5 non-disabled users. The study was conducted to judge the reliability of the Smart Shoes by evaluating (a) the ratio of obstacles identified to total obstacles encountered, (b) the distance of obstacle apprehension, and (c) the response time. The study was conducted in a controlled and definite environment. The results show this footwear to be 89.5% effective in detecting obstacles such as vehicles, people, furniture, footpaths, poles, and miscellaneous obstacles, with a mean response time of 3.08 seconds. Users' average distance of obstacle apprehension was 108 cm in regular mode and 50 cm in crowd mode. Future research and evaluative studies will be conducted in actual operational/moving environments.

Introduction

Worldwide, it is estimated that approximately 285 million people are visually impaired, of whom 39 million are blind [1]. India accounts for 20% of the world's blind population. Mobility is an important aspect of human life and is adversely affected in people with visual disability. To overcome this limitation, they turn towards assistive devices to take a step closer to independent mobility. The most common and oldest conventional device aiding them in mobility is the Hoover cane. Other options such as guide dogs, GPS, and other tech gizmos have been continuously evolving to make these users independent [2]. The design and evaluative study of Smart Shoes reported in this paper is an effort to leapfrog the current development process of assistive devices for people with visual impairment.

The long Hoover cane is one of the world's oldest products. While the Hoover cane has been a widely accepted mobility aid, there has not been any significant innovation in this domain in the last few decades. In analyzing the existing Hoover cane, we came across a few limitations that need to be addressed to improve the mobility of its users. For instance, users get feedback about obstacles only when the cane touches them. This means a small range of obstacle detection (the length of the cane) [3] and, consequently, less time to react to the obstacle. This can prove dangerous when it comes to obstacles like moving vehicles. Also, using aids like the Hoover cane, guide dogs, or other technologically advanced hand-held gadgets to navigate in a domestic environment keeps one hand always occupied in holding the aid, a hand which could otherwise be used for purposes like safeguarding oneself during an accident.

Many people without disabilities perceive an assistive mobility aid as an indicator of physical disability and either get out of the way or rush to help the user out of pity. What people do not realize is that these are the means used by them to be more independent. The stigma created in society against these aids and the visually challenged has long perpetuated a myth of helplessness and has kept them away from achieving first-class status. In studying most of the accidents [4] among people using the Hoover cane, we found that a primary reason usually is that a cane that is too short or too long cannot provide adequate information in time, compounded by a lack of training.
With reference to neurophysiology, humans are sight-dependent [5]. 30-40% of our cerebral cortex is devoted to vision, as compared to 8% for touch or just 3% for audition [6]. To compensate for our most dominant sense, providing feedback tactilely [7] is an efficient and effective mechanism when coupled with vibrations. We do acknowledge that prolonged exposure to vibrations could cause problems like tactile hallucinations ("phantom vibrations") and whole-body/hand-arm vibration syndrome [8]. For blind people, hearing and touch become the major senses [9]. Considering the noise pollution level of private and public spaces in India, the tap (tactile feedback) emerges as a promising tool to communicate with these users. The Smart Shoes presented in this paper explore the potential of the tap in alerting users to obstacles in front through a novel (and innocuous on prolonged usage) feedback mechanism. This footwear employs a novel method of providing feedback and has a long, customizable range of detection that can aid independent mobility. The remaining sections describe the design of the Smart Shoes, the methods followed in conducting user testing, and the results of this ongoing study, followed by a conclusion indicating future research plans for this project.

The design of the Smart Shoes is our effort to serve visually impaired people by facilitating their mobility. The footwear detects obstacles in a customizable range of up to 2 meters by making use of an ultrasonic sensor and provides feedback to the user through a tapping mechanism at the foot arch. It employs a directional ultrasound sensor that continuously transmits and receives sound waves to detect obstacles in front of the user. The intensity of the tapping varies based on the distance between the user and the obstacle. This feedback mechanism has been developed meticulously so as to be effective and, at the same time, not to have any ill effects on the user's health in the case of long-term product usage. The footwear works in two modes: (a) Regular mode, the default mode, detecting obstacles present in a range of 0-2 meters; (b) Crowd mode, which has a range of 0-1 meters and is suitable for surroundings with frequent obstacles. Based on the mode selected, the processor sends a signal to the arch pad to initiate tapping with the computed intensity; a behavioural sketch of this logic is given below. Ease of usability is what the user interface is based on. Users do not have to remember any complex button combinations, nor do they need extensive training to operate the device. There are two buttons: (a) Power button, used for powering the system on/off; (b) Mode button: the footwear can be used in either of the two modes using the mode button. When it is pressed, the buzzer present in the system indicates the current mode. Figure 3 depicts the two modes of the Smart Shoes and the two buttons.

Fig. 2. Depicts two modes of the Smart Shoes and the power button.

The system is powered by a rechargeable lithium-ion battery. The battery level is indicated by the buzzer when switching on the system. Once fully charged, the system can work for at least 5 hours. The pleasant aesthetics of the footwear help in diminishing the prejudice held by society against assistive devices.
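The mode-dependent feedback described above can be summarised in a few lines. The sketch below encodes the two detection ranges; the exact distance-to-intensity mapping used in the Smart Shoes is not specified in the paper, so the linear ramp is an assumption.

```python
# Behavioural sketch of the Smart Shoes feedback logic: the detection range
# depends on the selected mode (regular: 0-2 m, crowd: 0-1 m) and the tap
# intensity grows as the obstacle gets closer. The linear ramp is an assumed
# mapping; the shoe's actual intensity curve is not given in the paper.
RANGE_CM = {"regular": 200, "crowd": 100}

def tap_intensity(distance_cm: float, mode: str = "regular") -> float:
    """Return a tap intensity in [0, 1]; 0 means no obstacle within range."""
    max_range = RANGE_CM[mode]
    if distance_cm >= max_range:
        return 0.0                         # nothing detected in the mode range
    return 1.0 - distance_cm / max_range   # closer obstacle -> stronger tapping

if __name__ == "__main__":
    for d in (250, 150, 108, 50, 20):      # distances in cm
        print(d, round(tap_intensity(d, "regular"), 2),
                 round(tap_intensity(d, "crowd"), 2))
```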
User Group

A total of (n=31) users participated in the study: 17 blind users, 9 with low vision [10], and 5 non-disabled users (see Figure 3). Non-disabled and low vision users were included in the study to understand whether the presence of the sense of sight affects the usage of the product. These users were blindfolded during the testing.

Experiment Design

To assess the effectiveness of the Smart Shoes in detecting obstacles, controlled trials with (n=31) users were conducted on an artificial obstacle course. The course was designed so as to emulate the obstacles encountered in day-to-day life. All the visually impaired users (n=26) were selected from BPA, Ahmedabad, and were from different backgrounds (age, educational qualification, profession, etc.). All these participants used different modes of navigation from one place to another (cane, human aid, etc.); however, during the testing they were asked to use only the Smart Shoes to navigate among the obstacles. The non-disabled users (n=5) belonged to Ahmedabad, Gujarat, and were happy to volunteer on seeing the experiment.

Before the testing, each user was introduced to the Smart Shoes and their functionality for an average of 5 minutes. They were shown a demonstration of the Smart Shoes: they were asked to remain stationary while one member of the development team moved in front of them, changing the distance between them and allowing them to get a feel of the tapping and its varying intensity. The simple UI of the Smart Shoes allowed the users to get acquainted with the device relatively quickly and be ready for the test.

The testing took place on the campus of BPA, Ahmedabad, Gujarat. A parking space of 12 × 6 meters was used as the test arena, where all the frequently encountered obstacles (such as vehicles, people, footpaths, poles, pillars, furniture, and other miscellaneous obstacles) were re-created artificially (see Figure 4). The obstacles were categorized as follows:

• Vehicle: motorbike, car, cycle, rickshaw, three-wheeler, carts, bus.
• People: our team members simulated the encounters a user might have in daily life with fellow human beings (stationary and moving groups of 3 and 2 people, respectively; see Fig. 5).
• Footpath: peripheral walls of the obstacle course ranging from 15 cm to 45 cm high.
• Furniture: chair with thin cylindrical legs and a ground clearance of more than 45 cm, sofa, etc.
• Miscellaneous obstacles: cardboard blocks of different dimensions.

Fig. 4. Depicts the artificial obstacle course.

All the obstacles were of different dimensions, and users encountered one obstacle at a time. To ascertain the ability and effectiveness of the Smart Shoes, the following performance indicators were studied.

The ratio of obstacles identified to total obstacles encountered: out of all the obstacles encountered by the user while navigating the course, how many were detected before coming in contact with the obstacle. A high ratio indicates high awareness of obstacle presence in the environment. This also reflects the collision rate of a user with the obstacles when mobilized using the Smart Shoes, the relation being inversely proportional to the metric studied.

The distance of obstacle apprehension: the distance between the user and the obstacle when it was detected with the help of the Smart Shoes. A large distance means early detection, alerting the user about the obstacle just in time to take effective measures to avoid it.
Response time: response time is characterized as the time between the user stopping due to an obstacle detection using the Smart Shoes and starting to walk again, assuming that an obstacle-free path has been identified. A low response time means that the user has adapted well to the ways of the Smart Shoes and is able to quickly ascertain the free path when encountering an obstacle.

Results

The results showed that the users were able to detect 89.5% of all encountered obstacles, with a mean response time of 3.08 seconds. Users' average distance of obstacle apprehension was 108 cm in regular mode and 50 cm in crowd mode. A sample image is shown in Figure 5.

Ratio of obstacles identified to total obstacles encountered: the overall ratio of obstacles identified to total obstacles encountered was 0.8948. Users encountered up to a maximum of 40 obstacles during the experiment in the obstacle course. The responses were manually scored, and collisions with the obstacles were noted down during the experiment. The ratio of obstacles identified to total obstacles encountered was calculated post-experiment from the data (see Table 1).

Distance of obstacle apprehension: the Smart Shoes have two modes, regular mode (range 0-200 cm) and crowd mode (0-100 cm). The average distance of early detection was 108 cm in regular mode with a standard deviation of 21.49, and 50 cm in crowd mode with a standard deviation of 7.89 (see Table 2).

Response time: the average response time was 3.08 seconds. Prolonged usage of the product may result in a shorter response time for the user. The responses were scored meticulously through a thorough study of the video recordings of the experiments after testing (see Table 3). The users were skeptical about the feedback mechanism of the Smart Shoes at the beginning of the experiment, leading to a cautious gait. As they became familiar with the product and understood how the feedback works, their walking returned to normal and they gained a better understanding of the change in tapping intensity. The users became more confident as the experiment went on and were able to check for an obstacle-free path using the Smart Shoes more efficiently. In rare cases, users went astray and needed the experimenter's intervention to get back on course. A sketch of how these indicators can be computed from per-trial logs is given below.
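```python
# A sketch of how the three performance indicators could be computed from
# per-trial logs; the records below are hypothetical, not the study's data.
from statistics import mean, stdev

trials = [  # (detected, encountered, response_times_s, apprehension_cm)
    (36, 40, [2.9, 3.4, 2.7], [104, 121, 98]),
    (18, 20, [3.1, 2.8],      [115, 99]),
]

detected = sum(t[0] for t in trials)
encountered = sum(t[1] for t in trials)
times = [x for t in trials for x in t[2]]
dists = [x for t in trials for x in t[3]]

print("detection ratio:", round(detected / encountered, 4))
print("mean response time (s):", round(mean(times), 2))
print("apprehension distance (cm): mean", round(mean(dists), 1),
      "sd", round(stdev(dists), 1))
```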
Conclusions

This paper presented obstacle-detection-enabled Smart Shoes that allow people with visual impairment to identify obstacles in advance. Quantitative controlled trials conducted with (n=31) users on an artificially designed obstacle course showed that the system is able to detect obstacles with an efficiency of 89.5%. Users' average distance of obstacle apprehension was 108 cm in regular mode and 50 cm in crowd mode. The results reflected that the non-disabled users had the longest response time and the shortest distance of obstacle apprehension when using the Smart Shoes. This may have happened due to their sudden loss of the sense of sight, implicitly relating this finding to users who have recently become blind; a further study including recently blind users would shed more light on this. Low vision users had the highest obstacle detection ratio. This may have happened due to the self-confidence in mobility that their residual sight gives them even when blindfolded. They may also depend on their other senses for mobility, which may have led to a high obstacle detection ratio using the Smart Shoes. From this study, we conclude that the Smart Shoes could be used by blind and low vision users to enhance their mobility in daily life. In future, longitudinal research would be required to judge whether the Smart Shoes can augment their conventional way of mobility and thereby make them independent after prolonged usage. The next evaluative studies will be conducted in actual operational/moving environments.

[Figure and table captions: Fig. 1 depicts the ultrasound sensor that detects the obstacles (left) and the design of the Smart Shoes; Fig. 2 depicts the positioning of the ultrasound sensor; Fig. 3 depicts the distribution of user age and level of blindness; Fig. 5 shows a user navigating through the obstacle course wearing the Smart Shoes; Table 1: ratio of obstacles detected to total obstacles encountered; Table 2: average distance of obstacle apprehension; Table 3: average response time observations on the use of the Smart Shoes.]

Of the 26 users with visual disability, 20 had it since birth (of whom 8 had low vision), and the remaining 6 users for periods ranging from 4 to 22 years. The users were distributed in the age group of 16 to 35 years: 45% in the age group of 16-20 years, 35% in 21-25 years, and 10% each in 26-30 and 31-35 years. Nine of the users had undergone a cane-training course. All the users volunteered for the study and were associated with the Blind People's Association (BPA), Ahmedabad, Gujarat.
The average response time of users was 3.08 seconds. The study was conducted on an artificially designed obstacle course, designed carefully with a wide variety of obstacles. Obstacles present in the experiment replicated the most commonly encountered hindrances in the daily lives of users. Users with blindness had an average obstacle detection ratio of 0.889 with an average response time of 2.54 seconds; their average distance of obstacle apprehension was 108.88 cm in regular mode and 51.49 cm in crowd mode. Users with low vision had an average obstacle detection ratio of 0.91 with an average response time of 3.23 seconds; their average distance of obstacle apprehension was 116.21 cm in regular mode and 48.11 cm in crowd mode. The users without disability had an average obstacle detection ratio of 0.89 with an average response time of 4.65 seconds; their average distance of obstacle apprehension was 84.8 cm in regular mode and 48.4 cm in crowd mode.
Magnetic microswimmers exhibit Bose-Einstein-like condensation

We study an active matter system comprised of magnetic microswimmers confined in a microfluidic channel and show that it exhibits a new type of self-organized behavior. Combining analytical techniques and Brownian dynamics simulations, we demonstrate how the interplay of non-equilibrium activity, external driving, and magnetic interactions leads to the condensation of swimmers at the center of the channel via a non-equilibrium phase transition that is formally akin to Bose-Einstein condensation. We find that the effective dynamics of the microswimmers can be mapped onto a diffusivity-edge problem, and use the mapping to build a generalized thermodynamic framework, which is verified by a parameter-free comparison with our simulations. Our work reveals how driven active matter has the potential to generate exotic classical non-equilibrium phases of matter with traits that are analogous to those observed in quantum systems.

The coupling between non-equilibrium activity and long-range magnetic dipole-dipole interactions can lead to new emergent properties for magnetic microswimmers [8,10,28-32]. Similarly rich phenomenology is known to emerge from long-range interactions in phoretic active matter [33], and in particular, due to the interplay between translational and orientational degrees of freedom [34-37]. In determining the potential for such emergent effects, a key difference between biological and artificial magnetic swimmers is in the strength of their respective interactions. While magnetotactic bacteria carry a typical magnetization of the order of ∼10 A·m⁻¹ [21,22], the magnetization can reach values of up to ∼10³ A·m⁻¹ for swimmers with magnetite [38]. The effect of such strong magnetic dipole-dipole interactions on the collective response of magnetic swimmers is still largely unexplored, despite the growing interest in their potential applications for cargo and drug delivery in microscopic environments [39]. Here, we illustrate how strong dipole-dipole interactions affect the collective behavior of magnetic microswimmers confined in a microfluidic channel. Using Brownian dynamics simulations and a coarse-grained analytical framework, we show that the radial dynamics of microswimmers across the channel is equivalent to that of particles diffusing in an effective potential and presenting a diffusivity edge [14]. Consequently, the system is found to exhibit a transition leading to the formation of a condensate at the channel center, which coexists with a surrounding gas. By means of a generalized thermodynamic framework, we characterize the singular behavior of the system and find it to be analogous to the characteristics of the Bose-Einstein condensation (BEC) transition. These concrete predictions are moreover quantitatively verified by simulations, without the need of tuning parameters. Finally, our extensive simulations across the entire parameter space allow us to construct a phase diagram, with phase boundaries that show agreement with the simple criteria obtained from our analytical framework. We start by introducing the microscopic model of magnetic microswimmers which we simulated and used as a starting point for the derivation of a field theory. We consider swimmers that carry a magnetic dipole moment m0 n, along which they self-propel with a constant speed v0, under the influence of a uniform magnetic field B_ext = −B_ext e_z; see Fig. 1(a).
They swim in a three-dimensional cylindrical channel oriented along e_z, and experience a Poiseuille flow described as V_f = v_f(1 − r²/R0²) e_z, with r denoting the radial distance from the center and R0 being the channel radius. The Langevin equations governing the dynamics of their position r and orientation n thus read

$\dot{\mathbf r} = v_0\,\mathbf n + \mathbf V_f + \frac{m_0}{\zeta}\,\nabla(\mathbf n\cdot\mathbf B_{\rm int}) + \boldsymbol\xi$,  (1)

$\dot{\mathbf n} = \Big[\frac{m_0}{\zeta_r}\,\mathbf n\times(\mathbf B_{\rm ext}+\mathbf B_{\rm int}) + \tfrac{1}{2}\nabla\times\mathbf V_f + \boldsymbol\xi_r\Big]\times\mathbf n$,  (2)

where ζ and ζr denote the translational and rotational friction coefficients, taken to be scalar for simplicity, and ξ and ξr are thermal noises of respective variances 2D and 2Dr, with D = kB T/ζ and Dr = kB T/ζr, T being the medium temperature. B_int in Eqs. (1) and (2) is the effective magnetic field induced by the other swimmers, and is obtained from Ampère's law. In what follows, N denotes the number of swimmers in the channel, which is set to 1000 (simulation details can be found in [40]). Their mean density ρ0 = N/(πR0²L) is adjusted by varying the channel length L. Moreover, we fix v0, R0, m0, ζ, ζr, and T to realistic values [8,10] (see Table I in [40]), such that only B_ext, v_f, and ρ0 are varied. As we shall see later, a dimensionless number that combines the relative strength of the magnetic energy versus thermal energy with that of the propulsion and shear velocities versus diffusion plays a key role in determining the behavior of the system. When magnetic interactions are negligible compared to the effect of external driving, the radial dynamics of particles relaxes over a finite timescale τ [30]. The dynamics along the channel direction e_z is then determined by the value of the dimensionless parameter B. When B < 1, the distribution of swimmers along e_z is uniform on average, whereas for B ≥ 1 the system undergoes an instability leading to a dynamical steady state made of a periodic arrangement of traveling clusters, characterized by strong inhomogeneities in the particle distribution along the channel axis (see Fig. 1(d)). In this study, we focus on the stationary radial distribution of swimmers, φr(r) ≡ ⟨ρ(r, t)/ρ0⟩_{z,t}, where ρ(r, t) denotes the particle density. The Poiseuille flow V_f generates a vorticity that orients the swimmers, which are already aligned by the external magnetic field, towards the center of the channel, essentially acting as a confining potential in the radial direction [8,30]. Figure 1(b) shows how, for B ≪ 1, φr(r) is well approximated by a Gaussian, which corresponds to the case where the effective potential is quadratic. Deep in the clustering phase, φr(r) is not Gaussian and dramatically shoots up in the vicinity of r = 0 (see Fig. 1(c)). The scaling of the maximum of φr at r = 0, denoted φ0, with the mean particle density ρ0 is shown in Fig. 1(e). When B is sufficiently small, φ0 barely varies with ρ0, as expected from radial focusing by an effective potential. On the contrary, for large values of B the system exhibits anomalous accumulation of particles at r = 0, as indicated by the abrupt increase of φ0 with ρ0. The parameter B essentially measures how dipole-dipole interactions and alignment with the external magnetic field dominate over thermal fluctuations, as well as how self-propulsion competes with the external flow. The condensation phenomenon described above occurs when B ≫ 1, and thus relies on the key role of magnetic dipolar interactions between swimmers. As a first simplification, we consider the case where the alignment with B_ext dominates over thermal fluctuations: m0 B_ext ≫ kB T.
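In the dilute limit, where the dipolar term is negligible, the radial dynamics reduces to an Ornstein-Uhlenbeck relaxation towards the channel axis, producing the Gaussian focusing seen in Fig. 1(b). The following minimal Brownian dynamics sketch (with illustrative parameters, not those of Table I) reproduces this behavior.

```python
# Minimal sketch of the non-interacting radial dynamics: overdamped
# relaxation r' = -r/tau plus thermal noise, integrated by Euler-Maruyama.
# The stationary per-coordinate variance should approach D_eff * tau,
# i.e. a Gaussian (Boltzmann) profile in the effective harmonic potential.
import numpy as np

rng = np.random.default_rng(0)
tau, D_eff, dt, n_steps, n_swim = 1.0, 0.01, 2e-3, 10000, 2000  # illustrative

xy = rng.normal(scale=0.5, size=(n_swim, 2))  # channel cross-section positions
for _ in range(n_steps):
    xy += -xy / tau * dt + np.sqrt(2 * D_eff * dt) * rng.normal(size=xy.shape)

print("measured var:", round(xy.var(), 4), "predicted:", D_eff * tau)
```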
We moreover assume that the induced magnetic field B_int is negligible compared to B_ext in (2) for the orientational dynamics. Within the above two assumptions, which are met in most experimentally relevant cases [8,10], the rotational dynamics becomes a fast process and can be decoupled from the translational dynamics. The resulting orientational equilibrium is essentially determined by the balance between the magnetic torque (m0 n × B_ext) and the vorticity (½∇ × V_f). Denoting by θ the angle between n and −e_z (see Fig. 1(a)), its stationary value averaged over thermal fluctuations obeys sin θ ≃ r/(τ v0) ≪ 1. Projecting (1) along the radial direction of the channel e_r, the equation governing the dynamics of r reads

$\dot r = -\frac{r}{\tau} + \frac{m_0}{\zeta}\,\partial_r(\mathbf n\cdot\mathbf B_{\rm int}) + \vartheta_r$,  (5)

where the effective Gaussian noise ϑr is delta-correlated. The dynamics of the system in the radial direction is therefore equivalent to that of interacting dipoles that experience an effective temperature and a confining effective harmonic potential U(r) ≡ ½kr² with stiffness k ≡ ζ/τ = kB T_eff/(D_eff τ), which focuses the particles at the center of the channel. Numerical simulations of Eqs. (1) and (2) reveal that the clustering instability leads to a highly dynamic regime where clusters assemble and disassemble continuously due to thermal fluctuations (see SM movie [40], clustering panel). Therefore, longitudinal density inhomogeneities are expected to have little influence on B_int in the steady state, suggesting that the dynamics will be dominated by the leading contribution B_int(r) ≃ −µ0 m0 ρ0 φr(r) e_z in this limit [40]. Inserting this expression into (5), the radial dynamics completely decouples from the longitudinal one. Using the ansatz ρ(r, t) = ρ0 φr(r) φz(z, t) for the particle density inside the channel, the radial equilibrium condition thus follows as the zero-flux steady state

$D_{\rm eff}\Big(1-\frac{\phi_r}{\phi_c}\Big)\,\partial_r\phi_r = -\frac{\phi_r}{\zeta}\,\partial_r U(r)$,  (7)

with kB T_eff ≡ ζ D_eff and φc ≡ kB T_eff/(µ0 m0² ρ0). While T_eff plays the role of an effective temperature for φr in the dilute limit (corresponding to φr → 0) as mentioned above, the strength of the collective effects leading to the density-dependent effective diffusion in (7) is set by φc. Importantly, we observe that the associated density-dependent effective diffusion coefficient vanishes at φr = φc, which places the system of magnetic bacteria in a shear flow in the class of systems that can exhibit a classical analogue of Bose-Einstein condensation of particles in the ground state U = 0 [14,41]. It follows from (7) that for φr < φc, φr is a monotonously decreasing function of U [14]. We denote by φ0 the maximum of φr, which corresponds to U = 0, and define β ≡ (kB T_eff)⁻¹. Using these definitions, the solution of (7) reads

$\phi_r(U) = -\phi_c\,W_0\!\left(-\frac{\phi_0}{\phi_c}\,e^{-\phi_0/\phi_c}\,e^{-\beta U}\right)$,  (9)

where W0(x) is the principal branch of the Lambert W function, which satisfies W0(x e^x) = x. When φ0 < φc, normalization of φr fixes φ0 as a function of T_eff (Eq. 10), where kB Tc ≡ kR0²/φc is defined as the value of kB T_eff for which φ0 = φc. In particular, for T_eff ≫ Tc the effect of dipolar interactions is negligible, such that φr is well approximated by a Boltzmann distribution: φr ≃ (βkR0²/2) exp(−βU), in agreement with our numerical simulations (see Fig. 1(b)). As the systematic derivation of (7) from the particle-level stochastic dynamics gives the expressions of T_eff, φc, and U as functions of the microscopic parameters, a quantitative and parameter-free comparison between the theory and the simulations is possible. This is shown in Fig. 2, whose panel (a) verifies that (10) is in excellent agreement with the simulation results for T_eff > Tc.
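The Lambert-W profile in Eq. (9), as reconstructed above, is straightforward to evaluate numerically; the sketch below checks that it equals φ0 at U = 0 and decays monotonically. Parameter values are illustrative.

```python
# Numerical sketch of the sub-critical density profile: evaluating the
# Lambert-W solution of the diffusivity-edge equation for phi_0 < phi_c.
import numpy as np
from scipy.special import lambertw

def phi_r(U, phi0, phi_c, beta):
    """phi_r(U) = -phi_c * W0(-(phi0/phi_c) * exp(-phi0/phi_c - beta*U))."""
    x0 = phi0 / phi_c
    return -phi_c * np.real(lambertw(-x0 * np.exp(-x0 - beta * U), 0))

U = np.linspace(0.0, 10.0, 6)
print(phi_r(U, phi0=0.8, phi_c=1.0, beta=1.0))  # equals 0.8 at U=0, then decays
```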
As φ0 approaches φc (or equivalently, as T_eff → Tc), φr becomes non-analytical at U = 0, which reflects the formation of a condensate (see Fig. 1(c)). Consequently, the distribution cannot be normalized, and an additional contribution from the ground state needs to be added by hand. Denoting the number of particles in the condensate as Nc (and Nc/N as the corresponding condensate fraction), the distribution reads

$\phi_r(U) = \frac{N_c}{N}\,\frac{kR_0^2}{2}\,\delta(U) - \phi_c\,W_0\!\left(-e^{-1-\beta U}\right)$.  (11)

[Fig. 3 caption (displaced): theoretical boundaries for the ground state and condensation (red) compared with Brownian dynamics simulations (black squares: focusing; red circles: clustering; blue triangles: condensation) at fixed values of J = 2·10⁻⁴ (b) and 5·10⁻⁴ (c); see [40] for details.]

This is the only stable solution of (7) which admits values of φr larger than φc [41]. It thus emerges that the system belongs to the class of diffusivity-edge problems treated in Ref. [14], for which the nonlinear diffusion vanishes for all φr ≥ φc, because the solution given in (11) admits values of φr larger than φc at a single point only. From the normalization of φr, we find that the fraction of particles in the condensate satisfies

$\frac{N_c}{N} = 1 - \frac{T_{\rm eff}}{T_c}$,  (12)

which is consistent with the BEC law in two-dimensional free space [43,44]. Equation (11) predicts the formation of a point-wise condensate at U = 0. We have measured Nc/N in our Brownian dynamics simulations by defining it as (2/(kR0²)) ∫_{φr(U)≥φc} dU φr(U), and found good agreement with the theoretical prediction of (12), as shown (by the red dots) in Fig. 2(b). We have found that measuring the average number of particles N0 in a cylinder of radius 0.005 R0 around the channel center leads to an underestimation of the number of condensed particles (see the hollow squares in Fig. 2(b)). We thus conclude from these observations that the condensate emerging in our simulations occupies a finite volume. This feature can moreover be read directly from the distribution φr and is linked to the regularization of the near-field magnetic interactions, whose details are discussed in [40]. The addition of short-range repulsion between the swimmers is expected to have similar consequences. Following previous works [14,41], the analogy with BEC can be further extended by defining and calculating thermodynamic quantities for the system. The mean potential energy per particle, defined as Ū ≡ (2/(kR0²)) ∫₀^∞ dU U φr(U), can be explicitly calculated (Eq. 13). As in the case of BEC [44], (13) predicts a change of slope of Ū at T_eff = Tc [43]. In particular, for T_eff ≫ Tc it gives Ū ≃ kB T_eff, which corresponds to a two-dimensional ideal gas law. Figure 2(c) shows that the theoretical prediction (13) is well reproduced by the simulations. We note that Ū is generally overestimated in the condensed phase as a consequence of the finite radial extension of the condensate. With the parameters used in the simulations, the radial focusing of particles occurs on scales much smaller than the channel radius (see Figs. 1(b) and (c)). Therefore, the radial confinement of particles is essentially due to the effective potential U(r), and the effect of the channel boundary is negligible. Within the generalized thermodynamics of the BEC in the steady state, a pressure p can be defined for the active fluid via dp = −ρ0 φr(U) dU [14,41]. The difference in pressure between the edge and the center of the channel is thus given by ∆p = ρ0 ∫₀^∞ dU φr(U), which corresponds to the behavior expected for BEC [43-45]. In particular, as Tc ∝ φc⁻¹ ∝ ρ0, ∆p is independent of ρ0 in the condensed phase (T_eff ≤ Tc).
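As a consistency check of the condensate-fraction law quoted in Eq. (12), one can integrate the gas branch of the profile in Eq. (11) numerically; the sketch below (with all constants set to unity for illustration) recovers 1 − T_eff/Tc.

```python
# Numerical check of Eq. (12): integrate the gas-phase Lambert-W profile
# (Eq. (11) with phi_0 = phi_c) and compare 1 - N_gas/N with 1 - T_eff/T_c.
import numpy as np
from scipy.integrate import quad
from scipy.special import lambertw

phi_c, k, R0, kB = 1.0, 1.0, 1.0, 1.0       # illustrative unit constants
Tc = k * R0**2 / (kB * phi_c)

def gas_profile(U, beta):
    return -phi_c * np.real(lambertw(-np.exp(-1.0 - beta * U), 0))

for Teff in (0.25 * Tc, 0.5 * Tc, 0.75 * Tc):
    beta = 1.0 / (kB * Teff)
    n_gas, _ = quad(gas_profile, 0.0, np.inf, args=(beta,))
    n_gas *= 2.0 / (k * R0**2)              # normalised gas fraction
    print(f"T_eff/T_c={Teff/Tc:.2f}: condensate fraction {1 - n_gas:.3f} "
          f"vs 1 - T_eff/T_c = {1 - Teff/Tc:.3f}")
```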
Isotherms of ∆p versus ρ0⁻¹, as represented in Fig. 2(d), therefore exhibit characteristic plateaus for ρ0⁻¹ ≤ ρ0,c⁻¹, where ρ0,c is the value of ρ0 at which Tc = T_eff. To compare (14) with the simulations, we have calculated the total pressure exerted on the condensate, defined via ρ0 ∫_{φr(U)<φc} dU φr(U), by using the measured average distribution φr. ∆p exhibits good agreement with the theory, as seen in Fig. 2(d). As in the case of the mean energy Ū, ∆p appears to be larger than the predicted values in the condensed phase, which is also due to the finite volume occupied by the condensate. The longitudinal clustering of swimmers was characterized in a previous study [30] neglecting the dipole-dipole interaction term in (5), thus effectively describing the limit φ0/φc ≪ 1. As larger values of φ0/φc qualitatively change the shape of the radial distribution, a natural question is how these modifications affect the behavior of the swimmers in the longitudinal direction. Using (9), we find that the longitudinal clustering instability occurs for T_eff > Tc when the condition given by Eq. (15) is satisfied [40]. In the limit of small φ0/φc and with the parameter values considered in Ref. [30], (15) reduces to B ≥ 1, as expected. Below the condensation threshold, where T_eff < Tc, the left-hand side of (15) diverges due to the singularity of φr at U = 0, and the inequality is always satisfied. Our theoretical investigations thus predict that magnetic microswimmers in a quasi-one-dimensional channel exhibit three types of dynamical behavior. When collective effects are negligible, the swimmers are radially focused due to an effective quadratic potential created by the interplay between the external flow and the magnetic field, while being uniformly distributed along the channel axis. When T_eff > Tc and the inequality (15) is satisfied, the system undergoes an instability that gives rise to the formation of clusters that travel along the channel. This longitudinal structure formation persists when T_eff ≤ Tc, while in that case a macroscopic number of swimmers form a condensate at the center of the channel in a BEC-like fashion. A phase diagram in the (T_eff/Tc, J, φc) parameter space summarizing this phase behavior is provided in Fig. 3(a). Our Brownian dynamics simulations at fixed values of J = 2·10⁻⁴ and 5·10⁻⁴ verify the theoretically predicted phase behavior of the system, as shown in Figs. 3(b) and (c) (see [40] for simulation details). To conclude, we have fully characterized the collective behavior of magnetic microswimmers suspended in a microfluidic channel. We have shown the system to exhibit a novel type of non-equilibrium condensation transition, which shows striking similarities with Bose-Einstein condensation. These findings not only enrich the broad set of many-body dynamics exhibited by active matter systems, but also provide guidelines for future designs of controllable functional micro-robotic active matter systems with desired emergent properties. This work has received funding from the Horizon 2020 research and innovation programme of the EU under Grant Agreement No. 665440, JSPS KAKENHI Grant No. 20K14649, and the Max Planck Society.
Experimental Study on Electric Potential Response Characteristics of Gas-Bearing Coal During Deformation and Fracturing Process

Coal mass deforms and fractures under stress, generating electric potential (EP) signals, and the mechanical properties of coal change with the adsorption of gas. To investigate the EP response characteristics of gas-bearing coal during deformation and fracture, a test system to monitor multiple parameters of gas-bearing coal under load was designed. The results showed that abundant EP signals were generated during the loading process and that the EP response corresponded well with the stress change and crack expansion, which was validated against the results from acoustic emission (AE) and high-speed photography. The higher the stress level and the greater the sudden stress change, the greater the EP anomaly. With the increase of gas pressure, the confining action and erosion effect are enhanced, affecting the damage evolution and changing the failure characteristics; as a result, the EP response remains similar in character while its features are amplified. The EP response is generated by charge separation caused by friction and related effects during the damage and deformation of the coal. Furthermore, the main factors governing the EP response differ among loading stages. The presence of gas promoted the EP effect. When failure of the coal occurred, the EP value rapidly rose to a maximum, which can be considered an anomalous signature for monitoring the stability and revealing the failure of gas-bearing coal. The research results are beneficial for further investigating the damage-evolution process of gas-bearing coal.

Introduction

As a basic energy source used worldwide, coal resources play an important part in industrial production and economic life [1]. During mining activities, coal and gas outburst disasters greatly threaten safe production in coal mines, leading to serious casualties and property loss. Examples of such accidents include coal and gas outbursts and rock bursts [2-4]. For example, in 2014, a coal burst seriously damaged mine equipment and resulted in the death of two miners in the Austar coal mine in Australia [5]. Therefore, research revealing the initiation and generation mechanism of coal and gas outburst disasters is of great significance for monitoring the stability of coal-rock mass during mining and predicting the occurrence of coal and gas outburst disasters [6].
Under the combined effect of ground stress, mining-induced stress, and gas pressure, the internal damage of coal-rock mass constantly evolves, finally triggering structural instability and dynamic failure, which causes coal and gas outburst disasters [7]. Monitoring the stress state and damage-evolution process of coal-rock mass is the key to monitoring and early warning of coal and rock outburst disasters [8]. At present, with the gradual depletion of coal resources in shallow underground coal seams, coal mining is shifting to deep underground coal seams [9]. Deep coal seams generally contain abundant high-pressure gas, and the coupling effect of stress and gas imposes an increasing influence on the state of the coal mass [10]. When gas is sufficiently adsorbed by the coal, it is stored in the pores of the coal mass in adsorbed and free states, forming a gas-solid coupling system with the coal mass [11]. The formation of this gas-solid coupling system changes the physical and mechanical properties of the coal (including its mechanical properties, deformation and fracture process, and failure mode) and further influences the occurrence of various disasters, such as coal and gas outbursts and rock bursts [12].

Energy is released during the initiation and occurrence of coal and gas outburst disasters, in forms including elastic energy, acoustic energy, and electromagnetic energy [13]. Therefore, numerous geophysical methods (such as acoustic emission (AE) and electromagnetic radiation) have been developed to monitor the stability and predict the failure of the coal mass [14,15]. These methods have been widely explored in the laboratory, and corresponding technologies have also been applied in mining activities in the field [16-20].

Previous studies indicated that coal-rock mass can become electrically charged during deformation under load. This triggers an electric potential (EP) response on the material surface. The EP response is closely related to the deformation and fracture of the coal mass, characterizing its stress state and damage-evolution process. An abnormal change of the EP can be considered a precursor of failure of coal-rock materials. Yoshida [21] found that the EP changed markedly just prior to dynamic rupture. Takeuchi et al. [22,23] studied the electrokinetic properties of quartz and granite and demonstrated that fracture surfaces and sliding friction surfaces were charged with densities of up to 10⁻⁴-10⁻² C/m². By exploring the EP effect of coal-rock not containing gas under load, Wang et al. [24] showed that the change of the EP was highly correlated with loads and rates of load change, and that the distribution of the EP field corresponded well to that of the strain field. Additionally, they discussed different mechanisms of the EP response by referring to electromagnetic radiation (EMR) [25]. Archer et al. [14] found that EP signals were stimulated during linear-elastic and inelastic deformation, associated with micro-cracking; that work provided an effective and advanced method for structural health monitoring of rocks. Niu et al. [26] presented the EP response characteristics and their mechanism in a similar simulation of coal-mining activities. The above research provides a favorable theoretical basis for monitoring the failure of coal-rock mass by measuring the EP response [27-29].
Previous research on the EP effect of coal-rock mass was mostly carried out on coal-rock mass not containing gas. However, research on the EP response characteristics of coal dynamic failure under the stress-gas coupling effect, and on the influence of gas, is rare. To address this, the influences of the stress-gas coupling effect on the deformation and fracturing process of the coal mass were investigated. Furthermore, a multi-parameter test system for gas-bearing coal under load was designed to carry out compressive tests under different gas pressures. During the loading process, the various responses (including EP and AE) during the deformation of gas-bearing coal under load were synchronously recorded. Simultaneously, the evolution of coal fractures was captured in real time by high-speed photography using an industrial camera. The AE count can reflect the micro-fracturing occurring during the damage and deformation of the coal mass, while the amplitude of the signals can reflect the intensity of micro-cracks [30]. High-speed photography can capture the instantaneous state of crack evolution of the coal mass during different loading stages [31]. This combination allows us to analyze the EP response alongside the damage evolution of the coal mass. Therefore, the research results are beneficial for further revealing the initiation mechanism of coal-rock dynamic disasters. In this way, the attempt is made to provide valuable information about the damage evolution under the stress-gas coupling effect and to monitor the failure of the coal mass by using the EP response.

Specimen System

The experiment was conducted in a Faraday shielded room at the China University of Mining and Technology, which effectively prevents electric noise from affecting the recorded signals [15]. The experimental system was designed and built specifically to acquire multiple types of information during the loading process of gas-bearing coal (see Figure 1). The subsystems are as follows.
As shown in Figure 1, the system comprises a hermetically sealed chamber subsystem, an axial compressive loading subsystem, an EP monitoring subsystem, an AE monitoring subsystem, a high-speed camera subsystem, and a gas inflating-extracting subsystem.

(1) The hermetically sealed chamber subsystem includes a chamber cover, a loading shaft, gas spiracles, waveguide rods, and a visible window (see Figure 1). Gas can be pumped into the chamber through the spiracles and kept at a set pressure after the valve is closed.

(2) The EP monitoring subsystem includes electrodes, data transmission lines, an amplifier, an A/D converter, and data storage. The EP signals measured by the electrodes on the specimen surface are transmitted through the data lines to the host computer, which analyzes and displays them in real time. The maximum sampling frequency is 100 kHz.

(3) The AE monitoring subsystem provides parameter setting, signal acquisition, data storage, graphics display, waveform acquisition, and spectral analysis. During loading it simultaneously acquires the counts and energy of the AE signals and locates the damage positions inside the specimen.

Conventional AE sensors cannot endure high gas pressure, and in previous monitoring set-ups the sensors were placed on the outer surface of the gas-bearing container, which caused signal loss and inaccurate AE measurements [32]. To solve this problem, specialized waveguide rods were designed, as shown in Figure 2. One end of each waveguide rod is connected to the specimen and the other end to an AE sensor. Coupling agent favourable to acoustic-wave conduction is applied uniformly to both ends of the rods. Composite materials fill the gap between the waveguide rods and the cylinder wall to prevent acoustic waves from leaking from the rods into the gas cylinder, so that the AE signals generated by the specimen are transmitted to the AE sensors as completely as possible. The composite materials are connected to the waveguide rods through screw joints so that the rods can move relative to the cylinder wall and the specimen can be readily dismantled; the space between the rods and the composite materials is sealed with a flexible seal ring. This innovative design makes it possible to transmit the AE signals of coal fracture out of the high-gas-pressure environment through the waveguide rods.
(4) The axial compressive loading subsystem includes a loading frame, a hydraulic drive device, a controller host, and control software (see Figure 3). It records displacement, load, and time synchronously during loading. The oil source is placed outside the Faraday shielded room to reduce interference with the experiment.

(5) The high-speed camera subsystem includes the industrial camera, the illumination source, and a tripod. Instantaneous crack expansion on the specimen surface during loading is imaged clearly through the visible window; the photographs have a resolution of up to 1920 × 1080 pixels at 40 frames per second.

(6) The gas inflating-extracting subsystem includes a vacuum pump, gas transmission and exhaust lines, and the gas cylinder. Gas is injected into the chamber through the gas transmission line and extracted after the experiment.
Under the coupled action of loading stress and gas pressure, the system can be used to conduct compression experiments on coal, during which the EP and AE signals and crack photographs are obtained and analyzed in real time.

Specimen Preparation

The coal was taken from the No. 5 coal seam of the Yangzhuang coal mine, Huaibei City, China. Specimens were prepared to a standard size of 50 (width) × 50 (height) × 100 (length) mm (see Figure 4).

Experimental Scheme

The experiment was carried out according to the following steps.

(1) First, all specimens were placed in a sealed dry container for 24 h before the experiment to prevent them from absorbing excessive water.
(2) The airtightness of the cylinder was tested to ensure a reliable seal, and the various parts of the test system were connected and switched on.

(3) The electrodes and waveguide rods (with their other ends connected to the AE detectors) were attached to the surface of the coal specimen, which was then sealed in the cylinder, and the cylinder was evacuated with a vacuum pump. To provide electrical isolation, two thin Teflon plates were placed on the top and bottom of the specimen.

(4) Gas was injected into the cylinder. After the target pressure was reached, it was held for 8 h so that the coal could fully adsorb the gas.

(5) The experimental parameters were set.

(6) The load was applied to the specimen by the press while the various data, such as EP and AE, were measured synchronously.

(7) After the specimen failed, the experiment was ended and the gas in the cylinder was released.
Test Results of Multi-parameters during the Damage of Gas-Bearing Coal under Load

In this series, experiments under different gas pressures were carried out. As an example, the experimental results under 2.0 MPa are analyzed below; the experimental parameters are listed in Table 1.

(1) Responses of strain and stress

During loading, the gas pressure was kept at 2 MPa after adsorption equilibrium was reached. The stress applied to the top surface of the specimen by the loading system is defined as the loading stress (LS); owing to the gas pressure, the initial LS was not zero. The curves of stress and strain versus loading time are shown in Figure 5.

It can be seen from Figure 5 that, with increasing loading time, the stress and strain on the specimen gradually rose. Both changed suddenly, at small amplitudes, at 299.1 s and 359.6 s, implying that substantial damage had occurred in a local zone of the specimen; the local crack expansion produced an instantaneous increase in specimen deformation while the stress fluctuated. With continued loading, stress and strain kept rising. At 396.9 s the primary crack formed and the structure of the specimen became unstable: the stress changed abruptly and dropped rapidly while the strain increased rapidly.

(2) Responses of AE signals

Previous research shows that under external load the coal mass is damaged, leading to the initiation and expansion of cracks and thereby triggering numerous AE events [30]. Dislocation and slippage between particles in the coal matrix can also break bridge bonds between coal molecules, likewise generating AE [33]. AE counts characterize how many micro-cracking events occur during damage and fracturing of the coal mass, while the signal amplitude reflects the strength of the micro-cracking. The AE parameters can therefore be used to describe the evolution of damage and crack expansion and the release of energy in the coal mass, and hence to judge the deformation and fracturing of the specimen and predict its failure [20]. Figure 6 shows the curves of AE counts and MAS (mean amplitude strength) versus loading time under a gas pressure of 2 MPa.
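As a concrete illustration of how these two AE parameters can be derived from a raw sensor record, the following minimal Python sketch computes per-window ring-down counts and a mean amplitude strength. It is not the authors' acquisition software: the threshold, the window length, and the exact MAS definition are assumptions made for illustration.

```python
import numpy as np

def ae_counts_and_mas(waveform, threshold, fs, window_s=1.0):
    """Per-window AE counts and mean amplitude strength (MAS).

    waveform  : 1-D array of AE sensor output
    threshold : detection threshold, same units as waveform (assumed)
    fs        : sampling frequency, Hz
    window_s  : analysis window length, s (illustrative)
    """
    n = int(fs * window_s)
    counts, mas = [], []
    for start in range(0, len(waveform) - n + 1, n):
        w = np.abs(np.asarray(waveform[start:start + n], dtype=float))
        above = w > threshold
        # One AE "count" per fresh upward crossing of the threshold.
        counts.append(int(np.count_nonzero(above[1:] & ~above[:-1])))
        # MAS here: mean of the supra-threshold amplitudes (0 if none).
        mas.append(float(w[above].mean()) if above.any() else 0.0)
    return np.array(counts), np.array(mas)
```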
As shown in Figure 6, in the early stage of loading there were few AE signals, with low counts and amplitudes; these signals were generated mainly by mutual slippage of primary cracks in the compacted gas-bearing coal and by friction between particles. With the continued growth of stress, the damage accumulated and the specimen gradually passed from the elastic stage into the plastic damage stage. In this process new cracks initiated, split, and expanded along weak structural planes, generating new fracturing signals, so both the AE counts and the amplitude increased, and the amplitudes became more concentrated. At 299.1 s and 358.6 s the specimen suffered substantial damage: cracks expanded rapidly, producing many fracturing events, fractured zones formed locally, and high-energy gas repeatedly impacted the coal along the weak structural planes with a significant frictional effect. The AE counts rose impulsively, with high amplitude, reflecting a sudden increase in the number and strength of micro-cracks in the coal mass and the release of large elastic energies. This agrees with the sudden stress changes at the corresponding times in Figure 5 and serves as complementary evidence. When the primary crack formed at 396.9 s, the AE count and amplitude both reached their maxima, indicating that the number and strength of micro-cracks rose dramatically, causing extensive crack expansion and coalescence and triggering the failure of the specimen. After the primary crack formed, the AE signals rapidly diminished and almost disappeared.

(3) Responses of crack expansion

The evolution of crack expansion at different times was recorded by high-speed photography, as shown in Figure 7.

As shown in Figure 7, in the early stage of loading (for example, at 20.0 s), although some damage had accumulated in the specimen it was insignificant, and no micro-crack had formed on the surface. With increasing stress, the micro-defects and micro-cracks in the coal-rock mass, after continual extension and expansion in the early stage, gradually aggregated and connected in local zones, showing a certain self-organization and triggering the generation of surface micro-cracks. At 299.1 s the stress changed suddenly at small amplitude and two separate thin, long micro-cracks formed on the specimen surface. At 350.5 s the damage and fracturing intensified: the two earlier micro-cracks coalesced and expanded further, forming multiple secondary cracks, while microscopic breakage appeared on the left side of the specimen and the top tilted to the left. At 356.8 s, as the stress approached its maximum, the width and length of the primary cracks increased further, the secondary cracks multiplied and kept expanding, and the damage zones on the left side and top of the specimen grew, increasing the damage degree. At 395.9 s the cracks in the middle of the specimen front expanded and gradually coalesced while the cracks at the bottom became staggered, and the structure of the coal mass grew increasingly unstable. At 396.9 s the primary crack formed: the cracks in the middle were fully connected, the width and length of the cracks in the lower part increased greatly, the cracks at the bottom connected, and secondary cracks developed into secondary primary cracks. The through-running cracks showed a combination of tensile and shear failure.

(4) Responses of EP signals

As shown in Figure 8, abundant EP signals were generated during the damage of the gas-bearing coal under load. The changing trend of the EP was consistent with that of the stress, with a close correspondence between the two. As the stress increased, the damage in the specimen evolved: new fracturing occurred continually (seen as the growth of AE count and amplitude in Figure 6) and cracks expanded continually (see Figure 7), and throughout this damage evolution the EP intensity kept rising. During loading, the EP intensity rose stepwise at 299.1 s and 359.6 s, with significant jumps; these coincide with the sudden changes of stress and strain in Figure 5 and of AE count and amplitude in Figure 6. At these moments the damage increased dramatically, micro-cracks multiplied rapidly with growing strength, and large amounts of energy were released, as also verified in Figure 7. At 396.9 s, when the stress maximum was reached, the primary crack formed and the specimen lost its bearing capacity; the specimen was damaged severely and rapidly, and the EP fluctuated dramatically and rose rapidly to its maximum, an extremely significant response. Afterwards, once the specimen had failed and the stress had dropped, the EP intensity rapidly decreased and stabilized at a low level.

The strain response characteristics were similar to those of the stress, with one difference at the large stress drop at 299.1 s: because the specimen was still in the elastic loading stage, the strain changed little even though the EP response was significant. Once the specimen entered the plastic stage, and especially near failure, the strain response became more pronounced. This indicates that strain monitoring is more sensitive when the specimen is plastic, particularly for the main rupture.
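The stated correspondence between the EP and stress trends can be quantified, for instance with a windowed Pearson correlation computed over the two synchronously logged series. The sketch below is illustrative only; the paper reports the relationship qualitatively, and the window length is an assumption.

```python
import numpy as np

def rolling_corr(ep, stress, win=200):
    """Windowed Pearson correlation between EP and stress series.

    Both series are assumed to share the same sampling instants;
    `win` is the number of samples per window (illustrative choice).
    """
    ep = np.asarray(ep, dtype=float)
    stress = np.asarray(stress, dtype=float)
    out = np.full(ep.shape, np.nan)
    for i in range(win, len(ep) + 1):
        e, s = ep[i - win:i], stress[i - win:i]
        if e.std() > 0 and s.std() > 0:
            out[i - 1] = np.corrcoef(e, s)[0, 1]
    return out
```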
Representative time points were selected and the corresponding LS levels calculated. On this basis, the abnormal response characteristics of EP, AE, and crack expansion were obtained, as shown in Table 2; their statistics are displayed in Figure 9.

As shown in Table 2 and Figure 9, whenever the stress changed suddenly, the EP and AE tended to change suddenly as well, showing significant abnormal response characteristics, while the cracks expanded greatly and the damage intensified. The higher the stress level and the larger the sudden stress change, the more significant the damage of the coal mass and the more dramatic the abnormal responses of EP and AE: the increment of the EP was larger, the AE counts were more numerous, the AE intensity was higher, and the crack expansion was enhanced, further aggravating the damage of the coal mass.

Therefore, based on the changes of stress and AE and on the analysis of crack expansion, the changing trend and response characteristics of the EP can reflect the stress state of the specimen and reveal its damage-evolution process.
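One simple way to turn the observed stepwise EP jumps into an automatic precursor flag is to threshold the robust z-score of the first differences of the EP record. This is a hedged sketch of such a detector, not a method given in the paper; the threshold value is an assumption.

```python
import numpy as np

def ep_jump_times(t, ep, z_thresh=5.0):
    """Flag stepwise EP jumps as candidate damage precursors.

    A sample is flagged when the robust z-score of the first difference
    of the EP record (scaled by the median absolute deviation) exceeds
    z_thresh. The threshold is an illustrative assumption.
    """
    t = np.asarray(t, dtype=float)
    ep = np.asarray(ep, dtype=float)
    d = np.diff(ep)
    mad = np.median(np.abs(d - np.median(d)))
    mad = mad if mad > 0 else 1e-12  # guard against a flat record
    z = 0.6745 * (d - np.median(d)) / mad
    return t[1:][np.abs(z) > z_thresh]
```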
The EP Response Results under Different Gas Pressures

The EP responses under different gas pressures are shown in Figure 10. Note that a gas pressure of 0 MPa means that no gas was injected into the cylinder and the air originally in the cylinder was not extracted, so the air pressure remained at standard atmospheric pressure.

As shown in Figure 10, the EP responses under the different gas pressures exhibited essentially the same behaviour, similar to the results in Section 3.1 (4). With the increase of stress, the damage of the loaded specimen steadily worsened and the EP intensity gradually grew. When the specimen suffered serious structural damage, the stress changed suddenly and the EP changed abruptly (see 487.2 s in Figure 10a, 464.4 s in Figure 10b, 299.1 s and 359.6 s in Figure 10c, and 372.7 s in Figure 10d). After the primary crack formed, the specimen lost its bearing capacity, the stress dropped precipitously, and the EP rose rapidly to its maximum, the largest sudden change (see 580.3 s in Figure 10a, 472.4 s in Figure 10b, 396.9 s in Figure 10c, and 395.9 s in Figure 10d).

The broken strength, the strain maximum, the EP maximum, and the EP variation coefficient of the specimens under the different gas pressures were computed, as shown in Figure 11.
(1) The effective stress on the gas-bearing coal can be expressed as follows [34] (a numerical sketch of these formulas is given at the end of this subsection):

σ_e = σ_L − ∅P − (a ρ_s R T / V_m) ln(1 + bP)

where σ_e, σ_L, ∅, P, a, ρ_s, R, T, b, and V_m refer to the effective stress, the LS, the equivalent pore coefficient, the gas pressure, the ultimate gas adsorption per unit mass of rock at the experimental temperature, the apparent density of the coal, the molar gas constant, the absolute temperature, the adsorption constant, and the molar volume, respectively. Since the other conditions are unchanged, the formula can be simplified when only the broken strengths (the maximum effective stress on the specimen) under different gas pressures are compared: the relative broken strength σ′_e is expressed as the difference between the LS maximum and the gas pressure:

σ′_e = σ_L,max − P

As shown in Figure 11a, with the growth of gas pressure, the relative broken strength of the specimen gradually declined. Under a gas pressure of 3 MPa it was significantly reduced, by 18.1 % compared with the no-gas case.

(2) The deformation of the specimen can be represented by the axial mean strain. As shown in Figure 11b, the deformation of the specimen increased greatly with increasing gas pressure; at 3 MPa the maximum strain was 112.3 % larger than in the no-gas case.

(3) As shown in Figure 11c, the EP maximum tended to occur just before or after the primary cracking of the specimen, which was the most significant characteristic of the abnormal EP response. With increasing gas pressure the EP maximum fluctuated slightly but generally rose, implying that gas pressure promoted the EP response; the high level of the EP maximum makes it all the more valuable for analysis.

(4) The EP variation coefficient, the coefficient of variation of the whole EP record (the ratio of its standard deviation to its mean), objectively describes the fluctuating response of the EP. As shown in Figure 11d, like the EP maximum, the EP variation coefficient also generally grew with increasing gas pressure.

The above results show that gas pressure promoted the EP response, and that the EP response characteristics can be used to monitor the evolution of damage and fracturing of gas-bearing coal (especially coal with a high gas content).
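To make the two formulas above concrete, here is a minimal numerical sketch. The adsorption term follows the reconstructed form of the effective-stress expression given in item (1); all symbols mirror the definitions there, consistent SI units are assumed, and none of the defaults are measured values from this study.

```python
import numpy as np

R = 8.314  # molar gas constant, J/(mol K)

def effective_stress(sigma_L, phi, P, a, rho_s, T, b, V_m):
    """Effective stress on gas-bearing coal (reconstructed expression).

    sigma_L : loading stress, Pa        phi : equivalent pore coefficient
    P       : gas pressure, Pa          a   : ultimate adsorption, m^3/kg
    rho_s   : apparent density, kg/m^3  T   : absolute temperature, K
    b       : adsorption constant, 1/Pa V_m : molar volume, m^3/mol
    """
    swelling = a * rho_s * R * T * np.log(1.0 + b * P) / V_m
    return sigma_L - phi * P - swelling

def relative_broken_strength(sigma_L_max, P):
    """Simplified comparison metric: LS maximum minus gas pressure."""
    return sigma_L_max - P

def ep_variation_coefficient(ep):
    """Coefficient of variation of the whole EP record (std over mean)."""
    ep = np.asarray(ep, dtype=float)
    return ep.std() / ep.mean()
```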
Damaging and Fracturing Process of Gas-Bearing Coal

Coal is a typical heterogeneous structure containing a great number of micro-defects, including pores, cracks, and dislocations, i.e. Griffith defects [35]. This is clearly verified by observing the microstructure of gas-bearing coal with the scanning electron microscope (SEM) (see Figure 12).

Cracks first initiated at the edges of primary cracks. After the critical breaking strength was reached, cracks expanded along a certain angle, generating new cracks. With the continued increase of stress, the damage of the coal mass worsened: many micro-cracks appeared, expanded, split, closed, and connected with one another along the directions of weakest strength, forming a crack failure zone of a certain width and finally producing the cracks observed in the specimen (see Figure 7). Figure 13 displays images of different fracture surfaces in the damaged gas-bearing coal at different magnifications, in which the generation and expansion of cracks during the fracturing of the coal, as well as the inflection, bending, and splitting during expansion, can be clearly observed.
The EP Response Analyses

As a macromolecular mixture, coal is composed of multiple atomic groups whose interiors are connected through bridge bonds such as covalent bonds, hydrogen bonds, and van der Waals forces. The atomic groups carry non-uniformly distributed charges and therefore show polarity outward. As a result, the micro-surface of the rock mass exhibits a weak electrical property [36], with surface charge densities of the order of 10⁻⁵~10⁻⁴ C/m² [37], endowing the rock mass with a certain conductivity. Under external influences, the charged groups on the surface can undergo charge separation and form an electric field. When gas-bearing coal is damaged and deformed, the coal-matrix particles, mineral particles, and cement in its internal structure undergo relative slippage and dislocation, generating free charges through friction; the combination of triboelectrification and thermionic emission induces the EP response. In addition, the initiation and expansion of cracks breaks the cementitious chemical bonds (even covalent bonds) between coal molecules, generating dangling bonds and hence further charge separation. Furthermore, when stress is applied to micro-cracks, stress concentration occurs at the crack tips, causing the energy of the coal molecules there to rise dramatically, distorting the molecular structures so that outer electrons escape [16,19,24,26,28].
The charge separation and accumulation occurring in these processes create an electrostatic charge field, which can be regarded as a point source of the surface EP effect. The interior of the coal mass consists of heterogeneous coal matrix and pores filled with gas molecules, and the coal matrix differs greatly from the gas in dielectric constant, so a polarization electric field forms at the interfaces between the different dielectrics. The EP at a point within the coal mass can thus be considered the superposition of the fields of the continually generated charges under the combined effect of the variable electrostatic field and the polarization field [38]. To simplify the solution, the EP can be calculated with the image-charge method [39]. As shown in Figure 14, assume that the dielectric constants of two semi-infinite dielectrics (coal matrix and gas) are ε_1 and ε_2 and that their interface is M. Suppose a point charge of quantity q appears at a point O, and let O′ be the mirror image of O with respect to the interface M, carrying the image charge q′.

The EP at a point P_1 located in the same dielectric as O is denoted U_1, and the EP at a point P_2 in the other dielectric is denoted U_2. Thus

U_1 = (q / 4πε_1)(1/r + K_12/r_1),  U_2 = q(1 − K_12) / (4πε_2 r_2)

where r, r_1, r_2, and K_12 = (ε_1 − ε_2)/(ε_1 + ε_2) refer to the distance of P_1 to the point O, the distance of O′ to the point P_1, the distance of O to the point P_2, and the reflection coefficient of dielectric ε_1 with respect to ε_2, respectively.

This model can be generalized to finite boundaries. As shown in Figure 15, the point charge q is bounded within a finite space by four boundaries (A, B, C, and D) and isolated from the different dielectrics of the external environment. The initial image charges of q generated at the four boundaries are q_1, q_2, q_3, and q_4, respectively. Each existing image charge in turn generates new image charges across the other boundaries, so there are infinitely many such images; however, the farther a new image charge lies from the point charge, the less it contributes to the EP at the measurement point [40]. Through simplification, the EP at the measurement point S can be expressed as

U_S = (q / 4πε_1) Σ_{i=0..n} Σ_{j=0..m} K_ab^i K_ab^j (1/r_1ij + 1/r_2ij + 1/r_3ij + 1/r_4ij)

where K_ab^i and K_ab^j denote reflection coefficients, with the superscripts i and j counting reflections at the interfaces parallel to the x and y axes, n and m the corresponding truncation orders of the mirror imaging, and r_1ij, r_2ij, r_3ij, and r_4ij the distances of the image charges to the measurement point S.
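The truncated double sum can be evaluated directly. The sketch below implements the image-charge superposition for a rectangular cross-section with the source inside it, using a single reflection coefficient K for all four boundaries, which is a simplification of the model above (whose formulation allows different coefficients for the x- and y-parallel interfaces). Geometry, truncation order, and permittivities are illustrative assumptions.

```python
import numpy as np
from itertools import product

def ep_at_point(meas, src, q, Lx, Ly, K, order=4, eps1=8.85e-12):
    """EP at `meas` due to a charge q at `src` inside [0, Lx] x [0, Ly].

    Every image of the source across the four boundaries is scaled by
    K**(number of reflections); the infinite image lattice is truncated
    at `order` periods per axis, since distant images contribute little.
    `meas` must not coincide with the source or any image position.
    """
    x0, y0 = src
    xm, ym = meas
    U = 0.0
    for m, n in product(range(-order, order + 1), repeat=2):
        for sx, sy in product((1, -1), repeat=2):
            # Image at (2m*Lx + sx*x0, 2n*Ly + sy*y0); m = n = 0 with
            # sx = sy = 1 is the true source charge itself.
            xi = 2 * m * Lx + sx * x0
            yi = 2 * n * Ly + sy * y0
            # Reflections per axis: |2m| for the unmirrored image,
            # |2m - 1| for the mirrored one (likewise for n).
            refl = abs(2 * m - (0 if sx == 1 else 1)) \
                 + abs(2 * n - (0 if sy == 1 else 1))
            r = np.hypot(xm - xi, ym - yi)
            U += q * K**refl / (4 * np.pi * eps1 * r)
    return U

# Example: EP 1 cm from a 1 nC charge centred in a 10 cm x 10 cm section.
print(ep_at_point((0.06, 0.05), (0.05, 0.05), 1e-9, 0.1, 0.1, K=0.5))
```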
The Influence of Gas on EP Effect

Because the pore structure of coal is well developed, a large amount of gas is adsorbed by the coal mass after complete adsorption, while non-adsorbed gas moves freely in the coal pores. The adsorbed and free gas and the coal pores form a gas-solid coupling system, whose formation changes the mechanical properties and the damage and fracturing process of the coal mass, and thereby the response characteristics of the EP [6].
(1) The influence of gas on the mechanical properties of the coal mass

After gas is fully adsorbed by the coal mass, the surface free energy of the pores decreases [41]. As a result, the attractive force between coal molecules on the fracture surfaces decreases and the capacity of the matrix to restrain the coal molecules weakens, triggering expansion-induced deformation of the coal matrix. Microscopically, this appears as reduced cohesion between coal-matrix particles, which lowers the force and energy required for failure of the coal mass, so the failure strength declines while the deformation increases; this was verified by the results of Section 3.2, as shown in Figure 11a,b. Additionally, free gas enters the larger microscopic fractures in the coal under the pore pressure, which raises the normal stress acting on the fracture walls, splits and expands the fracture structure, and weakens the frictional resistance of the cracked surfaces. The pore structure is thereby changed and its mechanical strength reduced [42].
(2) The influence of gas on the damage and fracturing process of the coal mass

The deformation and fracturing of the coal mass is a discontinuous, non-uniform process in which local zones sometimes expand or shrink. On the one hand, gas deteriorates the structure of the coal mass and promotes its damage and fracturing; on the other hand, gas provides a confining pressure that changes the stress state inside the specimen [42,43]. Under the axial compressive stress the coal mass deforms transversely, inducing transverse tensile stress. Under the combined action of axial compressive stress, transverse tensile stress, and transverse gas pressure (confining pressure), the specimens fail in a combination of tension and shear, and the fracture direction makes a certain angle with the axial direction. With increasing gas pressure, the difference between the confining pressure and the peak loading stress causing failure decreases, this failure characteristic becomes more pronounced, and the included angle grows larger (see Figure 16).

(3) The influence of gas on the EP response

The presence of gas promoted the evolution of damage and fracturing of the coal mass; in particular, the expansion and friction effects of the cracks promoted the EP response. In addition, free gas migrating and diffusing in the coal pores continually collided and exchanged energy with the edges of micro-cracks, generating a streaming potential of a certain intensity as well.

The EP response was triggered by the cracking of the coal mass under load, and an abnormal sudden change of the EP can reveal its failure. During the fracturing of gas-bearing coal under load, the higher the gas pressure, the more readily cracks initiated and propagated in the coal mass and the stronger the friction effect. As a result, the EP effect became more significant, and the EP maximum and EP variation coefficient both gradually rose, as shown in Section 3.2 (Figure 11c,d).
Analysis of the EP Response of Gas-Bearing Coal in Different Loading Stages

Taking the gas pressure of 2.0 MPa as an example, the damage process of gas-bearing coal under load can be divided into approximately five stages (see Figure 17) [32].

(1) AB stage: under the applied stress, the original structural planes and micro-cracks were gradually compacted and closed. The EP intensity in this initial stage was therefore generally low but fluctuated greatly. The EP effect in this stage arose mainly from the friction generated by the closure of primary crack surfaces, while new cracks hardly occurred.

(2) BC stage: in this elastic stage, the coal matrix underwent mainly linear-elastic deformation with little plastic damage. With increasing strain the stress rose steadily, so more and more primary cracks reached their critical strength, causing crack initiation, expansion, and splitting and a steady rise of the EP intensity. The EP response in this stage stemmed mainly from friction between coal-matrix particles and crack surfaces, with a small contribution from crack expansion. The presence of gas aggravated the damage and deformation of the coal mass, significantly promoting the EP response.

(3) CD stage: in this yielding stage, after the stress reached the yield point, some irreversible deformation appeared in the specimen. Micro-cracks expanded greatly, and the resulting charge separation became the dominant mechanism of the EP response; the EP response was relatively active and the EP intensity rose continually.
(4) DE stage: Plastic deformation dominated this stage. Damage of the specimen constantly intensified, and micro-cracks and secondary cracks rapidly expanded and connected, resulting in the occurrence of a primary crack. In the process, a great number of intermolecular and even intramolecular chemical bonds in the coal were broken, generating many further charge separations. Additionally, the rapid expansion of cracks produced a friction effect, and a great number of charges accumulated under these two effects. Because local zones were seriously damaged, the electron-escape effect caused by stress concentration was also enhanced. The combined effect of multiple factors caused many charges to accumulate instantaneously. As a result, dramatic fluctuations appeared, and EP signals rapidly rose to a maximum. The promotion effect of gas was extremely significant (as shown in Figures 10 and 11).
(5) EF stage: Primary cracks appeared and cracks fully connected, depriving the specimen of its bearing capacity. After the specimen was completely damaged, the friction effect, crack expansion, and other sources of EP signals essentially ended. Therefore, the EP intensity rapidly decreased and then stabilized.
Combined with the analysis in Section 3.1, the EP response was closely related to the loading state, the AE response, and crack expansion, and it can express the evolution of cracks and damage during the loading of gas-bearing coal. The presence of gas promoted the EP response. As shown in Figure 16, during the DE stage the specimen was heavily damaged and fractured, so failure occurred in the coal, and the abnormal EP response was extremely significant. Failure of the specimen meant that its bearing capacity rapidly decreased, that is, LS rapidly declined, as can be seen in zone G of Figure 17. In this case, EP rapidly rose to a maximum, which can be taken as the abnormal characteristic of the failure of gas-bearing coal and used to monitor the failure of the coal mass.
Research Significance of the EP Effect of Gas-Bearing Coal
In the mining of gas-bearing coal, dynamically monitoring the damage and fracturing process of gas-bearing coal is a prerequisite for warning of coal and gas outburst disasters. EP monitoring can further reflect the stress level and damage state of the coal mass at different loading stages. Therefore, when the stress state of the coal mass cannot be tested directly, monitoring the EP response provides a useful reference for monitoring the damage-evolution process of gas-bearing coal. Further investigating the EP effect and its mechanism in gas-bearing coal helps to explore the damage-evolution process of the coal mass and benefits the study of the disaster-causing mechanism of coal failure under the stress-gas coupling effect. It also provides a new idea for monitoring the stability (nondestructive detection) of gas-bearing coal based on the EP response, as well as for exploring the initiation and occurrence of rock and gas outburst disasters.
Compared with traditional geophysical signals such as electromagnetic radiation and AE, the EP response is more accurate. Its monitoring imposes low requirements on shielding of environmental noise and non-contact electromagnetic interference, and signal screening is not complex. The EP response therefore offers clear advantages in engineering applications [26]. It is of considerable significance for monitoring the damage evolution of gas-bearing coal seams and providing an indication prior to dynamic disasters in mines.
Conclusions
During the loading and damaging of gas-bearing coal, multiple types of data were measured and analyzed, with the following results:
(1) Abundant EP signals are generated during the damage of gas-bearing coal under load. With growing stress, the damage of the specimen was aggravated and the EP was strengthened; AE counts and amplitude increased, and crack expansion intensified. When the specimen underwent local fracturing and stress changed suddenly, EP and AE tended to change suddenly as well, and crack expansion was significant and constantly aggravated. The higher the stress level and the greater the sudden change of stress, the more dramatic the damage of the coal mass and therefore the greater the abnormal response of EP and AE. The EP response showed similar characteristics under different gas pressures, and the presence of gas promoted the EP response. The changing trend and response characteristics of EP reflect the stress state and reveal the damage-evolution process of the specimen.
(2) Under the coupled effect of stress and gas, the damage of the coal mass constantly intensified, causing internal cracks to initiate, then propagate, and finally converge and connect, triggering the fracturing of the coal mass. Charge separation occurred through crack expansion, friction between crack surfaces and the coal matrix, electron emission induced by stress concentration, etc., and the EP response was thereby triggered. Furthermore, the calculation of EP is simplified with the imaging (image-charge) method.
(3) After gas was fully adsorbed by the coal mass, the surface free energy of pores decreased, which reduced the intramolecular and intermolecular attractive forces. This led to expansion-induced deformation of the coal and a reduction of cohesion, lowering the failure strength of the coal mass and increasing deformation. Additionally, free-state gas entered large fractures in the coal mass under the effect of pore pressure; it had a splitting and expansion effect on the fracture structure and weakened the frictional resistance of crack surfaces. The presence of gas therefore promoted crack expansion and the friction effect, strengthening the EP effect. In addition, the electrokinetic effect generated by the flow of free-state gas in pores also exerted a certain influence on the EP effect.
(4) At different loading stages, different factors dominated the EP response of gas-bearing coal. In the early stage of loading, the friction effect played the dominant part, while crack expansion dominated in the later period of loading. Electron emission caused by stress concentration and the electrokinetic effect induced by gas flow both contributed throughout the loading process. During the failure of the specimen, the EP rapidly rose to a maximum, as did the AE count; signals showed a high amplitude, and cracks rapidly expanded and ran through the specimen from top to bottom, finally leading to the failure of the gas-bearing coal. After the specimen was completely damaged, EP signals rapidly decreased and then stabilized. The abnormal characteristic of EP can be taken as an index for monitoring the stability of gas-bearing coal and warning of the failure of the coal mass.

Figure 1. Experimental system of multi-data acquisition during the loading process of gas-bearing coal (with a list of the parts of the experimental system).
Figure 5. Curves of stress and strain with respect to loading time. The red dotted lines indicate the mutation times of the curves.
Figure 6. Curves of AE counts and MAS with respect to loading time.
Figure 7. Pictures of fracture expansion in gas-bearing coal under load at different time moments; red dashed lines trace the tracks of crack expansion, while red wireframes denote the zones of the specimen where cracks are concentrated.
Figure 8. Curves of strain and EP with respect to loading time.
Figure 9. Statistics on EP and AE responses at different stress levels.
Figure 10. Changes of stress and EP during damage of the coal mass under load at different gas pressures. Black and red solid lines represent the changes of stress and EP with loading time, respectively. (a) 0 MPa, (b) 1 MPa, (c) 2 MPa, (d) 3 MPa.
Figure 11. Statistical results of EP responses under different gas pressures. (a) Failure strength of specimen, (b) strain maximum of specimen, (c) EP maximum, (d) EP variation coefficient.
Figure 12. SEM images of microstructures of the coal mass before loading. (a) Primary pores are abundant and evenly distributed, mostly sponge-shaped, at 500× magnification; (b) numerous cracks in pore clusters at 5000× magnification. Red curves trace the cracks, and red circles mark their locations.
Figure 13. SEM images of microstructures of damaged gas-bearing coal. (a) Primary pores deformed, at 1000× magnification; (b) pores damaged, closed, and then connected, at 2000× magnification; (c) tensile cracks bent and split, densely distributed, at 250× magnification; (d) fracture surface with step-shaped and river-shaped fractures, at 100× magnification. Red rectangles mark the locations of fractures.
Figure 14. Sketch map of the imaging method for solving EP based on a single boundary, where O and O' refer to the true and image charges, while U1 and U2 denote the EPs at points P1 and P2, respectively.
Figure 15. Schematic map of the imaging method for solving EPs under finite boundaries. The red solid circle and the pink box refer to the actual point charge and the four boundaries, respectively. Brown solid circles denote the initial image charges of the actual point charge at these boundaries, while blue solid circles represent new image charges generated from the initial image charges at the corresponding boundaries. The solid box marks the location of the point S.
Figure 17. Complete stress-strain curves during the damage of gas-bearing coal under load.
Table 1. Setting of experimental parameters.
Table 2. Statistics on characteristics of EP, AE, and crack expansion at different loading time moments (stress levels).
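Figures 14 and 15 refer to computing the EP by the method of image charges. As a rough illustration of that idea only (a sketch under simplifying assumptions: a 2-D rectangular domain, grounded boundaries, and truncation after the first-order images, whereas Figure 15 indicates that images are generated recursively), one can superpose the potentials of the true charge and its mirror images:

# Rough sketch of the image-charge superposition idea of Figures 14-15.
# Assumptions: 2-D domain, four grounded boundaries, only first-order images
# kept; the paper's method generates higher-order images recursively.
from math import hypot, log, pi

EPS0 = 8.854e-12

def potential_2d(q, xq, yq, x, y):
    """2-D (line-charge) potential of charge density q located at (xq, yq)."""
    r = hypot(x - xq, y - yq)
    return -q / (2 * pi * EPS0) * log(r)

def ep_with_images(q, xq, yq, x, y, w, h):
    """Potential at (x, y): true charge plus its four first-order images
    (mirror charge -q across each boundary x=0, x=w, y=0, y=h)."""
    u = potential_2d(q, xq, yq, x, y)
    for xi, yi in ((-xq, yq), (2*w - xq, yq), (xq, -yq), (xq, 2*h - yq)):
        u += potential_2d(-q, xi, yi, x, y)
    return u

# Hypothetical example: charge at the centre of a 50 mm x 100 mm specimen,
# EP evaluated at a point near the specimen surface.
print(ep_with_images(1e-12, 0.025, 0.05, 0.045, 0.05, w=0.05, h=0.1))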
17,455.2
2019-02-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Estimating Base Saturation Flow Rate for Selected Signalized Intersections in Al-Najaf City. The correct saturation flow rates for the specific circumstances must be used to calculate delays and the level of service at intersections. As a result of a lack of local data, practitioners often use default values from overseas software developers. The base saturation flow rate is an important factor for timing traffic signals. Despite the 1,900 pc/h/ln value suggested by the Highway Capacity Manual (HCM), the base saturation flow rate differs from city to city, depending on local driving habits and traffic conditions. It is therefore crucial to estimate it under local conditions; otherwise, erroneous decisions with incorrect results may be made. This study attempts to estimate the base saturation flow rate in Al-Najaf City. The following situations were observed: turning movement (through or right); gradient (up and down); number of through lanes; and speed limit (60 and 80 km/h). The mean headway from a total of 9,931 through-moving vehicles in 187 queues was calculated to be 1.55 seconds. The base saturation flow rate was therefore determined to be 2,323 pc/h/ln. This result is substantially higher than the 1,900 pc/h/ln proposed by the HCM, but it is comparable to results from other nations with similar traffic conditions and driving habits. The results show significant differences between the saturation flow rates when the conditions of the movements differ in terms of the above characteristics. Recommendations are made regarding the most appropriate values to use under different conditions.
INTRODUCTION
Although the definitions of saturation flow are consistent in the currently accessible reference material, the technique for measuring saturation flow on location is occasionally inconsistent and ambiguous. This fact sheet provides the crucial details needed for acquiring and comprehending saturation flows, together with the engineering judgment necessary for on-site measurement and subsequent calibration of saturation flows [1]. Although this is meant to be a best-practice guide, the modeler may decide to apply engineering judgment and the currently accessible reference materials for identifying appropriate saturation flows for modeling and analysis purposes. In traffic modeling, saturation flow serves as a crucial calibration and validation parameter. Saturation flows substantially impact networks' capacity, delays, queues, and saturation level [2,3]. When traffic modeling is used to aid in the design of new intersections or the modification of existing intersections, the accuracy of the models is crucial for assessing the impact on the road network if applied on-site. For modeling purposes, it is crucial to measure saturation flow precisely. It helps modelers and surveyors obtain consistently correct data as input to models used for calibration or validation. The Saturation Flow Information document has been created for this purpose. This study attempts to estimate the base saturation flow rate in Al-Najaf City. The average saturation headway was measured to be 1.55 seconds; therefore, the corresponding base saturation flow rate was found to be 2,323 pc/h/ln. This is higher than the 1,900 pc/h/ln suggested by the HCM. The document includes examples of the standards for Main Roads and describes how saturation flow should be assessed on the spot. A crucial element in traffic modeling is the lane saturation flow, and the accuracy of these flows has a big impact on the model's output.
Saturation flow rates have been the subject of numerous studies around the globe, although these were all carried out under typical conditions [4][5][6]. Figure 1 lists some of the older investigations, the locations where they were done, the mean saturation flow discovered, and the number of participants in each study. The Texas Transportation Institute recently finished significant research [7]. This study examined the impact of heavy vehicles, the posted speed limit, the volume of traffic, the local population, and the number of approach lanes on the saturation flow rate. They discovered that the base saturation flow rate under ideal circumstances is 1905 pc/h/ln, that a one mph reduction in the speed limit causes a nine pc/h/ln decrease, and that the saturation flow rate of an approach with two through lanes is 130 pc/h/ln higher than that of an approach with one through lane. Bester and Meyers studied the saturation flow rate for through traffic at six signalized intersections in Stellenbosch, South Africa [8]. The reported saturation flow rates ranged from 1,711 to 2,370 (with an average of 2,076) pc/h/ln. The study concluded that these values are much higher than in other countries, which could indicate the aggressiveness of local drivers.
DEFINITION OF SATURATION FLOW AND METHODOLOGY
Saturation flow is defined as the maximum flow that can be discharged from a traffic lane when there is a continuous green indication and a continuous queue on the approach. It expresses a lane's maximum capacity and can be influenced by factors such as road layout, topography, visibility, and vehicle classes, such as heavy vehicles. The basic traffic signal capacity model, depicted in Figure 1, assumes that when the signal changes to green, the flow across the stop line increases swiftly to a rate known as the saturation flow, which remains constant until the queue is exhausted or the green window closes [10]. Figure 2 shows a rectangular model of the saturation flow rate, an idealized depiction of saturation flow at a signalized junction. The saturation headway is the average headway (time gap) between vehicles occurring after the fourth or fifth vehicle in the queue and continuing until the last vehicle in the initial queue clears the intersection [10]. The saturation flow for each sample should be calculated. Outliers, or measurements that do not meet the requirements, are to be eliminated. The saturation flow values for the remaining valid measurements are to be averaged, representing the saturation flow for the lane. It is acknowledged that measuring saturation flows for all lanes on site can be time-consuming and costly, but it is essential to the quality of modeled outputs. It is not always possible to measure saturation flow on-site due to congested traffic conditions during peak periods, exit blocking, low demand during off-peak periods, or insufficient green time due to network operational strategies or capacity issues.
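The rectangular capacity model described above implies the standard textbook relation between saturation flow and lane capacity, c = s * g / C, with g the effective green time and C the cycle length. The one-liner below makes it concrete; the timings are made up for illustration and are not data from this study.

# Textbook relation implied by the rectangular capacity model: lane capacity
# equals saturation flow times the effective green ratio, c = s * g / C.
s = 2323.0         # saturation flow rate (pc/h/ln), the value found in this study
g, C = 30.0, 90.0  # effective green time and cycle length (s), hypothetical
print(f"lane capacity ~ {s * g / C:.0f} pc/h/ln")  # ~774 pc/h/ln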
Measurement of a Similar Intersection
If it is impossible to measure saturation flow at an intersection at any time, then an alternative is to take measurements from a similar intersection. This may be a neighboring intersection with similar geometry, signal timings, and traffic volumes. The use of this method must be detailed in the modeling report.
Calculation from the SIDRA Program
Where measurement of saturation flow is not possible for base cases or for proposed intersections, saturation flows can be estimated based on the site geometry and lane usage. Modelers must consider whether the SIDRA saturation flow values represent the driving behavior at the modeled intersection by comparing the calculated saturation flow with available site-measured values. If the average of the site values is found to differ by more than 4 % from the SIDRA values, the modeler must apply a local site factor to the SIDRA-calculated lanes.
BACKGROUND
Start-up lost time and saturation headway. To determine accurate saturation flow rates, start-up lost time needs to be understood and considered. The principle of start-up lost time can be described as follows [12]: when the signal at an intersection turns green, the vehicles in the queue start crossing the intersection. The vehicle headways can be described as the time elapsed between successive vehicles crossing the stop line. The first headway is the time taken until the first vehicle's rear wheels cross the stop line. The second headway is the time between the crossing of the first vehicle's rear wheels and the crossing of the second vehicle's rear wheels over the stop line. The first driver in the queue needs to observe and react to the signal change at the start of the green time. After this observation, the driver accelerates through the intersection from a standstill, resulting in a relatively long first headway. The second driver performs the same process, except that this driver can react and accelerate while the first vehicle begins moving. This results in a shorter headway than the first, because the driver had an extra vehicle length over which to accelerate. This process carries through to all following vehicles, where each vehicle's headway is slightly shorter than that of the preceding vehicle. This continues until a certain number of vehicles have crossed the intersection and start-up reaction and acceleration no longer affect the headways. From this point, headways remain relatively constant until all vehicles in the queue have crossed the intersection or the green time has ended. This constant headway is known as the saturation headway and can start to occur anywhere between the third and sixth vehicles in the queue. Figure 2 illustrates the situation described above.
To calculate the saturation headway from the above example in Figure 2, the following equation is used [13]:

$$h_s = \frac{\sum_{j=n}^{l} h_j}{l-n+1},$$

where $h_s$ is the saturation headway (s), $h_j$ the discharge headway of the $j$-th queued vehicle (s), $n$ the position of the queued vehicle from which the saturation flow region starts, and $l$ the position of the last queued vehicle. This saturation headway $h_s$ can be used to determine the maximum number of vehicles that can be released during a specified green time, as well as the saturation flow rate, $s = 3600/h_s$. The saturation flow rate $s$ is an important parameter for estimating the performance of vehicular movement at signalized intersections. The saturation flow rate for a lane group is a direct function of vehicle speed and separation distance. The established concept for the determination of capacity relies on the concept of saturation flow [14].
Intersections were surveyed for this study. It was important to identify and observe intersections that represented the conditions described above. The following criteria were also taken into account for selecting intersections:
• The gradient for normal intersections should be as flat as possible;
• Standard lane widths of 3.7 m should be available;
• The queues of through traffic should be long enough to facilitate the observation of saturation flow rates;
• No parking or bus stops should be near the intersections;
• Low volumes of non-motorized vehicles and low volumes of heavy vehicles should be present.
Based on these criteria, the following conditions were selected for observation: 60 km/h speed limit, flat gradient. To measure the saturation flow rate, one signalized intersection in Al-Najaf City was selected.
Figure 3 shows the location of the study sites. Especially at peak times, there are lengthy queues at some intersections. All data were gathered during weekday peak times over April and May 2023. To collect traffic statistics, video surveillance recorded all turning maneuvers, including left, through, and right turns. Ideally, the camera is positioned to show the viewer the beginning and end of each phase, the end of the queue for each lane, and the lane's stop line. Figure 5 shows a screenshot from a video recording near the Al-Salam intersection. In the few instances where the end of the queue was hard to observe, it was assumed to occur where there was a significant gap (greater than 2 seconds) between two successive vehicles. Table 1 shows the sample size from the selected intersection. The total number of queues considered was 187, and the total number of vehicles used was 9,931.
RESULTS
The results of the study are given in Tables 2 and 3. The general statistics are shown in Table 2, and the results relative to the specific conditions are shown in Table 3. The HCM recommends a base saturation flow rate, but it also advises that local traffic patterns and driver behavior be evaluated, because they impact the capacity of a signalized intersection. Since the saturation flow rate directly impacts signal timing, its precise determination for a given location is crucial to the design of signal timing.
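As a quick illustration of the computation just described (a sketch with hypothetical headway values, not the study's data), the following snippet averages the headways in the saturation region and converts the result to a flow rate:

# Saturation headway and base saturation flow rate from discharge headways,
# following h_s = sum(h_j, j = n..l) / (l - n + 1) and s = 3600 / h_s.
def saturation_flow_rate(headways, n=4):
    """headways: discharge headways (s) of queued vehicles, in queue order
    (index 0 = first vehicle). n: 0-based queue position where the saturation
    region is taken to start (here the fifth vehicle)."""
    sat = headways[n:]                 # headways in the saturation region
    h_s = sum(sat) / len(sat)          # mean saturation headway (s)
    return h_s, 3600.0 / h_s           # flow rate in pc/h/ln

# Hypothetical queue: the first few headways are longer because of
# start-up lost time, then they settle to the saturation headway.
queue = [3.1, 2.4, 2.0, 1.8, 1.6, 1.55, 1.5, 1.55, 1.6, 1.5]
h_s, s = saturation_flow_rate(queue)
print(f"h_s = {h_s:.2f} s, s = {s:.0f} pc/h/ln")
# With h_s = 1.55 s, as measured in the paper, s = 3600/1.55 ~ 2323 pc/h/ln.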
The base saturation flow rate in Al-Najaf City was calculated in this study. Data gathered at one signalized intersection were used to achieve this. The mean headway from a total of 9,931 through-moving vehicles in 187 queues was calculated to be 1.55 seconds. The base saturation flow rate was therefore determined to be 2,323 pc/h/ln. This result is substantially higher than the 1,900 pc/h/ln proposed by the HCM, but it is comparable to results from other nations with similar traffic conditions and driving habits. To compute the base saturation flow rate, the average discharge headway of 1.55 seconds for all intersections and Eq. (2) were used as follows: s = 3600/1.55 ≈ 2,323 pc/h/ln.

Figure 2: Typical traffic signal capacity model [11].
LinSig and SIDRA use saturation flow for calibration, while Vissim and Aimsun use it for validation. The throughput of any technique is significantly impacted by the saturation flow predicted at signalized stop lines. The on-site stop-line saturation flow may be influenced by a number of variables, all of which must be accurately simulated in the model. These elements consist of: (a) geometry, (b) gradient, (c) visibility, (d) gap acceptance for turning traffic, (e) lane width, (f) downstream blocking. The saturation flow is calculated for each sample using the following formula [10]: saturation flow = (pcu or veh)/time (s) × 3600.
Figure 4: Aerial image from the Maps app of the Al-Salam intersection.
Figure 5: Screenshot from a video recording made near the Al-Salam intersection.
Table 1: Chosen intersection and data collected.
3,031
2023-01-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Dielectric Slab Mode Antenna for Integrated Millimeter-wave Transceiver Front-ends A novel type of integrated dielectric antenna is presented, which is suitable for low-loss integrated transceiver front-ends in the upper microwave or millimeter wave frequency ranges. The proposed antenna comprises a dielectric high-permittivity substrate acting as a grounded slab waveguide and a simple planar lens on top for beam focusing. The guided wave is gradually transformed to free space by a curved ground plane for end-fire radiation from the substrate edge. Apart from high radiation efficiency due to very low conductor losses, the use of a standard substrate material also simplifies manufacturing and allows accommodating MMICs or bias circuitry at minimum cost. Simulation and measurement results are presented for a scaled prototype in X-band. Simulation studies were also conducted at millimeter-wave frequencies, where the low-loss advantage is even more evident. Having dimensions of 10 mm × 18 mm, an example design provides a gain of 15 dBi at 60 GHz and a radiation efficiency of more than 80 % if a Duroid 6010LM substrate is used. Good input impedance matching is achieved in a bandwidth of over 20 %, covering the entire unlicensed 60-GHz band. Introduction Among the most important parts of a wireless system is the antenna, since it strongly influences the overall receiver sensitivity and the link budget. In the near future, wireless transmission for consumer products will also happen at much higher frequencies than nowadays, i.e. in the millimeter wave (mm-wave) frequency range. This is required to achieve very fast data exchange and HD video streaming between all kinds of consumer products [1][2][3]. High radiation efficiency is particularly important due to significant free-space loss, very limited battery capacity in portable devices, and consequently low transmit power levels, which are distributed over wide bandwidths. In this context, a multitude of problems arises at higher frequencies which were not formerly experienced to such an extent in the lower microwave range. Those include high conductor losses and the critical interconnection of a transceiver MMIC to an external antenna. On-chip antennas have other well-known drawbacks. Their radiation efficiency on conductive high-permittivity silicon is poor [4,5], and in spite of the short wavelength, they still occupy a non-negligible area on an MMIC chip, which is an important cost factor. A novel hybrid antenna concept is presented in this paper (Figure 1). It is suitable for overcoming the above-mentioned difficulties. Guidance and beam collimation of a surface wave are achieved by dielectric means to reduce conductor losses to a minimum. Using a planar substrate, this so-called slab mode antenna can be manufactured at low cost by a printed circuit process and is suitable for hybrid integration with active MMICs. Furthermore, the end-fire radiation parallel to the substrate plane complements the usual radiation in perpendicular directions. This may help to increase the overall antenna coverage or increase the throughput in a MIMO multi-antenna system. Section 2 describes the antenna assembly and its operating principle. Simulation and measurement results for a fabricated scaled prototype at 12 GHz are shown in Section 3. An additional simulation study of a 60-GHz example antenna is also presented to demonstrate its suitability for mm-wave frequencies. Figure 1 shows a conceptual assembly of the proposed slab mode antenna.
The principal part is a planar high-permittivity substrate (εr ≈ 10) on a ground plane, which serves as a grounded dielectric slab guide for the propagating TM0 surface wave. The main electric field component of this mode is normal to the ground plane. Depending on the dielectric material, such a slab guide can have very low transmission loss. Dielectric dissipation is usually the dominant loss mechanism due to the absence of field singularities in the slab guide. An advantageous field distribution leads to low current densities and therefore to low conductor losses in the ground plane. The latter are particularly low if a thin low-permittivity insulation film (εr ≈ 2) is introduced between the substrate and ground. Then, the tangential magnetic field intensity right above the ground plane is lowered and current densities are reduced. The TM0 slab mode is excited by a coplanar wave launcher (Figure 2). Coplanar waveguide (CPW) was chosen as the input line because it is compatible with MMICs and with the relatively thick substrate necessary for sufficient field confinement. The substrate thickness d is on the order of one third of a wavelength in the dielectric medium, or about 0.5 mm at 60 GHz for the exemplarily used Duroid 6010LM laminate (εr = 10.2) from Rogers Corp. Such a thickness is still easy to handle in a manufacturing process. The launcher's large metallized back-short vias can simultaneously be used as RF or DC ground or for heat dissipation from active MMICs. This kind of wave launcher was first used for a transition from CPW to substrate-integrated image guide [6] and is described there in more detail. Comparable TM0 slab mode launchers were published by other authors [7,8]. However, the presented design is superior in terms of low parasitic radiation, an achievable bandwidth of over 20 %, and an almost perfect front-to-back ratio due to the reflector-like back-short vias. It was also used in a hybrid integrated mm-wave front-end at 60 GHz [9]. MMIC transceiver front-ends and IF or bias circuits can be mounted on the same substrate that is used for the antenna (Figure 1). Consequently, the antenna is easy to integrate and cost-efficient. Besides wire-bonding, the flip-chip technique is available to achieve very low interconnect loss to the CPW antenna input. A very short CPW section is feasible and recommended for lowest loss. Description of the Antenna Parts The slab mode launcher generates a cylindrical slab wave in the forward half space, which would result in a very wide beam width. A simple planar dielectric lens on top of the substrate is capable of focusing the beam effectively. Field simulations illustrate how the TM0 surface wave on the grounded slab is collimated by the planar lens. The guided wave is then gradually transformed into an ungrounded slab mode by the curved ground plane and is finally radiated from the substrate edge in the end-fire direction (Figure 3). The underlying focusing effect of the planar lens is discussed in the following subsection, along with a quantitative analysis of the propagation constant in the dielectric slab. The necessary curved ground body can be fabricated at low cost by plastic injection molding and subsequent electroplating. It may be part of the housing of a later wireless consumer product. Since the wave is loosely guided in the ungrounded part of the slab and the effective permittivity is low, reflections or scattering from the substrate edge are negligible.
This is in spite of the high refractive index contrast between the substrate material and the surrounding air, which normally causes a strong field-trapping effect [5,10]. Directive radiation in the vertical plane is achieved as a result of the pronounced vertical field extension, which leads to a large effective antenna aperture. The gain can be enhanced further by thinning the substrate towards the end, similar to a dielectric rod or wedge antenna [11]. In the horizontal plane, the effective aperture can be extended by using a larger lens for higher gain and narrower beam width. Slab Mode Propagation A simple planar dielectric lens on top of the grounded slab guide is capable of collimating the beam in the horizontal plane. This effect can be explained by a local increase in the substrate thickness, which in turn decreases the phase velocity of the guided wave in those zones. The cylindrical phase fronts of the launched surface wave are flattened in this way to augment the antenna gain. A simple but efficient method available to determine the phase constant in a layered slab guide is the transverse resonance technique [12,13]. Its distinct advantage is that boundary conditions at dielectric interfaces can easily be handled as junctions of different transmission lines. The phase constants β of the guided modes are obtained by finding the roots of a single characteristic equation. Figure 4 shows the cross-section of the grounded slab guide, including a low-permittivity insulation film; its transverse resonance representation is shown on the right-hand side. The free-space wave number $k_0 = 2\pi f / c_0$ (1) and the transverse wave number $k_{y,i} = \sqrt{\varepsilon_{r,i} k_0^2 - \beta^2}$ (2) describe the fields in the i-th dielectric layer (Cartesian coordinates and uniformity in the x-direction). Variable f denotes the frequency, c0 the speed of light, and εr,i the relative permittivity of the respective layer. The modal fields in a waveguide form standing waves in the transverse plane, and therefore the structure can be modeled as a resonant transmission line circuit obeying the following resonance condition:

$$Z_{in}^{+}(y) + Z_{in}^{-}(y) = 0, \qquad (3)$$

where $Z_{in}^{+}(y)$ and $Z_{in}^{-}(y)$ are the input impedances seen at a point y on the resonant transmission line when looking towards the positive or negative y-direction, respectively (Figure 4). The resonance condition (3) is evaluated by means of the well-known transmission line impedance equation [13], with layer "3", the air layer, approximated as infinite. Applying the substitutions (1), (2), and (5) then results in the final characteristic equation (7), which allows the calculation of β by a root-finding algorithm for a given frequency. Several roots may exist, corresponding to the fundamental and higher-order TM modes. Equation (7) can easily be extended to more than three dielectric layers, e.g. if the collimating lens and the substrate consist of different materials or if the effects of a potential air gap should be analyzed. The transverse resonance technique described above was used to compute the normalized phase constant of the fundamental TM0 mode for the case that substrate and lens are made from Duroid 6010LM high-frequency laminate (Figure 5). In the region with the superimposed lens, corresponding to an increased substrate thickness if the lens is of the same material, the guided wavelength λg = 2π/β is shorter than on the substrate alone. The resulting phase equalization is responsible for the collimating effect.
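The transverse resonance result can be illustrated numerically. The sketch below is a simplification assumed for illustration: it solves the standard textbook dispersion relation of a single-layer grounded dielectric slab rather than the paper's three-layer characteristic equation (7), but the collimation mechanism (a thicker slab gives a larger β/k0 and hence a shorter guided wavelength) shows up the same way.

# Illustrative sketch, not the paper's three-layer equation (7): TM0 surface-wave
# phase constant beta of a simple grounded dielectric slab (single layer, no
# insulation film), from the standard relation k_c*tan(k_c*d) = eps_r*h, with
# k_c^2 = eps_r*k0^2 - beta^2 (slab) and h^2 = beta^2 - k0^2 (air decay).
import numpy as np
from scipy.optimize import brentq

def tm0_beta(f, eps_r, d):
    c0 = 299792458.0
    k0 = 2 * np.pi * f / c0
    # The TM0 root lies at k_c*d < pi/2 with real h; bracket accordingly.
    upper = min(np.pi / (2 * d), np.sqrt(eps_r - 1) * k0)
    g = lambda kc: kc * np.tan(kc * d) - eps_r * np.sqrt(
        max((eps_r - 1) * k0**2 - kc**2, 0.0))
    kc = brentq(g, 1e-9, upper * (1 - 1e-9))
    return np.sqrt(eps_r * k0**2 - kc**2)   # phase constant beta (rad/m)

f, eps_r, d = 60e9, 10.2, 0.508e-3          # Duroid 6010LM values quoted in the text
k0 = 2 * np.pi * f / 299792458.0
# Note: d is indeed about one third of a wavelength in the dielectric,
# lambda_d/3 = c0/(f*sqrt(eps_r))/3 ~ 0.52 mm.
print("beta/k0, slab only       :", round(tm0_beta(f, eps_r, d) / k0, 3))
print("beta/k0, slab + lens (2d):", round(tm0_beta(f, eps_r, 2 * d) / k0, 3))
# The thicker (lens-covered) region has the larger beta/k0, i.e. a shorter
# guided wavelength: the phase-equalizing, collimating effect described above.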
It is observed that the sensitivity to a small air gap between the lens and the substrate is low, so that certain surface irregularities can be tolerated. Once those parameters are known, the shape and size of a suitable lens can be determined. Planar lens design is covered elsewhere, e.g. [14,15]. Well-known numerical optimization techniques can be applied to the lens design to obtain the desired antenna characteristics (e.g. maximum gain or minimum side lobes). Simple circular lenses are used in the following examples of medium-gain slab mode antennas. Prototype at 12 GHz A prototype of the antenna described in the previous section was designed and realized in the X-band at around 12 GHz (Figure 6). Its dimensions are listed in Table 1. The curved ground body of the antenna was milled out of a brass block. It also includes the back-short vias of the slab mode launcher and spacers, which form a thin air gap instead of the previously mentioned insulation film. A Duroid 6010LM substrate with notches for the vias was fixed thereon. The CPW line and the patch resonator of the launcher were fabricated in printed-circuit manner. The patch and the vias were conductively connected with silver epoxy, and a standard SMA connector was mounted as an interface to the measurement equipment. The circular lens was cut out of another Duroid substrate and was fixed with adhesive tape on the grounded slab section of the antenna. The input reflection coefficient was measured by means of a network analyzer at the SMA connector. It is compared to simulation results obtained with the 3D time-domain solver of CST Microwave Studio (Figure 7). Some degradation is caused by the SMA connector, which was not taken into account in the simulation. In addition, a degradation of the input matching and a shift to lower frequencies may be caused by fabrication tolerances, because the alignment of the milled metal body with the substrate was not absolutely precise. Nevertheless, the obtained bandwidths for > 10 dB return loss are about the same. Simulated and measured patterns agree well. The half-power beam width is 33° in the H-plane and 36° in the E-plane. The measured gain of 12.0 dBi is slightly lower than the simulated value of 12.6 dBi at an overall measurement uncertainty of ±0.6 dB. Increased side lobes are observed in the E-plane as a consequence of the superposition of parasitic radiation by the slab mode launcher and the intended beam. If desired, this side effect can be reduced significantly by use of absorber paint or resistive shielding in a later housing. The fraction of power which is radiated by the slot dipole to free space instead of being coupled to the surface wave can be approximated by $\varepsilon_{r,S}^{-3/2}$ [6,16], where εr,S is the relative permittivity of the substrate. As a consequence, only high-permittivity substrates should be used together with this kind of slab mode launcher, since otherwise too much power is radiated in an uncontrolled manner. The slab mode antenna was built for operation at 12 GHz to prove the operating principle and to confirm the simulated antenna characteristics. It was not optimized for compactness. The advantages of this type of antenna, in particular the high radiation efficiency, become apparent only at higher frequencies, where conductor losses are generally more dominant. For this reason, an example antenna design was studied theoretically at 60 GHz. Results are presented in the following section.
60-GHz Example Antenna Another slab mode antenna was designed for operation in the unlicensed 60-GHz band (Figure 9). In this case, the design parameters were (for a description, see Table 1): a = 18 mm; b = 10 mm; d = 0.508 mm; h = 0.05 mm; t = 0.254 mm; D = 8 mm; r = 8 mm; l = 7 mm. The air gap of the previous prototype antenna was replaced by a PTFE insulation film (εr = 2.08; tanδ = 0.001). Again, Duroid 6010LM laminate was used for the slab and the lens, having a loss tangent tanδ = 0.003. All conductors were modeled with a specific conductivity σ = 2.0·10⁷ S/m. Good input return loss > 15 dB was achieved over an operating bandwidth of more than 20 % (Figure 10). This large bandwidth easily covers the entire unlicensed 60-GHz band. At 60 GHz, the H-plane half-power beam width is 22°, but it can be customized via the size of the lens. A very wide beam width is obtained if no collimating lens is used at all, which results in lower gain but wider coverage. For this case, the antenna size can be made very compact. The E-plane half-power beam width is 37°. It can be altered by changing the length and the taper of the substrate cantilever overlapping the ground body, similar to a dielectric rod or wedge antenna [11]. Based on the given material specifications, the radiation efficiency of the 60-GHz antenna design is better than 80 %. In a second trial simulation, all metallic parts were modeled as perfect electric conductors, i.e. only dielectric losses were taken into account. In this test, the radiation efficiency rises by only 3 % compared to the lossy-conductor case. This confirms the very low conductor loss of the slab mode antenna and suggests its use for even higher frequencies. Measurement results are not available due to the limited micro-fabrication capabilities in the university laboratory and because of the great difficulties with connecting the mm-wave measurement equipment for radiation pattern measurements in an anechoic chamber. Nevertheless, the presented simulation results are deemed accurate because simulation and measurement at 12 GHz also agreed well. As in [15], beam scanning is feasible in the horizontal plane by arranging several switched slab mode launchers around the collimating lens. Conclusion The developed dielectric slab mode antenna is most suitable for applications at the upper end of the microwave range or in the mm-wave range. This is because of its low-loss dielectric construction, its wide operating bandwidth, and its customizable gain in a range of about 10 dBi to 20 dBi. Other beneficial features include the use of a planar substrate to promote low-cost manufacturing and the compatibility with MMICs, which can be mounted in flip-chip manner for lowest interconnection loss at very high frequencies. The design relies on high-permittivity substrates with εr > 8, unless a wave launcher is developed which can deal with lower permittivity. Most other integrated antennas perform poorly on high-permittivity substrates due to field-trapping effects, so the slab mode antenna provides an alternative. It is furthermore compatible with relatively thick substrates, which are easier and therefore cheaper to handle during production. Radiation occurs in the end-fire direction, i.e. parallel to the substrate plane, which may be desired in many cases of product integration. There are very few antenna designs having the same characteristic.
Due to its simple construction and very low conductor loss, the dielectric slab mode antenna can be a practicable option up to terahertz frequencies.
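The introduction's remark about significant free-space loss at mm-wave frequencies, which is what makes the antenna's radiation efficiency so important, is easy to quantify with the standard free-space path-loss formula (illustrative numbers only, not from the paper):

# Free-space path loss, FSPL = 20*log10(4*pi*d*f/c): why radiation efficiency
# matters so much at mm-wave frequencies (illustrative numbers).
from math import pi, log10

def fspl_db(d_m, f_hz, c=299792458.0):
    return 20 * log10(4 * pi * d_m * f_hz / c)

print(f"{fspl_db(10, 60e9):.1f} dB at 60 GHz over 10 m")    # ~88 dB
print(f"{fspl_db(10, 2.4e9):.1f} dB at 2.4 GHz over 10 m")  # ~60 dB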
3,962
2013-10-01T00:00:00.000
[ "Engineering", "Physics" ]
Complex dynamics, hidden attractors and continuous approximation of a fractional-order hyperchaotic PWC system In this paper, a continuous approximation for studying a class of PWC systems of fractional order is presented. Some known results of set-valued analysis and differential inclusions are utilized. The example of a hyperchaotic PWC system of fractional order is analyzed. It is found that, having no equilibria, the system has hidden attractors. Introduction This paper studies a new class of piecewise continuous (PWC) fractional-order (FO) systems modeled by the following general initial value problem (IVP):

$$D_*^q x(t) = f(x(t)), \quad x(0) = x_0, \tag{1}$$

where the PWC function f : R^n → R^n has the form

$$f(x(t)) = g(x(t)) + A(x(t))\, s(x(t)), \tag{2}$$

in which q ∈ (0, 1), g : R^n → R^n is a vector-valued function, at least continuous, s : R^n → R^n, s(x) = (s1(x1), s2(x2), ..., sn(xn))^T, is a vector-valued piecewise function, with si : R → R, i = 1, 2, ..., n, real piecewise constant functions (e.g., sgn functions), and A an n×n square matrix of real functions. Let M be the discontinuity set. Moreover, let D_*^q denote the Caputo differential operator of order q with starting point 0 [1]:

$$D_*^q x(t) = \frac{1}{\Gamma(1-q)} \int_0^t (t-\tau)^{-q}\, x'(\tau)\, d\tau, \quad q \in (0, 1).$$

One of the reasons to use the Caputo differential operator is that it has a physically meaningful interpretation of the initial conditions, just as in integer-order problems [a unique condition x(0) in the case of q ∈ (0, 1)]. Remark 1 Fractional-order differential equations (FDEs) do not define dynamical systems in the usual sense: denoting the solution of (1) by Φ(t, x0), one does not have the flow property Φ_s ∘ Φ_t = Φ_{t+s} [2]. However, in this paper, through numerical calculation of solutions, the definition of an integer-order dynamical system is adopted, which states that if the underlying IVP admits a solution, the problem defines a dynamical system [3, Definition 2.1.2]. Because the systems modeled by the IVP (1) are autonomous, hereafter, unless otherwise indicated, the time variable will be dropped in writing. For the numerical integration of discontinuous ordinary differential equations (ODEs) of integer order, there exist dedicated numerical methods (see, e.g., the survey [4]), and there are two main strategies for approaching discontinuous systems: one strategy is simply to ignore the discontinuities (time-stepping methods) and to rely on a local error estimator so that the error remains acceptably small; the other strategy is to use a scalar event function h : R^n → R, which defines the discontinuity Σ = {x ∈ R^n | h(x) = 0}, to determine the intersection point as the new starting point for the numerical solution (event-driven methods). The following aspects of discontinuous dynamical systems should be mentioned: a numerical method for discontinuous systems may become inaccurate or inefficient, or both, in a region where discontinuities of the solution or its derivatives occur, and the local truncation error analysis, which forms the basis of most step-size control techniques, fails if there is not sufficient (local) smoothness. Several known numerical methods assume that trajectories either cross the discontinuity surface as they reach it or never reach it at all, but there will always be errors in locating discontinuities. Actually, PWC system models are mostly idealized, since switch-like functions such as sgn are used, in which the hysteresis or delay of the real switching operation is not considered; regularization represents a good approach in these cases.
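For readers who want to experiment numerically, the following is a minimal sketch of one standard discretization of the Caputo operator defined above (the L1 scheme); it is an illustration only and not the method used in the paper, which relies on the ABM scheme discussed below.

# Minimal sketch (not from the paper): the L1 scheme for the Caputo derivative
# of order q in (0,1) with starting point 0,
#   D_*^q x(t_n) ~ h^(-q)/Gamma(2-q) * sum_k b_k * (x_{n-k} - x_{n-k-1}),
# with weights b_k = (k+1)^(1-q) - k^(1-q).
import numpy as np
from math import gamma

def caputo_l1(x, h, q):
    """Caputo derivative at the last grid point of samples x[0..N], step h."""
    n = len(x) - 1
    k = np.arange(n)
    b = (k + 1) ** (1 - q) - k ** (1 - q)   # L1 weights
    dx = np.diff(x)[::-1]                   # increments x_{n-k} - x_{n-k-1}
    return (h ** (-q) / gamma(2 - q)) * np.sum(b * dx)

# Check against the exact result D_*^q t = t^(1-q)/Gamma(2-q):
q, h = 0.98, 0.002
t = np.arange(0, 1 + h, h)
print(caputo_l1(t, h, q), t[-1] ** (1 - q) / gamma(2 - q))  # the two agree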
Although there are numerical methods for FDEs (see, e.g., [5][6][7]) and also for DEs with discontinuous right-hand sides (see, e.g., [8][9][10] or the survey [4]), to the best of our knowledge there are no numerical methods for FDEs with discontinuous right-hand sides. Consequently, modeling the underlying systems continuously or smoothly is of real importance. To approximate the PWC problem (1)-(2), one first has to regularize the right-hand side using, e.g., the Filippov regularization, transforming the discontinuous problem into a set-valued IVP, i.e., an FO ordinary differential inclusion (DI). Then, Cellina's Theorem ensures the existence of continuous approximations (footnote 1). In this way, the obtained continuous FO problem can be numerically integrated using, e.g., the multistep predictor-corrector Adams-Bashforth-Moulton (ABM) scheme for FDEs. Two kinds of continuous approximations are proposed and utilized in this paper: global approximation and local approximation. As an example of a PWC system modeled by (1)-(2), consider a fractional variant, denoted system (3), of a PWC system proposed in [11] for the integer case (footnote 2), where a, b > 0 are real positive parameters. In this paper, let a = 1, let b be the bifurcation parameter and, unless otherwise specified, q = 0.98. Comparing (3) with (1), one identifies the corresponding functions g, A, and s (Fig. 1a). Thus, R^4 is split by the discontinuity surface x1 = 0 into two open half spaces, Ω± = {x ∈ R^4 : ±x1 > 0}.
(Footnote 1: Graphically, by approximation one understands that not all graphic points of the PWC function to be approximated need to be located on the created figure, compared with interpolation, where all graphic points of the PWC function have to be located on the created figure. Footnote 2: See also [12], where several PWC and non-smooth jerk systems are proposed.)
Besides rich dynamics (similar to those presented in [11] for the integer-order case), it will be shown that the FO setting reveals some new behavior. It will be shown numerically that the local approximation of the PWC system by a continuous one is preferable to the classical global sigmoid approximation. Finite-time local Lyapunov exponents are determined, and chaotic as well as hidden hyperchaotic attractors are found. Sliding motion is numerically investigated. Moreover, using existing results on the periodicity of solutions of FDEs, it will be shown that such a system cannot admit stable cycles. The paper is organized as follows. Section 2 presents the continuous approximation of the PWC systems (1). In Sect. 3, the PWC system (3) is approximated. In Sect. 4, the dynamics of the obtained continuous system are numerically investigated. In the Conclusion section, the obtained results are enumerated, while the three appendices present some mathematical notions and known results utilized in the paper.
Regularization
The PWC IVP (1) will be transformed into the following FO DI:

$$D_*^q x(t) \in F(x(t)), \quad x(0) = x_0, \quad \text{for a.a. } t. \tag{4}$$

The set-valued function F : R^n ⇒ 2^{R^n} can be defined in several ways. A simple (convex) expression of F, obtained by the Filippov regularization, is given by

$$F(x) = \bigcap_{\varepsilon > 0} \; \bigcap_{\mu(M) = 0} \overline{\mathrm{conv}}\, f\big(B(x, \varepsilon) \setminus M\big), \tag{5}$$

where conv denotes the convex hull, μ the Lebesgue measure, and ε the radius of the ball B(x, ε) centered at x. At points where f is continuous, F(x) consists of a single point, which coincides with the value of f at this point (i.e., F(x) = {f(x)}). At points belonging to M, F(x) is given by (5) ([13, p. 85]; see also [14]). The set-valued function F, given by (5), is upper semicontinuous (see Definition A.1 in Appendix), with compact and convex values (see also Remark A.1 in Appendix).
Then, applying [2, Theorem 3.2] (see also [15]), one obtains the existence of generalized (Filippov) solutions (see Definition A.2 in Appendix) to the DI (4). Even though, after regularization, the DI (4) admits generalized solutions, single-valued IVPs offer better numerical opportunities; the interest in this paper is therefore to transform the set-valued problem (4) into a single-valued continuous one. In order to justify the use of the Filippov regularization for physical systems, one must choose small values of ε, so that the motion of the physical system is arbitrarily close to a certain solution of the underlying DI (it tends to the solution as ε → 0). However, extremely small values of ε can lead to large values of derivatives and, consequently, to stiff systems. Therefore, as one can see in the following sections, special attention has to be focused on ε. If the piecewise constant functions si are sgn, their set-valued form obtained with the Filippov regularization, denoted Sgn : R ⇒ R, is

$$\mathrm{Sgn}(x) = \begin{cases} \{\operatorname{sgn}(x)\}, & x \neq 0, \\ [-1, 1], & x = 0. \end{cases}$$

Here, the conventional sgn(0) is replaced with the whole interval [−1, 1] connecting the points −1 and +1 (see Fig. 2a, b). By applying the Filippov regularization to the function f defined by (2), one arrives at the following problem:

$$D_*^q x(t) \in g(x(t)) + A(x(t))\, S(x(t)), \tag{6}$$

in which S(x) = (S1(x1), S2(x2), ..., Sn(xn))^T (Fig. 2b: after regularization; Fig. 2c: after local continuous approximation for a large ε value in the case of the sgn function).
Continuous approximation
The continuous approximation of the function f defined by (2) is realized next, as described in [16]. For brevity, only the most important steps are presented. Let C^0_ε(R) be the class of real continuous approximations (see Definition A.3 in Appendix) s̃ : R → R of the set-valued function F, whose graphs lie within the ε-neighborhood of the graph of the set-valued function. Here, B(x, ε) is the disk of radius ε centered at x. Figure 2c presents the case of the set-valued function Sgn and its local continuous approximation. The approximations of the set-valued function S by single-valued functions are denoted as introduced below. The set-valued functions Si, i = 1, 2, ..., n, can be approximated owing to the Approximation Theorem of Cellina (Theorem A.1 in Appendix), which states that a set-valued function F with closed graph and convex values (Remark A.1 in Appendix) admits C^0_ε approximations. By a global approximation (GA) of the PWC functions S in (6), denoted s̃, one understands a function defined on the entire axis R, while a local approximation (LA), denoted s̃_ε, is a function defined on some ε-neighborhood [−ε, ε] of the discontinuity, with ε a small positive number [16].
Theorem 2 [16] The PWC system (1)-(2), with g continuous, can be locally or globally continuously approximated by the problem

$$D_*^q x(t) = \tilde{f}(x(t)), \quad x(0) = x_0,$$

where f̃ is either a local or a global approximation of f. If g is smooth, then the approximation is smooth.
Global approximation
Any single-valued function on R with graph located in the ε-neighborhood can be considered a global approximation of S by Cellina's Theorem (see the sketch in Fig. 2c, or the Weierstrass Theorem A.2). However, some of the best candidates for s̃ are the sigmoid functions, which provide the required flexibility and with which the abruptness of the discontinuity can easily be smoothened. If S(x) = Sgn(x), one of the most utilized approximations is the following sigmoid-type function s̃gn (7) (footnote 3: Sigmoid functions include the ordinary arctangent such as (2/π) arctan(x/ε), the hyperbolic tangent used especially in modeling neural networks (see, e.g., [17]), the logistic function, algebraic functions like x/√(δ + x²), and so on),
where δ is a positive parameter which controls the slope of the curve in the neighborhood of the discontinuity x = 0. In Fig. 3a, s̃gn is plotted for three distinct values of δ. The smallest ε values necessary to embed s̃gn within an ε-neighborhood of Sgn (as required by Cellina's Theorem) depend proportionally on δ. Note that for x ≠ 0, s̃gn coincides with the single-valued branches of Sgn (the horizontal lines ±1) only asymptotically, as x → ±∞. For example, for δ = 0.01, at the point x = 0.06 the difference is of order 10⁻³, even though the two graphs look apparently identical at the underlying points A or B (Fig. 3b). To reduce the ε value to, e.g., 10⁻⁴, δ should be of order 10⁻⁵. On the other hand, as one can see in Sect. 3, lower values of δ do not necessarily imply substantially reduced values of ε. In order not to affect the physical characteristics of the underlying system significantly, it is desirable to approximate S only on some tight ε-neighborhood of the discontinuity x = 0, not on the entire real axis, since the difference between S and s̃ persists along the entire real axis R.
Figure 3. (a) Graph of the sigmoid function s̃gn (7) for three values of δ. (b) Overplots of the set-valued function Sgn and its approximating function s̃gn; the detail reveals the difference between the two curves. The ε-neighborhood (see Fig. 2c) is not shown here.
Local approximation
A better approximation is an LA, s̃_ε : [−ε, ε] → R, where s̃_ε is some continuous function satisfying the gluing conditions

$$\tilde{s}_\varepsilon(\pm\varepsilon) = \pm 1 \tag{8}$$

(for the case of Sgn). Obviously, s̃_ε and s̃ are both C^0_ε functions. If g is continuous, then for every ε > 0 there exists an LA of f, f̃_ε : R^n → R^n [16] (see Fig. 4a). The approximation s̃_ε can also be continuously extended on R, yielding a GA, s̃. Among the simplest functions s̃_ε, which compare favorably with the exponential function in the GA (7), are the cubic polynomials (called cubic splines) s̃_ε : R → R.
Remark 2 Polynomials have the advantage that they can be evaluated directly by computers with the four arithmetic operations of addition, subtraction, multiplication, and division.
By imposing, besides the gluing conditions (8), the supplementary differentiability conditions s̃_ε'(±ε) = 0 at the boundary of the discontinuity neighborhood (footnote 4: since the si are piecewise constant functions, they are differentiable for x ≠ 0), one obtains, for the particular case of the Sgn function, the smooth LA function, denoted by sgn_ε, as the cubic uniquely determined by these conditions:

$$\mathrm{sgn}_\varepsilon(x) = -\frac{x^{3}}{2\varepsilon^{3}} + \frac{3x}{2\varepsilon}, \qquad x \in [-\varepsilon, \varepsilon]. \tag{9}$$

Using (9), Sgn can be continuously approximated on R by the following piecewise function (see Fig. 4b):

$$\widetilde{\mathrm{sgn}}(x) = \begin{cases} -1, & x < -\varepsilon, \\ \mathrm{sgn}_\varepsilon(x), & |x| \le \varepsilon, \\ 1, & x > \varepsilon. \end{cases} \tag{10}$$

Remark 3 Both the GA (7) and the LA (10) are also smooth approximations.
Concluding, in order to obtain numerical solutions to (4), the simplest way is to replace the discontinuous problem with a continuous (smooth) one, using either a local or a global approximation (see the sketch in Fig. 5).
Approximations and numerical integration of the investigated FO PWC system
Following the way the problem was transformed into a DI, the PWC problem (3) can be transformed into a set-valued problem (11); note that only the second equation is a DI. For x4 = 0, the underlying set-valued function is presented in Fig. 1b. By applying Theorem 2, GA via relation (7) leads to the continuous problem (12), while LA gives the problem (13).
Remark 4 Although both GA and LA are smooth, the approximating function f̃ is only continuous, due to the modulus (third equation). However, the existence of solutions is not affected by this non-smoothness (see, e.g., [18] for FO DIs).
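For illustration, the snippet below contrasts the two kinds of approximation. It is a sketch: tanh(x/δ) is used as a stand-in for the generic sigmoid (7), whose exact form is a modeling choice, while the cubic is the LA (9)-(10) determined by the gluing and smoothness conditions above.

# Sketch of the two sgn approximations. tanh(x/delta) stands in for the generic
# sigmoid GA (7); the cubic is the smooth LA uniquely determined by
# sgn_eps(+-eps) = +-1 and sgn_eps'(+-eps) = 0, extended as in (10).
import numpy as np

def sgn_sigmoid(x, delta=1e-2):
    return np.tanh(x / delta)              # global approximation on all of R

def sgn_cubic(x, eps=1e-2):
    inside = -x**3 / (2 * eps**3) + 3 * x / (2 * eps)
    return np.where(np.abs(x) <= eps, inside, np.sign(x))   # extension (10)

x = np.linspace(-0.05, 0.05, 11)
print(np.round(sgn_cubic(x, 1e-2), 4))
print(np.round(sgn_sigmoid(x, 1e-2), 4))
# Unlike the sigmoid, the cubic LA equals exactly +-1 outside [-eps, eps], so
# the right-hand side of the system is modified only in the eps-neighborhood
# of the discontinuity, as advocated in the text.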
The graph of the approximating surface of the second component of the right-hand side of the function F, with the GA for a large value of δ, δ = 1, is presented in Fig. 1c. As can be seen, no significant graphical differences appear between the two approximations. The numerical integration of the approximated system is performed in this paper with the predictor-corrector multi-step ABM method [5], implemented in the MATLAB subroutine FDE12.m [19] with default tolerance 1E−6, for fixed q = 0.98 and, unless otherwise specified, integration step size h = 0.002. The following numerical analysis is performed for b = 1.25 and q = 0.98 with default double precision (roughly 15-16 decimal digits), sufficient for the current purpose. To compare the approximation results, Fig. 6 shows the time series of component x1 for both the GA and the LA (with δ = 1E−6 and ε = 1E−6, respectively), overplotted together with the time series obtained without approximation (WA), i.e., by applying the ABM method to the non-approximated system. The tests have been repeated with different initial conditions and time-step sizes. The results are summarized as follows:

- All the results (for GA, LA and WA) are accurate for all t ∈ [0, t1*), with t1* ≈ 74.95. Thus, for t ∈ [0, t1*), the underlying numerical solutions coincide, the differences between them being 0;
- The GA "escapes" from this coincidence early, near t1* (Fig. 6b), while the LA and WA solutions continue to coincide until t2* ≈ 75.60;
- For decreasing values of δ and ε, 1E−10 < δ < 1E−6 and 1E−8 < ε < 1E−6, the upper time limit slightly increases for both approximations (t*_max ≈ 76);
- For the LA, with step size 0.002 and ε < 1E−8, the ε-neighborhood (necessary for the LA algorithm) can no longer be identified;
- For δ < 1E−10 in the GA, s̃gn is numerically identical to sgn; there is then no approximation at all, and the code works as FDE12 applied to the original WA system;
- With a smaller FDE12 step size, no significant accuracy increase is obtained.⁵ Therefore, up to t ≈ t1*, both approximations are time-step independent (invariant), in the sense that the error between the two computed solutions, starting from the same initial conditions, remains zero, and both are also close to WA;
- The GA takes a longer computational time, despite its simpler form, because it is evaluated over the entire axis x1, whereas the LA is evaluated only on [−ε, ε] (see also Remark 2).

⁵ The predictor-corrector ABM method has an error roughly proportional to h². Thus, to obtain an error of, e.g., 1.0E−6, which is sufficient in applications, a step size close to h = 1.0E−3 must be considered.

Note that, for a smaller step size (e.g., h = 0.0002), on the crossing and sliding surface (i.e., x1 = 0; see Sect. 4.1) the two approximations are identical to each other and also to the WA trajectory (Fig. 6c). Summarizing, by comparing the LA and GA with each other and with WA, it can be concluded that the LA is better suited than the GA for use with the ABM method for FDEs. It is important to note that FDE12.m can even be applied directly to PWC problems, although the routine was not designed for this kind of problem.

Remark 5 The above results are in concordance with those obtained in [20-22].
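For readers without MATLAB, the following minimal Python sketch implements the classical Diethelm-Ford-Freed ABM predictor-corrector for a Caputo system D^q x = f(t, x) with 0 < q < 1. It is a simplified sketch, not FDE12.m [19], which adds error control and an efficient treatment of the memory term; all names are ours.

```python
import numpy as np
from math import gamma

def fde_abm(f, x0, q, h, n_steps):
    """Adams-Bashforth-Moulton predictor-corrector for the Caputo system
    D^q x = f(t, x), x(0) = x0, 0 < q < 1 (Diethelm-Ford-Freed scheme)."""
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    t = h * np.arange(n_steps + 1)
    x = np.zeros((n_steps + 1, x0.size)); x[0] = x0
    fh = np.zeros_like(x); fh[0] = f(t[0], x0)
    c1 = h**q / gamma(q + 1)          # predictor weight scale
    c2 = h**q / gamma(q + 2)          # corrector weight scale
    for n in range(n_steps):
        j = np.arange(n + 1)
        # predictor (fractional Adams-Bashforth) weights
        b = (n + 1 - j)**q - (n - j)**q
        xp = x0 + c1 * (b[:, None] * fh[:n + 1]).sum(axis=0)
        # corrector (fractional Adams-Moulton) weights
        a = np.empty(n + 1)
        a[0] = n**(q + 1) - (n - q) * (n + 1)**q
        k = np.arange(1, n + 1)
        a[1:] = ((n - k + 2)**(q + 1) + (n - k)**(q + 1)
                 - 2 * (n - k + 1)**(q + 1))
        x[n + 1] = x0 + c2 * (f(t[n + 1], xp)
                              + (a[:, None] * fh[:n + 1]).sum(axis=0))
        fh[n + 1] = f(t[n + 1], x[n + 1])
    return t, x

# usage, e.g.: t, x = fde_abm(lambda t, x: -x, [1.0], 0.98, 0.002, 1000)
```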
Beyond the numerical artifacts that might occur when numerically integrating a system of DEs, notions such as "shadowing time" and "maximally effective computational time" reveal that reliable numerical simulations of some chaotic systems are possible only on a finite time interval (see, e.g., [20-22]). For example, in the classical Lorenz system, obtaining a precise solution for, e.g., t ∈ [0, 100] represents a real challenge. The case of FO systems is therefore even more delicate. On the other hand, larger intervals must be considered, so that possible phenomena such as transient behaviors can be studied. In this paper, after intensive numerical tests, it is concluded that the obtained numerical results are acceptable even on relatively large intervals, as large as [0, 800].

Dynamics of the investigated FO PWC system
Next, because of the above-mentioned advantages, the LA will be utilized.

Sliding motion
In order to check that crossing, sliding and grazing phenomena exist in system (3), the system is first considered without approximation. Denote the open half spaces by Ω± = {x ∈ R^4 : ±x1 > 0}. For x0 ∈ Ω±, the switching time t_s is given by the following equation (see Appendix B): Since φ±(t) are analytic functions, they may only have isolated zeros. Consequently, crossing, sliding and grazing at x(t_s) are possible, and they are determined by the sign behavior of the functions φ+(t) and φ−(t) near t_s. However, one cannot use the local behavior of the function f given by (2). Therefore, one does not know whether the solution really slides on x1 = 0, apart from what the numerical results suggest. Moreover, one does not know how the solution crosses the discontinuity surface; when a possible solution crosses the discontinuity surface at some points, there is no theoretical proof for the existence of a solution. Therefore, a graphical approach is adopted to study the trajectories of the continuously approximated system and to see what happens near the discontinuity surface x1 = 0. As can be observed from Fig. 7a, b, d, where the case b = 2.2 is considered, the trajectory seems to slide along the plane x1 = 0. However, the tubular representations shown in Fig. 7 (especially the detail in Fig. 7d) indicate that the trajectory actually crosses the plane x1 = 0 several (but a finite number of) times. It is remarked that this phenomenon happens for other values of b too. Note that, after approximation, the system still remains in class C^0 (see Remark 4). Therefore, besides the "smoothed" corners caused by the approximated discontinuity, which appear along the plane x1 = 0,⁶ the trajectories have some other corners, due to the modulus |x1| (see the 3D phase projections in Fig. 7a, c, d). An interesting behavior has been found, especially when the initial conditions are situated relatively far from the origin of the phase space: the trajectories scroll toward a direction parallel to the axis x3 in the 3D phase projection (x1, x2, x3), or to some axis parallel to the axis x1 in the plane x3 = 0 in the 3D phase projection (x1, x3, x4) (see Fig. 7b). This also happens, in the plane projection (x4, x3), for the hidden hyperchaotic attractor corresponding to b = 0.5 in Fig. 4d.

[Fig. 7 caption fragment: 3D phase projections, including (x2, x3, x4), and zoomed view; possible sliding phenomena can be observed in a, b, d.]
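Whether a computed trajectory slides along or repeatedly crosses x1 = 0 can also be checked directly from the numerical solution. The sketch below is a simple post-processing step of our own, not taken from the paper, that locates sign changes of the component x1.

```python
import numpy as np

def crossing_times(t, x1):
    """Return linearly interpolated times at which the trajectory
    component x1 changes sign, i.e., candidate crossings of the
    discontinuity surface x1 = 0."""
    t, x1 = np.asarray(t, float), np.asarray(x1, float)
    i = np.where(np.sign(x1[:-1]) * np.sign(x1[1:]) < 0)[0]
    # interpolate the zero between samples i and i+1
    return t[i] - x1[i] * (t[i + 1] - t[i]) / (x1[i + 1] - x1[i])

# A short list of isolated crossing times suggests genuine crossings
# (as the tubular plots in Fig. 7 indicate); a dense cluster of them
# would instead indicate numerical sliding along x1 = 0.
```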
Periodicity
The bifurcation diagram of the approximated system, with bifurcation parameter b (Fig. 8), shows that there are some ranges of b where the system appears to have stable periodic cycles (see also Fig. 9 for b = 2). In reality, the following result on the periodicity of solutions of FDEs should be noted.

Theorem 3 [24,25] (see also [26]) The fractional-order system (1) cannot have any exact non-constant periodic solution.

In [27,28] it is proved that a long-time non-constant periodic solution may have a steady-state behavior, but with −∞ as the lower limit in the Caputo operator. In these cases, the underlying FDEs may have asymptotically T-periodic solutions, for which lim_{t→∞} ||x(t + T) − x(t)|| = 0 for a certain T > 0 (see also [29]). For example, consider the following simple scalar linear FDE (15), using the Caputo derivative with the lower limit at −∞: where α, β, γ, Ω ∈ R and [2] A periodic solution of (15) is (see the proof in Appendix C): On the other hand, the scalar linear FDE using the Caputo derivative with the lower limit at 0 (D^q_*) has no periodic solutions (Theorem 3). However, continuous dynamical systems of FO, modeled by Caputo's derivative D^q_*, can have non-constant periodic trajectories if the system variables are impulsed periodically (impulsive fractional-order systems) [30]. Also, the torus corresponding to b > 2.1 in the bifurcation diagram in Fig. 8b could contain numerically periodic oscillations (see Fig. 10). As remarked in the previous section, the tori also present scrolls before they are reached by the system trajectories (Fig. 10d).

[Figure caption fragment: The discontinuity plane x1 = 0 reveals the corners, typical of continuous non-smooth systems, and also the sliding phenomena along the plane x1 = 0.]

Hidden chaotic and hyperchaotic attractors
From a computational point of view, it is natural to suggest the following classification of attractors, based on the complexity of finding basins of attraction in the phase space [32-35]: an attractor is called a self-excited attractor if its basin of attraction intersects any open neighborhood of a stationary state (equilibrium); otherwise, it is called a hidden attractor. For a hidden attractor, chaotic or not, the basin of attraction is not connected with equilibria. Hidden attractors can be attractors in systems without equilibria (the case of our system (3), see [36]) or in systems with only one stable equilibrium [37]. Hidden hyperchaotic attractors have been reported, e.g., in [38-40], and for an FO system, e.g., in [41]. In order to numerically find finite-time local Lyapunov exponents (LEs), one can locally approximate the non-smoothness caused by the modulus function by the quadratic polynomial p(x) = x²/(2ε) + ε/2 within the neighborhood [−ε, ε]. In general, sustained chaos is numerically indistinguishable from transient chaos, which can persist for a long time (see [17,42]). For example, for q = 0.9725 and b = 1.77, because one of the LEs {0.0058, −0.0000, −0.0042, −0.0447} (measured with a precision of 1E−5 from the initial conditions (1, 2, 0.0, 0.1)) is 0, the system first evolves for a relatively long time (t ∈ [0, t*], with t* ≈ 220) with hidden transient chaos, after which the trajectory is attracted by a hidden torus which, as discussed in Sect. 4.2, is characterized by a numerically periodic trajectory (see the phase projections (x1, x2, x3) and time series in Fig. 11a, b).
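The quadratic smoothing of the modulus used above for the LE computation is easy to state explicitly; a minimal sketch (our wrapper name, same formula as in the text):

```python
import numpy as np

def abs_smooth(x, eps=1e-6):
    """C^1 smoothing of |x| for finite-time LE computations:
    p(x) = x^2/(2*eps) + eps/2 on [-eps, eps], which matches |x| in
    both value and slope at x = +/-eps."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= eps,
                    x * x / (2 * eps) + eps / 2,
                    np.abs(x))
```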
On the other hand, for q = 0.936 and b = 1.19, the spectrum of the LEs (with the same initial conditions and step size) is Λ = {0.0034, 0.0022, 0.0000, −0.0592}, which means that the existing transient is hidden hyperchaotic, since two LEs are positive (see the phase projections (x1, x2, x3) and the time series in Fig. 11c, d). Because, as indicated by the bifurcation diagram (Fig. 8), the system presents multistability, to obtain different hidden hyperchaotic attractors one needs to choose the initial points in their respective basins of attraction (see, e.g., the case presented in Fig. 10c, where the initial conditions are (1, 2, 0, 1) and (11, −1, 0, 0.1)).

Conclusion
In this paper, it has been shown numerically that local approximations of the discontinuities in systems (1)-(2) are more useful than global approximations. Therefore, the considered system has been locally, continuously and smoothly approximated, and its dynamics have been analyzed numerically. Having no equilibria, the system admits only hidden attractors. It has been found for which values of the fractional order q and parameter b the system admits a single positive finite-time Lyapunov exponent, for which the system behaves chaotically, and also when the system admits two positive finite-time local Lyapunov exponents, for which the system behaves hyperchaotically. Finally, it has been shown that the system cannot have exact periodic oscillations; therefore, for correctness, the seeming oscillations are referred to as numerically periodic oscillations. An important and yet challenging future research topic would be to implement the methodology and algorithms by physical means, such as electronic circuits and mechanical structures.

A Basic notions and results
Because the set-valued property of F in (6) is generated by the S_i, which are real functions, the notions and results presented here are stated in R, for the case n = 1, but they are also valid in the general case n > 1. The graph of a set-valued function F is defined as follows:

Remark A.1 Due to the symmetric interpretation of a set-valued function as a graph (see, e.g., [43]), a set-valued function satisfies a property if and only if its graph satisfies it. For instance, a set-valued function is closed or convex if and only if its graph is closed or convex. F is u.s.c. if it is upper semicontinuous at every x0 ∈ R, which means that the graph of F is closed. Generally, a set-valued function admits (infinitely) many approximations.

The last formula of (B.3) gives explicit solutions of (12) on each Ω±, respectively. Here α, β, γ, Ω ∈ R (see [2]). To find a solution of (C.1) in the required form, one can use formulas [28, (24), (26)] to derive it.
Efficient Congo Red Removal Using Porous Cellulose/Gelatin/Sepiolite Gel Beads: Assembly, Characterization, and Adsorption Mechanism Porous sustainable cellulose/gelatin/sepiolite gel beads were fabricated via an efficient ‘hydrophilic assembly–floating droplet’ two-step method to remove Congo red (CR) from wastewater. The beads comprised microcrystalline cellulose and gelatin, forming a dual network framework, and sepiolite, which acted as a functional component to reinforce the network. The as-prepared gel beads were characterized using FTIR, SEM, XRD, and TGA, with the results indicating a highly porous structure that was also thermally stable. A batch adsorption experiment for CR was performed and evaluated as a function of pH, sepiolite addition, contact time, temperature, and initial concentration. The kinetics and isotherm data obtained were in agreement with the pseudo-second-order kinetic model and the Langmuir isotherm, with a maximum monolayer capacity of 279.3 mg·g−1 for CR at 303 K. Moreover, thermodynamic analysis demonstrated the spontaneous and endothermic nature of the dye uptake. Importantly, even when subjected to five regeneration cycles, the gel beads retained 87% of their original adsorption value, suggesting their suitability as an efficient and reusable material for dye wastewater treatments. Introduction Nowadays, wastewaters from dye-using industries, such as clothing, leather, synthesis, and electroplating, pose a major challenge to global society [1]. Of the various types of environmental harm caused by these industries, the aquatic environmental contamination by azo dyes is considered the most serious. Congo red (CR), a popular anionic azo dye, is intrinsically harmful to living organisms [2,3]. Even the presence of very low concentrations of CR in wastewater imparts a color, which blocks light and inhibits the photosynthetic efficiency of aquatic life [4]. Moreover, these effluents are highly toxic and non-biodegradable to humans, fauna, and flora, with some variants being carcinogenic and mutagenic [5]. Considering their complex aromatic structure, thermal stability, and stable chemistry, the treatment of CR-containing effluents before being discharged into any natural resource is critical [6]. Considerable effort has been put into developing techniques to remove CR from effluents, including photolysis [7], coagulation/flocculation [8], adsorption [9], membrane separation [10], and electrochemical processes [11]. Among these methods, adsorption is recognized as a practical solution, owing to its low cost, ease of operation, lack of secondary pollution, and potential for regeneration [12]. A well-known adsorbent for CR removal is activated carbon, which is effective for a range of pollutants, including dyes and pigments. However, the high cost and difficulty associated with its regeneration has limited extensive use [13], and, predictably, focus has shifted to the development of renewable adsorbents based on natural and low-cost materials. Synthesis of MGS Gel Beads First, 100 g SEP was added to 1 L 15% HCl solution, with stirring for 24 h, then washed with distilled water until neutral pH and dried for usage [29]. Then, an aqueous solution of 7 wt% NaOH/12 wt% urea was prepared and precooled to −23 °C. Next, 3 g MCC and 1 g GEL were immediately added to the 100 mL NaOH/urea solution under vigorous stirring for 10 min to obtain a 4 wt% homogeneous MCC/GEL hybrid sol. Then, 1.5 g pretreated SEP was added into the hybrid sol and stirred for 2 h. 
The resulting suspension was added dropwise, using a 5 mL syringe at a dropping rate of 2 mL·min−1, into the HCl solution containing 5 wt% CaCl2 for 12 h to solidify. The well-formed MCC/GEL/SEP (MGS-1.5, according to the mass of SEP) hydrogel beads were filtered and washed in deionized water to remove residual chemicals, and MGS-1.5 gel beads were obtained after freeze-drying. A schematic diagram of the generalized fabrication process is shown in Figure 1. For comparison, MGS gel beads with 0 g, 0.5 g, 1.0 g, and 2.0 g SEP were synthesized using the same process. Characterization of MGS Gel Beads The surface morphology of the MGS gel beads was studied using field emission scanning electron microscopy (FE-SEM, Hitachi S-4800, Tokyo, Japan). The crystalline and chemical structures were recorded using X-ray diffractometry (XRD, Bruker D8, Bremen, Germany) and Fourier transform infrared spectroscopy (FT-IR, Thermo Nicolet 5700, Carlsbad, CA, USA). A thermal analyzer (TG, TA-SDTQ600, Waltham, MA, USA) was used to survey the weight loss of the MGS gel beads from ambient conditions to 600 °C, at a heating rate of 10 °C·min−1 in N2. Density and Porosity Measurement The density of the gel beads was calculated from the weight and volume of the specimen, while the porosity was determined using a liquid displacement method, with absolute ethanol as the solvent [30]. The weighed gel beads were dipped into a petri dish containing absolute ethanol until they approached saturation, after which they were taken out and the weight of the residual ethanol was obtained. The porosity of the beads was then calculated according to the following equation: where W1 is the weight of the beads, W2 represents the sum of the weights of the immersed beads and ethanol, and W3 denotes the weight of the residual ethanol after the beads were removed. Adsorption and Desorption of CR Dye The batch adsorption experiment of CR from aqueous solutions was carried out as follows: 0.1 g MGS gel beads was soaked in 100 mL CR solution of various concentrations and stirred for different durations. The initial pH of the CR solutions was adjusted using dilute HCl or NaOH solutions. UV-visible spectroscopy (UV-vis, TU-1810, Purkinje General, Beijing, China) was employed to analyze the concentrations of CR at 497 nm, before and after adsorption. The adsorption capacity was calculated according to the following equation: Qt = (Co − Ct)·V/m, where Qt (mg·g−1) is the adsorbed capacity at time t (min), Co (mg·L−1) and Ct (mg·L−1) are the CR concentrations initially and at a given time t (min), respectively, V corresponds to the volume of the CR solution (L), and m denotes the weight of the MGS gel beads (g). The adsorption data at different time intervals were used to fit the kinetic curves of CR adsorption, and the isotherm parameters were obtained by comparing the adsorption quantities under different initial concentrations.
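As a concrete illustration of the two working equations in this section, the sketch below computes the adsorption capacity and the liquid-displacement porosity. The Qt formula follows directly from the variable definitions in the text; the porosity formula is the one commonly used with the three weights W1, W2, W3, an assumption on our part, since the equation itself is not reproduced above.

```python
def adsorption_capacity(c0, ct, volume_l, mass_g):
    """Qt = (C0 - Ct) * V / m, in mg per g of beads."""
    return (c0 - ct) * volume_l / mass_g

def porosity(w1, w2, w3):
    """Liquid-displacement porosity (assumed formula). w1: dry bead
    weight; w2: immersed beads + ethanol; w3: residual ethanol after
    the saturated beads are removed."""
    return (w2 - w3 - w1) / (w2 - w3)

# e.g., 0.1 g of beads in 0.1 L of 500 mg/L CR reduced to 295 mg/L:
# adsorption_capacity(500, 295, 0.1, 0.1) -> 205 mg/g
```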
For regeneration, the MGS gel beads were desorbed after adsorption using a 0.05 mol·L−1 NaOH eluent, with stirring at 303 K for 4 h. After complete elution, the MGS gel beads were washed and dried to obtain the regenerated beads. To evaluate the reusability of the regenerated MGS gel beads, five such sequential adsorption-desorption cycles were carried out. Fabrication Principle and Strategy of MGS Gel Beads As displayed in Figure 1, an efficient 'hydrophilic assembly-floating droplet' two-step method and a possible assembly mechanism are proposed. First, MCC and GEL chains worked as framework materials to assemble a dual network. When embedding SEP, with its abundant silicon hydroxyl groups, an effective 'hydrophilic assembly' occurs, driven by the hydrophilic groups (silicon hydroxyl groups) of the SEP sheets and the active sites (oxygen-containing groups) of the dual network structure via electrostatic interaction and hydrogen bonding, leading to an interpenetrating network structure [19,31]. Then, a 'floating droplet' technology was employed to obtain homogeneous porous MGS beads via sol-gel conversion induced by a CaCl2/HCl coagulating bath. Hence, in the assembly process, the SEP sheets acted like a 'crosslinker' to connect the dual network and construct uniform and robust beads with good resistance to deformation. Furthermore, SEP sheets with abundant silicon hydroxyl groups are expected to improve the specific surface area and adsorption capacity of the MGS beads. Chemical Analysis of MGS Gel Beads FTIR spectroscopy was performed on the MCC, GEL, SEP, and MGS beads to understand their chemical structure, and the respective spectra are displayed in Figure 2. As can be seen in Figure 2, the bands of MCC at 3417 and 1637 cm−1 represent O-H stretching and bending vibrations, respectively [32]. The peak at 2902 cm−1 is assigned to the C-H stretching vibration [33]. In the spectrum of GEL, a broad band in the range of 3600-3100 cm−1 was attributed to N-H and O-H stretching vibrations [34,35], while those at 1637 and 1560 cm−1 are designated as C=O stretching (amide I) and N-H bending (amide II) vibrations [36]. The characteristic peak at 3620 cm−1 for SEP corresponds to the stretching vibration of the Mg-OH group in the octahedral layers, and the absorbances at 3424 and 1637 cm−1 were assigned to the vibrations of zeolitic water or structurally bound water. Next, the bands at 915 and 1090 cm−1 were attributed to the asymmetrical vibration of Si-O, while the band at 1035 cm−1 corresponds to the Si-O-Si plane vibration [28,37]. As for the MGS beads, peaks at 1090, 1035, and 916 cm−1 appeared in the spectrum of MGS, indicating the involvement of SEP in the hybrid gel beads. Moreover, the bands assigned to O-H stretching and bending vibrations shifted to 3425 and 1636 cm−1, respectively. The stretching vibration peak of C-H at 2800-3000 cm−1 decreased, indicating that C-H may participate in the cross-linking reaction [18]. The Mg-OH stretching vibration at 3620 cm−1, assigned to the free silanol groups located on the external surface of SEP, disappeared in the spectrum of the MGS bead. A similar phenomenon was observed in related hybrid systems and was correlated with the presence of hydrogen-bonding interactions between the silanol groups of SEP and hydrophilic matrices [38-40]. Such favorable interactions are responsible for the good dispersion of the SEP, MCC, and GEL molecules, generating porous hybrid beads with an improved performance and adsorption capacity.
Structural Characterization of MGS Gel Beads To gain an insight into the structural characteristics of the MGS beads, the morphology of MGS beads with various SEP additions was studied using field emission scanning electron microscopy (FE-SEM). As displayed in Figure 3, all beads showed a diameter of ca. 3 mm, and the MGS-0 bead (Figure 3a,b) exhibited few pores, with a relatively smooth surface. With the addition of SEP (Figure 3c-h), a highly porous structure was observed, which is crucial to facilitating dye adsorption. Specifically, the incorporation of SEP into the dual network provoked a change in the interconnected network, reflecting a layered porous network, as confirmed by Figure 3c-h.
Furthermore, the surface of the layered structure became progressively rougher, which is the typical morphology of SEP sheets and led to an improved specific surface area of the MGS beads. When the SEP content increased further to 2.0 g (Figure 3i,j), the pore size became smaller, and the layered structure tended to accumulate and collapse, which was due to the fragmentation of the bead structure caused by the impaired load transfer of excessive SEP through the dual network. The variation of porosity and density with the addition of SEP is shown in Figure 4, and the MGS beads were found to be lightweight and full of pores. The density of the beads varied from 0.075 to 0.158 g·cm−3, showing an almost linear increase with respect to the addition of SEP. Unexpectedly, the porosity was enhanced with increasing SEP content initially, followed by a decrease, with the maximum porosity occurring at a SEP content of 1.5 g. This variation pattern is consistent with the analysis of the SEM photos, which further demonstrates that changes in the assembled structure of the MGS beads can be induced by SEP. Taking into consideration the nature of the density and porosity changes, 1.5 g SEP was chosen as the most suitable for this study. XRD Analysis of MGS Gel Beads The XRD patterns of the samples are presented in Figure 5, where MCC exhibits the characteristic peaks of the cellulose I crystal structure [41]. The XRD pattern of GEL shows a broad peak at 2θ ~ 20.20°, indicating the slightly amorphous nature of GEL [42]. As for SEP, the peaks at 2θ of 6.30° and 27.15° correspond to the 110 and 080 reflections of silicate, respectively [43]. Compared with the MGS-0 beads, some characteristic SEP peaks appeared in the MGS-1.5 bead, indicating the homogenous dispersion of SEP in the bead matrix, due to its sheets being intercalated into the dual network [20,44]. Unexpectedly, no obvious MCC diffraction peaks appeared in the MGS-1.5 beads.
This can be attributed to changes in the crystal structure of MCC, to resemble cellulose II, brought about by dissolution in the alkali/urea system, as confirmed by the peaks at 2θ of 12.10°, 20.05°, and 21.05° in the MGS-0 beads [45]. Thermal Property of MGS Gel Beads As shown in Figure 6, the thermal stability and the extent of degradation of the MGS beads and their components were investigated using TGA and DTG analyses. For SEP, a total mass loss of approximately 8.4% over the temperature range of 30-600 °C was observed, as shown in Figure 6a. The weight residual rate of the MGS-1.5 beads (55.6%) was higher than that of MCC (11.6%), GEL (29.2%), and the MGS-0 beads (45.8%), which can be attributed to the addition of thermally stable SEP (91.6%). Moreover, the increased uniform molecular dispersion and the interfacial interactions between the polar polymer groups and the silicate layers of SEP potentially enhanced the thermal stability of the MGS-1.5 bead.
Regarding the derivative weight loss curves (DTG in Figure 6b), a weak mass loss peak was observed at ~100 °C for SEP, confirming the presence of zeolitic or structurally bound water on the SEP surface. The decomposition of MGS-0 was found to occur at 147 °C and 253 °C, and was associated with the elimination of the physically adsorbed water molecules in the first step. In the second stage, hydroxyl groups within the MGS-0 beads underwent dehydration and part of the glycoside bonds in the beads broke, resulting in rearrangement of the chemical bonds. In contrast with MGS-0, the decomposition temperature of the MGS-1.5 beads was 262 °C, significantly higher than that of the MGS-0 beads. Thus, as expected, the hybrid beads showed a higher heat resistance than the MGS-0 beads, due to the confinement and thermal insulation effect of the inorganic components of SEP [46]. Effect of pH and SEP Content on CR Adsorption The pH value of the solution played a considerable role in the sorption study, as the surface charge of both the dye molecules and the MGS beads varied significantly with pH. As depicted in Figure 7a, the equilibrium adsorption amount (Qe) of CR was higher at a pH of 5.0 but lower in alkaline and concentrated acidic environments. In detail, CR has a pKa value of 4.5-5.5 and is positively charged, due to the protonation of nitrogen atoms and sulfonate groups, when the pH is less than 5.0, resulting in electrostatic repulsion between the protonated MGS beads and the positive CR molecules [47]. In alkaline conditions, an electrostatic repulsion still existed between the deprotonated MGS beads and the anionic CR molecules, generating a lower adsorption capacity. This demonstrates that the adsorption process is significantly influenced by pH, and the pH value was therefore set at 5.0 for subsequent studies. As shown in Figure 7b, the Qe values of MGS beads with various SEP additions for CR were investigated. After loading SEP, the Qe gradually increased from 154 mg·g−1 to 205 mg·g−1 for the MGS-0 and MGS-1.5 beads, respectively. This improved adsorption capacity of the MGS beads can be attributed to the increased surface area, enhanced porosity, and abundant functional groups. For the MGS-2.0 beads, the Qe decreased to 183 mg·g−1, due to the increased density and decreased porosity, which is consistent with the density, porosity, and SEM analyses.
Adsorption Kinetics The adsorption behavior of the MGS-1.5 beads toward CR at various initial concentrations (100-500 mg·L−1) was investigated at 303 K, and the results are displayed in Figure 8a and Table 1. It was evident that the amount of adsorption increased when raising the initial CR concentration from 100 to 500 mg·L−1. This was attributed to the greater driving force for adsorption generated by the concentration gradient at higher concentrations. In addition, the adsorption process was fast during the first 120 min and then leveled off gradually at about 180 min for all concentrations.
To further investigate the adsorption mechanism and the residence time of the adsorption process, the experimental data were analyzed using Lagergren's pseudo-first-order and pseudo-second-order models [48,49], whose linearized forms are given by Equations (3) and (4): ln(Q1e − Qt) = ln Q1e − k1·t (3) and t/Qt = 1/(k2·Q2e²) + t/Q2e (4), where Q1e (mg·g−1) and Q2e (mg·g−1) are the adsorption capacities calculated from the pseudo-first-order and pseudo-second-order kinetic models, respectively, and k1 (min−1) and k2 (g·(mg·min)−1) are the rate constants estimated from the fitting equations. The fitting plots according to the pseudo-first-order and pseudo-second-order models are shown in Figure 8b,c, and all the corresponding kinetic parameters are summarized in Table 1. The correlation coefficients (R² > 0.99) estimated from the pseudo-second-order kinetics for all concentrations were higher than those obtained from the pseudo-first-order kinetics. This is further supported by the good agreement between the Q2e values estimated from the pseudo-second-order model and the experimental Qexp values. Thus, it is clear that the pseudo-second-order model describes the adsorption process accurately.
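A minimal sketch of the kinetic fitting described above, using the standard linearized Lagergren forms (3) and (4); the function names and data arrays are placeholders, and the experimental Qe is needed for the pseudo-first-order fit.

```python
import numpy as np

def fit_pseudo_second_order(t, qt):
    """Linearized PSO fit (Eq. (4)): t/Qt = 1/(k2*Qe^2) + t/Qe.
    Returns (Qe, k2)."""
    t, qt = np.asarray(t, float), np.asarray(qt, float)
    slope, intercept = np.polyfit(t, t / qt, 1)
    qe = 1.0 / slope
    k2 = slope ** 2 / intercept      # since intercept = 1/(k2*Qe^2)
    return qe, k2

def fit_pseudo_first_order(t, qt, qe_exp):
    """Linearized PFO fit (Eq. (3)): ln(Qe - Qt) = ln(Qe) - k1*t,
    evaluated with the experimental Qe. Returns (Qe_fit, k1)."""
    t, qt = np.asarray(t, float), np.asarray(qt, float)
    mask = qt < qe_exp               # the log is defined only below Qe
    slope, intercept = np.polyfit(t[mask], np.log(qe_exp - qt[mask]), 1)
    return np.exp(intercept), -slope
```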
Adsorption Isotherm As shown in Figure 9a, the influence of the initial concentration on the adsorption capacity was evaluated at different temperatures. It is clear that the adsorption capacity increased gradually as the CR concentration increased from 25 to 500 mg·L−1, before reaching saturation, which can be attributed to the increasing driving force generated by the concentration gradient, consistent with the results in Figure 8a. Furthermore, the adsorption rate increased with temperature, and the adsorption capacity also showed a small degree of enhancement. To describe the interactive behavior between CR and the MGS beads and to understand the CR molecular distribution in the liquid/solid phase at equilibrium, two classical adsorption isotherm equations, namely the Langmuir and Freundlich isotherms, were applied to build reliable predictive models. Assuming that the adsorption occurs via CR monolayer coverage onto a homogeneous adsorbent with identical surface sites, the Langmuir isotherm can be expressed in linear form as in Equation (5) [50]: Ce/Qe = Ce/Qmax + 1/(kL·Qmax) (5), where Ce (mg·L−1) is the equilibrium concentration, Qmax (mg·g−1) stands for the maximum monolayer adsorption capacity per unit mass of MGS beads, and kL (L·mg−1) is the Langmuir constant related to the energy of the adsorption process. Moreover, another essential parameter, the dimensionless constant RL, is defined by Equation (6): RL = 1/(1 + kL·Co) (6), where RL indicates whether the Langmuir model is unfavorable (RL > 1), favorable (0 < RL < 1), linear (RL = 1), or irreversible (RL = 0). The second isotherm, that of Freundlich, is derived by assuming a heterogeneous surface with multilayer adsorption [51]. Its linear form is expressed as follows: ln Qe = ln kF + (1/n)·ln Ce (7), where kF (L·mg−1) and n represent the Freundlich constant and the heterogeneity factor, which reflect the adsorption capacity and the adsorption intensity, respectively. The linearized curves and calculated parameters of the Langmuir and Freundlich isotherm models are presented in Figure 9 and Table 2, respectively. It is clear that all the regression coefficients from the Langmuir model (R² > 0.99) at different temperatures fitted better than those of the Freundlich model (R² > 0.96). In addition, the favorable RL (0.2352-0.8889) and reasonable Langmuir constant (kL > 0) also reflect that the Langmuir model fits the experimental data well. This result indicates that CR adsorption occurs at a homogeneous MGS surface with identical binding sites, with the maximum monolayer adsorption reaching 279.3 mg·g−1 at 303 K. Adsorption Thermodynamics In order to gain in-depth information regarding the inherent energetic changes associated with the adsorption and the feasibility of the process, the adsorption thermodynamics were investigated assuming an isolated system, wherein the entropy change (ΔS°) is the only driving force (Figure 10) [52]. The thermodynamic parameters, namely the enthalpy change (ΔH°), the entropy change (ΔS°), and the Gibbs free energy change (ΔG°), were determined using the Van't Hoff equation as follows [53]: ln(kd) = ΔS°/R − ΔH°/(R·T) (8), where R and T stand for the universal gas constant (8.314 J·(mol·K)−1) and the solution temperature (K), respectively. ΔH° and ΔS° are determined from the slope and intercept of the plot of ln(kd) vs. 1/T, in which kd (L·mol−1) is the equilibrium constant obtained using Equation (9): kd = Qe/Ce (9), where Qe (mg·g−1) and Ce (mg·L−1) are the adsorption amount and the adsorbate concentration at equilibrium, respectively. Finally, ΔG° is calculated using the following relation: ΔG° = ΔH° − T·ΔS° (10).
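Both the isotherm fits and the Van't Hoff analysis described above reduce to linear regressions; the sketch below shows both, under the linearized forms (5) and (8)-(10). Variable names are placeholders.

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol*K)

def fit_langmuir(ce, qe):
    """Linearized Langmuir fit (Eq. (5)): Ce/Qe = Ce/Qmax + 1/(kL*Qmax).
    Returns (Qmax, kL)."""
    ce, qe = np.asarray(ce, float), np.asarray(qe, float)
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    return 1.0 / slope, slope / intercept

def vant_hoff(temps_k, kd):
    """Van't Hoff analysis (Eqs. (8)-(10)): ln(kd) = dS/R - dH/(R*T),
    with kd = Qe/Ce at equilibrium. Returns (dH, dS, dG at each T)."""
    temps_k, kd = np.asarray(temps_k, float), np.asarray(kd, float)
    slope, intercept = np.polyfit(1.0 / temps_k, np.log(kd), 1)
    dH, dS = -R * slope, R * intercept
    return dH, dS, dH - temps_k * dS

# Positive dH (endothermic), positive dS, and negative dG at all tested
# temperatures reproduce the qualitative conclusions reported in Table 3.
```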
All the calculated values of ΔH°, ΔS°, and ΔG° are presented in Table 3. The positive ΔH° values reveal that the adsorption process was endothermic in nature, which is supported by the fact that the adsorption of CR onto the MGS beads increased when raising the temperature (Figure 9a). The values of ΔS° were found to be positive as well, suggesting an increase in randomness at the adsorbent/adsorbate interface during adsorption. This may be due to the fact that more translational entropy is gained by displacing adsorbed water with CR molecules than is lost, thus causing increased randomness in the system [2]. The spontaneity of the adsorption process is confirmed by the negative ΔG° values in the studied ranges of temperature and concentration. Moreover, the increasing absolute values of ΔG° with increasing temperature reflect a more feasible adsorption process at higher temperatures for CR, which is consistent with the experimental results and the positive ΔH° values. Stability and Reusability Studies Chemical stability is very important for the reusability of adsorbents. Herein, we evaluated the stability of the adsorbed beads by observing their SEM photos after adsorption under various conditions. First, CR solutions with pH = 4 and pH = 10 (adjusted with NaOH, a strongly corrosive condition) were used as the adsorption conditions. As shown in Figure 11a,b, the microstructures of the MGS-1.5 beads changed little compared with the original structure (Figure 3), indicating the chemical stability of the MGS beads in a corrosive environment. Considering an oxidative environment, a CR solution with 10 mL H2O2 was used as the simulated wastewater, and the microstructure of the MGS-1.5 beads after adsorption is shown in Figure 11c. It can be seen that, even with a slight collapse appearing in the inner structure, the original porous structure and pore size were retained, which is essential for the adsorption of dyes. To further determine the chemical stability of the MGS-1.5 beads under various conditions, XRD patterns were recorded, and the results are shown in Figure 12a. There was no obvious change for the MGS-1.5 beads after adsorption in CR solutions under the various conditions, indicating the structural stability of the MGS-1.5 beads. Hence, regardless of corrosive or oxidative conditions, the MGS-1.5 beads maintain good chemical stability. Reusability is a vital factor for potential practical applications and can provide further insights into the adsorption mechanism. The adsorption-desorption process was performed five times, and the results are shown in Figure 12b. The adsorption capacity for CR remained at approximately 87% of its initial value at the fifth cycle, indicating the benign and sustainable performance of the MGS beads.
Furthermore, the desorption using a strong base implies that the attachment of the CR molecules to the MGS beads was mainly through electrostatic interaction and ion exchange [54], which explains the pH-dependent adsorbability of the MGS beads. Conclusions This work reports the fabrication of cellulose/gelatin/sepiolite (MGS) gel beads via a simple and efficient two-step 'hydrophilic assembly-floating droplet' method. During the MGS gel bead preparation, microcrystalline cellulose (MCC) and gelatin (GEL) formed the dual network frameworks, and sepiolite (SEP) acted like a 'crosslinker' to connect and reinforce the dual network. A series of characterizations demonstrated that the MGS beads were lightweight and highly porous, and the incorporation of SEP not only increased the adsorption capacity but also made the beads more thermally stable.
The adsorption behavior followed the pseudo-second-order model and the Langmuir isotherm, with a maximum monolayer capacity of 279.3 mg·g−1 for CR at 303 K. Thermodynamic analyses illustrated that the CR adsorption onto the MGS beads was spontaneous and endothermic. After five adsorption/desorption cycles, the MGS beads were found to retain 87% of their initial adsorption capacity, demonstrating their potential as an efficient and renewable candidate for dye wastewater treatment. Author Contributions: C.J. conceived and performed the experiments; D.L., N.W. and J.G. characterized the experimental data; F.F. and T.L. contributed the analysis of adsorption capacity; C.J. wrote the paper; J.W. revised the paper. All authors have read and agreed to the published version of the manuscript.