text stringlengths 1.23k 293k | tokens float64 290 66.5k | created stringdate 1-01-01 00:00:00 2024-12-01 00:00:00 | fields listlengths 1 6 |
|---|---|---|---|
ECCENTRICITY IN IMAGES OF CIRCULAR AND SPHERICAL TARGETS AND ITS IMPACT TO 3 D OBJECT RECONSTRUCTION
This paper discusses a feature of projective geometry which causes eccentricity in the image measurement of circular and spherical targets. While it is commonly known that flat circular targets can have a significant displacement of the elliptical image centre with respect to the true imaged circle centre, it can also be shown that the a similar effect exists for spherical targets. Both types of targets are imaged with an elliptical contour. As a result, if measurement methods based on ellipses are used to detect the target (e.g. best-fit ellipses), the calculated ellipse centre does not correspond to the desired target centre in 3D space. This paper firstly discusses the use and measurement of circular and spherical targets. It then describes the geometrical projection model in order to demonstrate the eccentricity in image space. Based on numerical simulations, the eccentricity in the image is further quantified and investigated. Finally, the resulting effect in 3D space is estimated for stereo and multi-image intersections. It can be stated that the eccentricity is larger than usually assumed, and must be compensated for high-accuracy applications. Spherical targets do not show better results than circular targets. The paper is an updated version of Luhmann (2014) new experimental investigations on the effect of length measurement errors.
INTRODUCTION 1.1 Circular targets in practice
The use of circular targets is a standard procedure in closerange photogrammetry for measuring tasks of the highest accuracy.In contrast to other targets, e.g.chessboard-type patterns, circular targets offer a number of practical advantages: Minimum size Symmetric pattern with unique definition of the centre Highly invariant with regard to rotation and scale High image contrast achievable with small target area Continuity of the feature contour High-accuracy image measurement (< 1/20 pixel) via different algorithms, e.g.centroid or contour-based methods Easy to manufacture, low cost Active and passive targeting techniques (e.g.LEDs, fluorescent, retro-reflective, diffuse reflective) Easily incorporated into target adapters for identifying well defined object features such as edges and drill holes High degree of automation in the detection, measurement and identification of points (e.g. using coded targets) However, some drawbacks can be identified for circular targets: Eccentricity between ellipse centre and imaged circle centre if ellipse-based measurement algorithms are applied Restricted visibility (typical: ±45°) depending on light sources and reflective properties The target cannot be mechanically probed for comparative measurement, e.g. with a CMM Risk of mistaken identity with other bright blob-shaped features The benefits of high image contrast, minimal size and high precision in image measurement are of major importance, which explains the fact that circular targets are used in almost all highaccuracy photogrammetric applications (Clarke 1994, Robson and Shortis 2007, Luhmann 2010).
Retro-reflective spherical targets are often used as markers to provide all-around visibility and independence from the relative orientation between object (e.g.probe) and cameras.They are widely used in medical or motion-tracking applications (Figure 1) where different designs of spherical targets can be found (see Figure 2).Also, in industrial and other technical applications spheres are used in order to maximize visibility and enable tactile probing by other systems, e.g.coordinate measurement machines.In comparison to flat circular targets, the advantages of spheres can be listed as follows: Large field of visibility Can be mechanically probed for comparison purposes However, a number of disadvantages can be identified: Continuity of the image contour is broken where the sphere is attached to an object or mounting, or where images are taken from lower observation angles (>45-60°) Blurred image edges resulting from the curvature of the sphere, which leads to some reduction in accuracy of image measurement More difficult to manufacture, medium to high costs Usually passive targeting techniques (mostly retroreflective) However, a target sphere still generates an elliptical image.
Although the semi-axes are of more similar length, measurement eccentricity still exists.The effect is briefly mentioned in Luhmann et al. (2013), but not evaluated in detail.Figure 3 shows an example of a globe that appears as an ellipse if imaged off the optical axis.Figure 4 shows the images of a circular and a spherical target both with identical diameters and similar spatial position.The flat target is tilted by about 35 degrees with respect to the image plane.It can be seen that the circular target is imaged as an elongated ellipse, as expected.The image of the spherical target is elliptical as well while the semi-axes do not differ much.Note that the longer semi-axis appears in x-direction of the image.
Image operators for circular targets
The measurement of circular targets is usually performed by one of the following image methods: Centroid operators: The centroid of the grey values or gradients of the target image pattern is calculated within a particular window.A threshold function can be introduced in order to include only those grey values which exceed the intensity threshold.Centroid operators are easy to implement and have very fast computation times.They are suited to real-time applications and yield sub-pixel precision even for very small targets below 5 pixels in diameter.As a drawback, the centroid is very sensitive to pattern artefacts (e.g.occlusions or dirt) and image noise.In theory, the centre of gravity of the intensities corresponds to the imaged centre of the circle/sphere, hence the eccentricity in projection does not affect the result (Mahajan 1998) if diffuse target reflection without signal saturation can be assumed. Template matching: Based on least-squares matching, a (synthetic) pattern is transformed onto the image, taking account of geometric and radiometric transformation parameters (Gruen 1985).The default affine transformation can be extended by projective or polynomial functions (Bethmann and Luhmann 2010).The adjusted shift parameters a 0 and b 0 are interpreted as the target centre.Partial derivatives for adjustment are derived from image gradients, hence the elliptical contour has maximum impact on the adjustment result.However, disturbances of the target's shape in the image can affect the centre positions. Ellipse-based contour methods: Contour edge points are detected in grey-value profiles arranged in star patterns or parallel lines.A best-fit ellipse based on a general conic section is calculated for all contour points, yielding the centre coordinates of the ellipse.Disturbances along the edge of the ellipse can be eliminated by robust error detection. Feature detectors: Based on the idea of Förstner's interest operator (Förstner & Gülch 1987) Ouellet & Hébert (2009) has developed a feature detector that considers the elliptical contour of a circular target.The result of the operator is, in theory, free of eccentricity.
Objectives and research topics
As further discussed in the next section, the centre of the elliptical image of a circular target is displaced with respect to the actual target centre due to the properties of projective geometry.From investigations such as Ahn (1997) and Dold (1997) and the long-term experience of system suppliers, users and developers, this eccentricity can be neglected in practice if the diameter of imaged targets is small enough, e.g. less than 10 pixels.However, since a) modern CCD and CMOS image sensors consist of much smaller physical detector elements (down to 1µm), b) image measuring accuracies in the order of 1/50 -1/100 pixel can be achieved and c) accuracy requirements can be increased in practice, the impact of eccentricity in projection of circular targets must be revised.
On the basis of the above knowledge and experience, this paper addresses the following questions with respect to current systems and specifications: What is the metric effect of eccentricity in projection for typical imaging scenarios? How does this eccentricity affect the resulting 3D coordinates in simple stereo configurations or in complex multi-image scenes?
Is it possible to correct for eccentricity a priori using image measurements only? Is there a significant difference in using flat circular targets or spherical targets?
These questions are of both theoretical interest and practical relevance.As one example, off-the-shelf systems use hand-held probes with calibrated circular or spherical targets.The probe can be positioned in almost any combination of distance and orientation within a large measuring volume, so that images of target points range from very small to very large.These online systems can be found in numerous applications (e.g.medical navigation, robot control, alignment of workpieces etc.).In contrast to multi-image configurations they provide only limited possibilities for the statistical control of measurements.Thus, systematic image measuring errors will have a significantly higher impact on 3D measurements from online systems in contrast to multi-image offline systems where 3D coordinates are calculated by robust bundle adjustment.
3D circle
The projective imaging of a flat circular target can be described by a general conic intersection.Firstly, the 3D circle of the target creates, in general, an oblique circular cone of rays whose apex is given by the perspective centre of the image (Figure 5 right).The resulting image is then given by the intersection of the image plane with this cone.This intersection is a general ellipse with five independent parameters (2 translations, 2 semiaxes, 1 rotation angle).A comprehensive description of the ellipse geometry and the estimation of ellipse parameters from contour points is given in Luhmann et al. (2013).
Sphere
The perspective imaging of a sphere is illustrated in Figure 6.The visible contour of the sphere is defined by a tangential circle T of radius r which is defined by the cone with apex at O' and tangential to the sphere.Since this tangential circle lies in a plane perpendicular to the cone axis, it is not, in general, parallel to the image plane.Consequently, the resulting image is again an ellipse with eccentricity e'.Only when the cone axis is identical with the optical axis does e' become zero.The eccentricity increases with larger imaging angles between sphere centre and optical axis.In human vision it is hard to detect the elliptical shape of an imaged sphere since the human eye instantly focuses on the object.Hence, the optical axes of the eyes are directed at the sphere and the observed image is circular.Moreover, as depicted in Figure 4, the elliptical contour is very close to a circle (see also Figure 3).
Simulation procedure
The following simulation procedure enables the rigorous estimation of the magnitude of eccentricity by projecting the 3D points of a target shape into the image and by analysing the resulting image points.Since the centre point of the target can be projected in the same way, it is possible to compare the result of a measuring algorithm with the nominal image point of the target centre.
In a simulation, a virtual camera is defined with arbitrary parameters of interior and exterior orientation.Circular targets are then analysed in the following processing steps: 1. Generation of a unit circle with n edge points; 2. 3D coordinate transformation of n edge points with 7 arbitrary selected parameters (3 translations, 3 rotations, 1 scale factor) gives the desired 3D circle of the target; 3. Continue with processing step 5, below.
For analysing spherical targets, steps 1 and 2 are replaced by the following: 1. Generation of a sphere with radius R and centre point C; 2. Calculation of the tangential circle T with centre point C T and rotation matrix R; 3. Generation of a circle with n edge points and radius r; 4. 3D coordinate transformation of n edge points with 6 parameters (shift into point C T , rotation with R) gives the desired 3D points of the tangential circle.
The next steps are independent of the target shape: 5. Definition of a virtual camera image including all parameters of interior and exterior orientation; 6. Transformation of 3D coordinates X,Y,Z of the circle edge points and the nominal 3D target centre using the collinearity equations and optional distortion functions.This generates n image points (x',y') along the ellipse contour and the true image point C' of the target centre; 7. Calculation of the ellipse centre E' by a least-squares ellipse fitting; 8. Difference between E' and C' gives the desired eccentricity e'.
This process can be performed for an arbitrary number of images.Space intersection can therefore easily be calculated in order to observe the effect of eccentricity in object space (see section 4).The intersection result should be identical to the given object centre C. Similarly, space resections or bundle adjustments can be tested for the influence of target displacements.
In addition, the outlined process can be embedded into a Monte-Carlo simulation in order to mix systematic and random errors within one process.
The following scenarios are taken from a wide variety of possible simulations: a) Variation of the diameter of circle/sphere b) Variation of one tilt angle between image plane and target plane T (for circular targets only) c) Variation of imaging distance or image scale
Imaging scenarios
In the following sections, two application-oriented setups with typical cameras, lenses and target parameters are analysed.Only selected and most relevant input parameters are modified in order to minimise possible combinations of parameter settings.
Industrial video camera:
A typical CCD video camera (1280 x 1024 pixels, pixel size 6µm, focal length 8mm) is used to observe object points at a distance between 500mm and 1200mm (image scales between 1:62 and 1:150).Targets of 1-10mm radius are measured (larger targets are not practical).The circle plane is tilted relative to the image plane by angles =20° (around X-axis) and j=10° (around Y-axis).The exterior orientation of the camera is set to zero, i.e. the perspective centre is located at the origin of the object coordinate system and camera rotations are zero.Depending on individual image scale, the targets have images with semi-axes between 1 and 27 pixels (see Table 1).The diagonal imaging angle for a point in the image corner results to 19°.
Digital SLR camera:
A digital SLR camera (4288 x 2848 pixels, pixel size 5.5µm, focal length 16mm) is used to observe object points between 1mm and 10mm radius at a distance between 500mm and 1500mm (image scales between 1:21 and 1:63).The same parameter variations are again evaluated.The target points are imaged with semi-axes between 1 and 50 pixels (see Table 2).The diagonal imaging angle for a point in the image corner results to 41°, hence this camera represents a wider viewing angle but still in a realistic scenario.A digital SLR camera (4288 x 2848 pixels, pixel size 5.5µm, focal length 16mm) is used to observe object points between 1mm and 10mm radius at a distance between 500mm and 1500mm (image scales between 1:21 and 1:63).The same parameter variations are again evaluated.The target points are imaged with semi-axes between 1 and 50 pixels (see Table 2).The diagonal imaging angle for a point in the image corner results to 41°, hence this camera represents a wider viewing angle but still in a realistic scenario.
Eccentricity of circular targets
3.3.1 Industrial video camera: Figure 7 (top) shows the resulting eccentricity in x and y for the three representative points given in Table 1.Since the target plane is tilted by =20° about the X-axis, the eccentricity in y is larger than in x.Values for ex and ey decrease with longer imaging distance, i.e. smaller image scales.The largest value in this data set with about -0.8µm is given for P3 and a target radius of 10mm.Assuming a practical target radius of 5mm (diameter = 10mm), the resulting eccentricities remain below 0.3µm (=1/20 pixel).Hence, for most applications with similar configurations, eccentricity can be neglected if the target radius is less than 5mm and the imaging distance is greater than 500mm.Figure 7 (bottom) displays the result of tilt angle variation for a target of 5mm radius.In this example the angle j is altered between -70 and +70 degrees while =0.The eccentricity ex behaves like a sine curve with a maximum between 45° and 60° depending on target position.For targets P1 and P2 there is no eccentricity ey since the points are located on the optical axis and is zero.P3 is located away from the optical axis and leads to sinusoidal eccentricities ex that are symmetrical with respect to the sign of j.The curves of Figure 8 show the same basic behaviour as those of Figure 7.However, for the closer object points P4, P6, a target radius of 10mm yields eccentricities of up to 3µm, and even 5mm targets show eccentricity values of about 0.8µm.For a 5mm target (Figure 8 right) possible rotations in j lead to significant eccentricities, e.g. up to 1µm (= 1/5 pixel) for P4 at j=30°.For point P6 the target is not visible anymore if j exceeds +20°.The results above prove the well-known effect of eccentricity, hence the effect becomes larger with larger image scale, larger target radius and larger tilt angle between target and image plane.As a conclusion, the eccentricity in projection of flat circular targets should not be neglected.It should be noted that the effect creates a systematic image measurement error and cannot be compensated by multiple and highly redundant imagery.It should also be pointed out that the examples above do not describe worst case scenarios as they could appear for wider imaging angles, e.g. using lenses with very short focal length and/or larger sensor formats.In those cases the resulting systematic errors will be even higher.
Eccentricity of spherical targets
Using the same input parameters as in the previous sections, the eccentricity effect of spherical targets is estimated.Since the effect does not depend on tilt angles between image and target plane, only sphere radius and target position in space are altered by simulation.Note that in practice, spherical targets are not usually available with diameters less than 5mm.camera).The curves show the resulting values for ex and ey for both camera types.As with the flat target tests, the eccentricity for the SLR setup is larger than for the video camera due to the larger scale (smaller scale numbers in Table 2) and the larger viewing angle.A sphere of radius 5mm leads to about 0.3µm displacement while a sphere of radius 10mm gives up to 11µm (2 pixels) for the SLR camera.Figure 9 (right) shows the eccentricity ex as a function of image tilt angle j of exterior orientation.Here the analysis is of points P1 and P4 which lie on the optical axis at j=0°.For a target with r=5mm a tilt of more than 20° results in an eccentricity of 0.3µm, reaching 1.6µm at 45° for the video camera and 1.8µm for the SLR camera.Note that one of the advantages of spherical targets is the wider angle of visibility so that tilt angles of 45° or more are realistic.The effect is even more relevant if targets of larger diameters are used, e.g.r=10mm in Figure 9 (right) where eccentricities of more than 1 pixel can occur for the SLR set-up.
As a conclusion, the use of spherical targets is even more critical in comparison with flat targets, with regard to eccentricity in projection.However, since the eccentricity does not depend on any angular orientation, as it is true for circular targets, it is possible to calculate the eccentricity as described above and use it as a correction for the measured target centre.
In addition, eccentricity of spherical targets has a radially symmetric behaviour.As shown in Figure 10, the displacement of image points is systematic.Hence, if a camera is calibrated by a point field that consists of the same spherical targets as for the actual application, it can be assumed that the effect of eccentricity in projection is mostly compensated by the parameters of radial distortion.
Imaging configurations
In many industrial applications, stereo or multiple camera setups are used to measure object points by space intersection.
In those cases cameras are pre-calibrated and exterior orientation is either given by stable camera fixtures or by continuous re-orientation using control points.Figure 11 shows two typical camera setups that will be analysed in the following sections.A three-camera setup has been investigated in Luhmann (2014).For an observed sphere, Figure 12 shows the principle effect of eccentricity on the calculated 3D point coordinates.Here the effect in x is illustrated because it is most critical for the xparallax, hence for depth calculation.For cameras that are configured according to the normal case of stereophotogrammetry (Figure 11a), ex' and ex" have the same sign, hence the resulting X-or Y-coordinate is shifted while Z is almost unaffected.In contrast, for convergent imagery (Figure 11b) the sign of eccentricities is different, and a systematic x-parallax error occurs that results in an object error mainly in Z.As a representative configuration, a stereo camera with two industrial video cameras as specified in the previous section is investigated.Using a stereo base of b=300mm two variants are simulated: a) a normal case configuration with parallel optical axes and b) a convergent configuration with j=±10°.These setups are typical for many applications in medicine, robotics or photogrammetric machine control (Luhmann et al. 2013).
In order to estimate the theoretical accuracy level for a stereo configuration, a Monte-Carlo simulation has been applied.It can be shown that a normally distributed image measuring noise of 1/20 pixel (= 0.3µm) leads to RMS values (1 sigma) of 0.02mm in X,Y and 0.14mm in Z for object points at a distance h=1200mm (height-to-base ratio h/b=4:1).For points at a distance of h=300mm (h/b=1) a theoretical precision of 0.004mm in in X, Y and Z can be expected.
Stereo images:
The following investigations are based on the sequence of calculations as described in the third section added by spatial intersection using the image ellipse centres.
For the case of circular targets and parallel viewing directions, Figure 13 (top) illustrates the resulting 3D coordinate errors for the points of Table 1 as a function of varying target radius for the video camera setup.The target plane is tilted by =20° and j=10° (and note that eccentricity would not occur at =0 and j=0).As expected from Figure 12, deviations in X and Y are significantly higher than in Z.The maximum error is given by P1 with -0.06mm in Y at r=10mm.For a practical target radius of 5mm the resulting errors lie below 0.02mm.Figure 13 (bottom) shows the 3D coordinate errors as a function of varying target radius in the case of convergent images (j 1 =-10°, j 2 =+10°).Here the target plane is parallel to the XY plane of the object coordinate system.For points P1 and P2, which lie in the middle between both cameras (X=0), only a deviation in Z can be observed.It ranges up to 0.1mm for large targets (r=10mm).Since the error in Z is almost equal for P1, P2 and P3, it can be assumed that the effect of quadraticly increasing stereoscopic Z error, which depends on the height-to-base ratio, is compensated by the decreasing eccentricity effect at longer distances.For point P3, which lies to the right of the cameras (X=250mm), errors in all coordinate directions can be observed.For a practical target radius of 5mm, the resulting Z error is in the order of 0.03mm which should not be neglected if high point accuracy is required.Figure 14 shows the errors of spatial intersection caused by eccentricity of a spherical target.For the normal stereo case, point P1 shows larger errors in X than in Z, while Y is calculated correctly.The same effect occurs for P2 but with smaller values since the point is further away from the cameras.P3 shows errors in all directions.In general, 3D point errors of up to 0.03mm must be expected for a target radius of 5mm.
With larger spheres the error easily reaches critical limits, e.g.0.06mm at r=10mm.
For convergent images, the error behaviour is different.Here the errors in Z are larger than in X for P1 and P2 as expected from Figure 12.In general, resulting 3D point errors are almost twice as high as for the normal case of stereophotogrammetry.For 5mm targets, errors remain below 0.03mm but easily reach 1/10mm for r=10mm.
Length measurement error
In Figure 15 shows the result of length measurement errors (LME) by comparing measured distances to reference distances.For all tested configurations, a clear dependency on the length of the distance can be seen and the measured distances appear too long, hence all LME have a negative sign.For the convergent camera setup (Circle C, Sphere C) it can be observed that maximum LME of more than 0.015mm occur which are caused only by target eccentricity.Spherical targets cause higher LME than circular targets which is in accordance to the results of section 4.2.In the case of parallel viewing directions (normal case of stereo photogrammetry) the LMEs are significantly smaller (Circle N, Sphere N).The maximum LME reaches 0.007mm, hence below a significant level.It can be assumed that the points of the scale bars are subject to similar systematic point errors that are compensated with respect to the distance between them.
For the presented example it can be shown that the effect of target eccentricity does not affect length measurements to a higher amount.The systematic negative LME is presumably caused by the very symmetric arrangement of targets and the effect that the eccentricities in image space lead to errors in Z and consequently to a scaling effect in object space.
SUMMARY AND OUTLOOK
The investigation has shown that eccentricity in images of circular and spherical targets reach a significant amount under practical conditions.With respect to increased accuracy demands and smaller camera pixels the effect has to be considered in measurement.It can be proved that spherical targets show larger eccentricities and resulting object point errors than circular targets.While the eccentricity of spherical targets can be calculated for known target diameters, flat circular targets can only be measured without eccentricity if the normal vector of the target plane is known, which can be calculated from stereo or multi-image configurations.
A general solution to the problem of eccentricity in projection of circular targets is given by a rigorous calculation of the original circle in 3D space.Several approaches have been published , e.g.Kager (1981), Andresen (1991), Schneider (1991), Otepka & Fraser (2004) or Wrobel (2012) who all found solutions for the determination of 3D circle centres from image points.
Figure 3 :
Figure 3: Images of a globe in two different image positions
Figure 5 :
Figure 5: Eccentricity in projection of a circular target left: parallel planes; right: tilted planes The centre of the image ellipse is identical to the imaged circle centre only in the case where circle plane and image plane are parallel to each other (Figure 5 left).The effect is well known and was investigated by Ahn (1997), and by Dold (1997) for high-accuracy photogrammetry systems.Dold has shown that eccentricity depends on circle radius, principal distance and exterior orientation.It becomes zero when both planes are parallel.The eccentricity increases for increasing target diameters, viewing angles and image scales.With respect to the angle between target plane and image plane, the function shows a sinusoidal form (example in Figure 7 right).
Figure 6 :
Figure 6: Eccentricity in projection of a spherical target (left: sphere on optical axis; right: sphere with lateral offset)
Figure 7 :
Figure 7: Eccentricity in projection of circular targets as a function of target radius (left) and tilt angle (right, with r=5mm)
Figure 9 :
Figure 9: Eccentricity in projection of a spherical target at points P3 and P6 as a function of target radius (top), and for P1 and P4 as a function of exterior orientation angle j (bottom) Figure 9 (left) shows the resulting eccentricity for a spherical target as a function of the radius.The targets correspond to point P3 inTable 1 (video camera) and P6 in Table 2 (SLR camera).The curves show the resulting values for ex and ey for both camera types.As with the flat target tests, the eccentricity for the SLR setup is larger than for the video camera due to the larger scale (smaller scale numbers in Table 2) and the larger
Figure 10 :
Figure 10: Eccentricity in projection of a spherical target with r=10mm
Figure 12 :
Figure 12: 3D point errors caused by eccentricity in projection for parallel images (top) and convergent images (bottom)
Figure 13 :
Figure 13: 3D point errors as a function of circular target radius for normal case (left) and convergent stereo case (right)
Figure 14 :
Figure 14: 3D point errors as a function of spherical target radius for normal case (left) and convergent stereo case (right) order to estimate the impact of target eccentricity to the quality of length measurements, two examples have been investigated.Based on a stereo setup with convergent cameras as given in section 4.1, an arrangement of 7 scale bars has been simulated that follows the recommendations of the German guideline VDI 2634.Each scale bar consists of five given points, hence up to different 10 distances are defined per scale bar.From all 7 scale bars a total of 35 object points and 70 individual distances are given.The cube of scale bars covers a volume of about 500 x 500 x 500 mm³.
Figure 15 :
Figure 15: Length measurement errors for VDI set-up of circular targets
Table 1 :
Object point coordinates, imaging scale and resulting semi-axes for video camera setup
Table 2 :
Object point coordinates, imaging scale and resulting semi-axes for SLR camera setup Again three object points are investigated where two lie on the optical axis and one is located in the corner of the field of view. | 6,771.8 | 2014-06-06T00:00:00.000 | [
"Physics",
"Engineering"
] |
Using space-filling curves to improve the quad tree for spatial indexing
Spatial indexes, like the ones that are based on Quad Tree, are important in spatial data-bases for the effective implementation of queries with spatial constraints, particularly in the case where queries include spatial links. The quad trees are a very interesting subject, given the fact that they give the ability to solve problems in a way that focuses only on the important areas with the highest density of information. But it is not without the disadvantages because the search process in the quartile suffers from the problem of repetition when reaching the terminal node and return to the behaviour of another way in the search and lead to the absorption of large amounts of time and storage. A database management system can handle data very easily if the object is one-dimension (sequential).
In this paper, improve the quad tree by combining one of the space filling curve types, including the Hilbert curve and the Z-ordering curve with a quad tree. It will convert from two-dimensional to one-dimensional and sequentially search and end the problem of repetition whenever it reaches a terminal node Ordinary quad tree. Resulting in reduced storage space requirements and improved implementation time.
Introduction
Spatial indexing is based on physically grouping indexed items.For instance, countries may be grouped according to the continent.With spatial indexing, there is even a possibility of providing virtual reality (VR) fly through (or over, or under) for the sake of helping users in maintaining their search context while users narrow or expand their search domain.
Spatial indexing has richness which parallels the indexing with an author or a subject.Spatial indexing concept is quite powerful, that for the management of records, they are found according to their correlation with a place.Numerous records are strongly coupled to a place.Like other indexing forms, geographical indexing can be combined with other indices.There is a wide range of cases when we need to find a subset of spatial information for various Purposes.For example, when we talk about things that happened in the past, we try to link them with the years or our ages, and in this way we treat the events as a time sequence and index the events with time.The power of indexing lies in a commonly used strategy humans have used for thousands of years: divide and conquer.
Quad trees are a very straightforward spatial indexing technique; effectively we divide the whole search area into four quarters and ignore three quarters when our target is definitely not there.We keep on dividing the search space into four quarter until we find what we need [7].A space filling curve (space filling curves) is a continuous path that passes through each one of the points in a space, which gives a one-to-one correlation between point coordinates and the 1-D-sequenced numbers of the points on the curve.Spacefilling curves were commonly utilized in math and for the transformation of multi-dimensional tasks to 1-D forms.For scientific applications, ordering calculation or data along space-filling curves may be utilized to exploit locality in the case of partitioning to parallel systems or in the case of restructuring for exploiting memory hierarchy [3].The presented paper has been organized in the following: Section 2 discusses related works.Section 3 includes a description about quad trees, section 4 gives a brief survey about Space filling curves.Section 5 explains method for quad tree building, Section 6 gives experimental results.Section 7 includes the paper conclusions.
Related work
This section presents the studies about the spatial index with several published researches related to the goals of this work: 1. Julian Hirt (2010) Implemented a method for path finding on a 3m * 3m board for the mobile robot, combining the A* algorithm with the quad tree, resulting in an optimal path and shorter time than using the A* algorithm with the regular Grid [2].
2. Amitava Chakraborty (2011) balanced the quad tree with the use of point pattern analysis; because the effectiveness of the search approach is dependent on the tree height, arbitrarily inserting point features could result in the tree being unbalanced and result in longer searching time.Post implementing this algorithm, it has been noticed that its performance enhanced [8]. 4. Phyo Wai (2014) Provide a proposed compression system for GIS by Using a quad tree way that is an integral part of Cluster encoded method.This system was produced to be more convenient to pressure more than other pressure methods.It is effective in sending data over the current network [1].
5. Ashwaq Talib Hashim (2016) presented a color image compression approach for increasing the ratio of compression with no impact on the original scene by distortion or noise.With the use of low lossy rate quad-tree compression approach for increasing the association amongst pixels with the use of Quantizing and encoders of entropy like shift encoding and run length encoding will additionally compress the image.the results of CR (i.e.Compression Ratio) of the presented approach on average to be about 1:29 of the original image size, higher ratio of compression may be accomplished with the increase of the levels of compression, this high ratio of compression has been considered an optimal ratio in comparison to the achieved PSNR (Peak Signal to Noise Ratio) of decompressed-compressed image [6].
6. Satori Tsuzuki (2019) a method was proposed to create a central graph of the geometrically complex road map using a quad tree pyramid to simulate cellular automation and trace the distant leaves of the tree that used Morton's curve.The leaves were chosen to maintain a certain distance between the previously selected sheets as a graph.Each specific node searches the adjacent nodes and stores them as glyphs.The shortest path of the resulting graph was produced using the Dextra algorithm.It is quite possible that done develops a new technology to create a suitable network diagram for vehicle simulation studies in transport research [11].
Quad Tree Basics
A quad tree is a tree data structure where every one of the internal nodes has precisely 4 children.Quad trees are the 2-D analog of octrees and are usually utilized for partitioning a 2-D space via the recursive sub-division of that space to 4 areas or quadrants.The data which is related to a leaf cell differs according to the application, but the leaf cell denotes a "unit of interesting spatial information".The sub-divided areas could be rectangular or square shaped, or they could be of a random shape.This data structure has been referred to as a quad tree by R. Finke and J.L. Bentley in 1974.An analogous partitioning is referred to as Q-tree as well.All quad trees forms have some features in common: 1.They decompose space into adaptable cells 2. Each cell (or bucket) has a maximum capacity.When maximum capacity is reached, the bucket splits 3. The tree directory follows the spatial decomposition of the quad tree.
Quad trees could be categorized based on the data type that they represent, and that includes points, areas, curves, and lines.Quad trees could as well be categorized according to whether the tree shape is not dependent on the order of data processing (fig 1) and (fig 2) Represent points and region by quad tree [7] .
Space filling curves (SFC)
An SFC is a continuous trajectory visiting each one of the points in space precisely one time in a specific order thereby resulting a one-to-one correlation between points' coordinates and the one-dimensional sequence numbers of points on the curve.Which gives a way for linear ordering of a grid points.The aim is conserving locality, thus the points that are near in space have to be stored close to one another in a linear ordering [2].
The concept of multi-dimensional space mapping to the 1-D space has a significant impact on applications which include multi-dimensional data.Multi-media data-bases, Geographic Information Systems (GISs), and Image processing are examples of multi-dimensional applications.By using a mapping scheme, a point in the D-dimensional space can be represented with one integer reflecting the different dimensions of the original space.Space filling curve can be used for mapping from multi-dimensional space to 1-D space.
There are various space-filling curve types.They differ in the way they map into the 1-D space.Choosing the suitable curve for any application based on knowing the mapping approach which is given by every space-filling curve.Some space filling curve examples are the Hilbert curve, and the Z-order curve [1].The space filling curves may be utilized to create a quad tree structure and construct relevant data of higher dimensions.The order of how these four sub-regions are closely related to the area fill curve is related.If a multidimensional application applies a Z-order or H-order curve, a quad tree in SW, SE, NW, and NE will be created more efficiently to access the data.Work has been done on the map of the province of Baghdad, the capital of Iraq, which contains points representing intersections and each point has a two-dimensional events as shown in the fig ( 5):
Performance results
In the process of spatial indexing, The hybrid structure of quad tree and Hilbert curve or Z-order mapping may benefit of each of the retrieval (time) and storage (size).The objects have been split and represented, Determine the minimum number of points at which the division of the quartile is divided into quarters is k = 10 (This means that each quarter contains 10 points or less not divided into four quarters) , Work has been applied to intersections in Baghdad's two-dimensional map, where each point has coordinates (x, y) on the map.After quad tree is applied to the points, the required amount of storage was calculated in bytes and then the hybrid structure of quad tree and Z-curves, quad tree and H-curves was applied.This resulted in a significant reduction in the amount of storage required, as the points in the hybrid state were serialized.Note that the hybrid structure is equal in terms of storage for quad tree with Z-curves and quad tree with H-curves for each number of points.The experiment was carried out each time as shown in (table 1).When the above storage results are represented by the graph, the difference is clear.The black line represents quad tree, the red line represents Z-curves with quad tree, and the blue line represents H-curves with quad tree, as shown in the (fig 6).6): As the required storage for each case In the same way, when calculating the amount of time it takes, it will not retrieve the point information which is (X: 44.42574332115971, Y: 33.30565709558853) looking for each time for a different number of points.The time taken when applying quad tree alone is much greater than applying the hybrid state of Z-curves with quad tree and H-curves with quad tree, as shown in (table 2) .In previous experiments the focusing was on a certain point (constant) for a different number of points each time.Now, different points have been taken from the map and work on it and at all intersections points in the Baghdad map of 16039 points (fixed).Where x and y are the coordinates for each point that are searched each time, as shown in (table 3) of size.
3. A. A. El-Harby (2012) presented a new Quad tree-based Color Image Compression Algorithm By dividing thecolor of the image into RGB components and then dividing the blocks on the way of the quad tree The ratios of compression of the second were in the range from 0.25 to 0.80 at a value of threshold equal to 0.1, and from 0.78 to 0
Table ( 1
): The storage size taken for each case, with number of k = 10
Table ( 2
): The Response Time taken for each case, with number of k = 10. | 2,717.4 | 2019-09-10T00:00:00.000 | [
"Computer Science"
] |
IMPROVING 3D PEDESTRIAN DETECTION FOR WEARABLE SENSOR DATA WITH 2D HUMAN POSE
: Collisions and safety are important concepts when dealing with urban designs like shared spaces. As pedestrians (especially the elderly and disabled people) are more vulnerable to accidents, realising an intelligent mobility aid to avoid collisions is a direction of research that could improve safety using a wearable device. Also, with the improvements in technologies for visualisation and their capabilities to render 3D virtual content, AR devices could be used to realise virtual infrastructure and virtual traffic systems. Such devices (e.g., Hololens) scan the environment using stereo and ToF (Time-of-Flight) sensors, which in principle can be used to detect surrounding objects, including dynamic agents such as pedestrians. This can be used as basis to predict collisions. To envision an AR device as a safety aid and demonstrate its 3D object detection capability (in particular: pedestrian detection), we propose an improvement to the 3D object detection framework Frustum Pointnet with human pose and apply it on the data from an AR device. Using the data from such a device in an indoor setting, we conducted a comparative study to investigate how high level 2D human pose features in our approach could help to improve the detection performance of orientated 3D pedestrian instances over Frustum Pointnet.
INTRODUCTION
Pedestrian friendly urban designs like walkable cities and shared spaces have recently gained a lot of attention.While the former has focused more on pedestrian needs in urban and suburban environments necessitating a pedestrian network as a criterion for its successful design, the latter has emphasized more on promoting walking by mixing different traffic modes (cars, cyclists, and pedestrians) (Hamilton-Baillie, 2008) with no or reduced infrastructure.In either of these spaces, collisions are a potential safety threat considering pedestrian-pedestrian interactions (conflicts via congestion in pedestrian network (Wang et al., 2016)) or when interactions with different road users (e.g., pedestrian-car collision).
The basic idea of shared spaces is to mix traffic participants to create unclear situations to promote lower vehicle speed promoting walkability; however, the elderly and disabled feel less safe as they are expected to be more cautious.The inability to anticipate an upcoming danger due to reduced cognition or a confusion over priority while crossing paths could result in collisions that is life-threatening.Therefore, possible wearable conflict detection systems (e.g., intelligent mobility aids) need to be explored to enhance safety for these vulnerable road users (VRU).Conflict in this scope of work is inspired from (Javid and Seneviratne, 1991).It is defined as a traffic event involving one or more pedestrians and one or more vehicles, where both perform actions, such as applying changes in direction or speed, to avoid a collision.
Augmented Reality (AR) devices use perception sensors for spatial mapping to place virtual 3D content aligned with the real world space.They use RGBD sensors and can acquire both RGB images and depth information in raw format.Currently such sensors are also used for 3D pedestrian detection, a * Corresponding author fundamental component for follow-up motion prediction and collision detection in autonomous driving.Using the sensor capabilities of an AR device, if we can realise a collision detection system where nearby pedestrian are detected and their future motion are predicted; these head mounted headsets can serve as safety aids and further the research along using 3D augmentation.This can be used for realising virtual traffic lights to follow rules (Kamalasanan and Sester, 2020) and hence traffic behaviour in shared spaces.
However, current research in 3D pedestrian detection is mainly applied to autonomous driving and robotics and there are fewer studies of data from wearable devices that serve pedestrians safety.As a first step in the direction of realising a wearable safety detection system using an AR device, we did experiments in an indoor environment with pedestrian motion, and collected data from these wearable sensors.
In our work we emphasize that using extra orientation information could improve 3D detection and propose to refine the 3D orientation of people by including 2D human pose.Using the device sensors and improved feature representation via our proposed Pedestrian Pose enhanced Frustum PointNets architecture (PPEF-PointNets), we realise a 3D pedestrian detection system.Finally, we perform extensive experiments and show that our approach achieves better results than F-PointNets for pedestrian detection.
RELATED WORK
This section briefly introduces the basic idea of mobility aids and reviews recent work on 3D object detection using RGBD data.In addition, the literature related to the fusion of human pose information is summarized.
Intelligent mobility aids
Intelligent mobility aids for the elderly represent a class of assistive devices that attempt to augment the user's current abilities instead of replacing them and are supported by an array of sensors for environmental perception.While a larger proportion of research has focused on the white cane and wheelchair, recent works have focused along wearable systems capable of dynamically detecting obstacles (Poggi et al., 2015), with emphasis on real-time performance along with environmental perception features (e.g., crosswalk detection).Inspired from social robots, some other works have taken into account the context (Ito and Kamata, 2013) while predicting conflicts in pedestrians environment but are limited to a wheelchair approach and lack the 3D perception system.Many of these systems use auditory or haptic feedback to warn collisions and lack any visual interfaces.
3D pedestrian detection using RGBD Data
Based on RGBD data, (Kollmitz et al., 2019) extended a Faster R-CNN model (Ren et al., 2015) to regress the 3D centroids of pedestrians.Meanwhile, (Linder et al., 2020) extended the YOLO v3 model (Farhadi and Redmon, 2018) to regress the 3D centroids.In addition to the regression of 3D centroids, Explanable YOLO (Takahashi et al., 2020) used the 4-channel RGBD data directly as input to regress the 3D bounding box of pedestrians using Darknet-53 (i.e., the backbone network used in YOLOv3).When considering indoor depth images with dense pixels values, an image-based CNN approach could be considered.This is not the case when dealing with lower resolution depth images (e.g., from wearable devices).The abovementioned method focuses mainly on the positions and dimensions of the 3D pedestrians and does not take into account their orientation.However, for the collision detection systems, the orientations of the detected pedestrians play an important roleespecially for predicting the movement trajectory.
The orientation of 3D objects can be represented in three ways: (1) treat the orientation estimation as a multi-class classification task by discretizing the orientations into bins (Ozuysal et al., 2009, Ghodrati et al., 2014), (2) apply direct regression (Geiger et al., 2012), and (3) in more recent years, combine both by first classifying orientation into bins and then regress the residual of orientation within a bin for refinement (e.g., F-PointNets (Qi et al., 2018)).
Frustum Pointnets (F-PointNets) (Qi et al., 2018) is a seminal work which extracts features directly from point cloud data.It first detects objects in 2D images and then extracts the foreground points using Pointnets (Qi et al., 2017).These foreground points are used to estimate the 3D bounding boxes.This method was applied to the JRDB dataset collected from a social robot (Shenoi et al., 2020).Multiple variations of F-PointNets have been proposed to improve the performance of 3D object detection.Frustum ConvNet (F-ConvNet) (Wang and Jia, 2019) is an end-to-end network, which first generates a sequence of frustums by sliding along the frustum axis with certain interval.Then, it encodes each slice of the frustum with Pointnets and encodes the sequence using a fully convolution network (FCN).A high dimensional convolution operator capturing local features enhanced with color and temporal information was proposed in (Wang et al., 2020).The work uses an early fusion approach directly on raw data from LiDAR, Camera and Radar and uses a 7D frustum representation which includes the color and precise time of the sequence.A plugin framework is designed to extract radar point cloud features efficiently.This architecture efficiently estimates the 3D bonding box along with a predicted velocity along the x and y axis.
Fusion of human pose information
Human pose estimation on 2D images are nowadays a welldeveloped research area.Many existing software can help researchers to easily estimate human body keypoints and generate skeletons of the people, e.g., OpenPose (Cao et al., 2019), AlphaPose (Xiu et al., 2018).Merely estimating is often not enough, hence it becomes important to make further use of this information.One of the most common application scenarios is action recognition.Handcrafted features are most frequently used.(Li et al., 2020) utilized such high level features to classify pedestrian motion state (i.e., walking and standing).Different features have been considered, e.g., positions of body keypoints reduced to neck location as origin and then normalized by the bounding box's height.In recent years, the skeletons were also encoded using Recurrent Neural Network (RNN), Convolutional Neural Network (CNN), or Graph Convolutional Network (GCN) for action recognition (Ren et al., 2020).Furthermore, such high level features have also been used for water level estimation when people are submerged in flood water, e.g., positions of body points (Bischke et al., 2019) and distance between certain keypoints (Feng et al., 2020).
In addition to the applications above, human pose can also be used as a clue to better estimate the orientation of the pedestrians.Pedestrians in different orientations show a different appearance.However, this information is rarely considered in the current research.
Accurate calibration between image and 3D information is important to fuse semantic information from images as in (Vora et al., 2020).When dealing with a loosely calibrated commercialgrade devices (e.g., a wearable) it would be advantages to consider higher level image feature information .In this work, we propose to introduce the 2D pose information to the 3D pedestrian detection task.This is also a very early investigation on the data collected from wearable devices.The data from such device deserve special treatment because of their typically low sampling rates and weak computing power.
DATA
Augmented Reality (AR) devices which can render 3D content in the real world are capable of sensing the surrounding environment.In this research, the Hololens 2 is used as a data capture device by exploiting its raw sensor streams.The wearable mixed reality device (Figure 2) offers 2D and 3D data describing its surrounding environment, which have been made available with the release of the Research Mode API (Ungureanu et al., 2020).
The far-depth sensor, which runs at lower frames of 1 to 5 fps, can be used to create 3D maps.The RGB streams from the Hololens PV (PhotoVideo) camera, along with the abovementioned depth sensor streams, can be captured, stored, and downloaded to create RGBD datasets.This has been used to create ego pedestrian detection dataset.
The Simulated Shared Space (SSS) Dataset
An indoor experiment was conducted with a stationary Hololens user watching an open space with social interactions and pedestrian motion.In the experiment, the device captured the RGBD data of the scene.The scene recording included pedestrian encounter, dispersal, and random walk in the space of dimensions 7m x 7m.
The dataset hereby called the Simulated Shared Space dataset (SSS Dataset) records the ego view of a pedestrian wearing the sensing aid.We hypothesize that this dataset is suitable for pedestrian detection research for wearable mobility aids.The complete dataset includes 430 pedestrian frames, with each frame containing a maximum of three pedestrians.The pedestrians perform arbitrary movements and interactions in all directions within the Field of View (FoV) of the device during the experiment.
For a given RGB and its corresponding depth image, using the known calibration between the sensors, RGB information can be projected to the depth information and vice versa, exploiting the multi-sensor information for multimodal detectionbased approaches.
Annotation
The captured dataset was annotated using a semi-automated annotation procedure.As the 2D person detection in images is a well-studied problem, we used the well-known person detection, YOLO (Farhadi and Redmon, 2018), as they perform well in most cases.To obtain the 3D bounding box, region growing on the projected depth points of 2D image detection followed by box estimation was used to create ground truth automatically.This step was followed by a 3D orientation and manual error correction of pedestrians using LabelCloud (Sager et al., 2021).A total of 855 pedestrian instances (3D bounding boxes) were annotated using the above mentioned procedure with the pedestrian orientation distribution shown in Figure 4 for the complete dataset.Orientation 0 indicates that people are walking from left to right from the Hololens perspective (Figure 3).
PEDESTRIAN POSE ENHANCED FRUSTUM POINTNETS (PPEF-POINTNETS)
Given a dataset from a body worn sensor system, the aim of our work is to output 3D bounding boxes and detect people in the RGBD frames.Our framework (Figure 1) aims at improving the 3D pedestrian detection pipeline F-PointNets by including 2D human pose as additional features.
2D Pose and Hand Crafted Features
Pedestrians with different orientations will be projected as different shapes on a 2D image.This difference can provide additional clues to estimate the orientation of a pedestrian.Although this information is implicitly encoded in the local deep feature of the detected 2d pedestrians, it is not sensitive to small orientation changes.In order to explicitly encode and utilize this information, we use 2D human pose estimation and incorporate this information into the learning process.
OpenPose (Cao et al., 2019) for example, is a state-of-the-art 2D pose detection framework that can identify 25 landmark points(keypoints) of the human pose skeleton using the body-25 1 model.Once the 2D keypoints of different body parts are extracted with the framework, small variations of pixel distances between keypoints can well represent significant 3D pose information.
Even when the model detects 25 landmark points of the human body, not every keypoint can contribute to 3D pedestrian detection.For example, facial keypoints would be less accurately detected at a distance.
For the above-mentioned dominant keypoints and with an iterative feature selection and representation, we hypothesize that handcrafted pose features Fpose could improve the feature representation of pedestrians in the F-PointNets (Qi et al., 2018) framework.As pedestrians would appear in different distances from the camera when captured from an image source, these handcrafted features should be less sensitive to scale changes.A scale factor (SF) is introduced as a solution to this problem in our framework.SF is the distance between the shoulder and hip joints, which is used for feature normalisation: We have developed the following feature representations to be combined with the deep features from F-PointNet: Distance Ratio (DR) : From the given set of 13 keypoint features 2D pose features, the euclidean distance between the shoulder joints and hip joints were calculated.
Both distances were normalized with the scale factor SF as in Equation (1).They were then applied to the network as pose features Fpose.Optimised Distance Ratio (ODR) : The distance ratios calculated in the previous step are mostly values in the range between 0 and 1 for standing pedestrians.Smaller values correspond to people facing the camera sideways, larger values correspond to people facing the camera frontally or from behind.Very small differences in pedestrian orientations can not be well represented by simply calculating these distance ratios.Therefore we applied a negative log transformation to these ratios to exaggerate the small orientation differences to better encode the orientation information.
Optimised Distance with Keypoint Position and Distances (ODPD) : Inspired from the recent works in applying high-level 2D pose features (Li et al., 2020), along with the Optimised Distance Ratio, we include the normalised position and distance for all other joints, as depicted in Figure 5.
For the normalised position (Np), as in (Li et al., 2020), the coordinates are first translated to a coordinate system with the neck as the origin.The position of the keypoints of the arms (2-7) and legs (9-14) are normalised with the SF.
Normalised distance (N d ) computes the Euclidean distance of the keypoints on the arms and legs.For the legs, four distance features would include the distance between left hip and left knee, left knee and left ankle, and the same for the right legs.While for the arms, the four features include distances between the left elbow and left shoulder, left elbow and left wrist, and corresponding features from the right arm.In total, 8 features would be normalised by the SF.
Proposed PPEF-PointNets Pipeline
The handcrafted features Fpose, are introduced into the framework F-PointNets (Qi et al., 2018) by adding an off-the-shelf 2D pose detector and feature selection to the existing pipeline when detecting pedestrians.In this section, we explain our PPEF-PointNets and how we fuse the additional 2D pose to it.
A raw point cloud is obtained from RGBD scans using calibration data and depth re-projection.The raw point cloud and RGB images are inputs to the network.
Firstly, 2D pedestrians are detected in the RGB images using deep learning based 2D detections (e.g., YOLO).The 2D bounding box in RGB images is geometrically extruded to extract the corresponding frustum point cloud containing points from the point cloud that lie inside the 2D box when projected into the image plane, following the steps for frustum proposal generation as in F-PointNets.
Secondly, the RGB images are also passed to the pose detector followed by the handcrafted feature extraction Fpose as described in the previous section.Thirdly, the instance segmentation module as proposed in F-PointNets applies PointNets (Qi et al., 2017) to all points contained inside the entire proposed frustum to extract features.As the features pass from the segmentation module to the amodal box estimation modules, points are transformed from the camera coordinate system to the local object coordinate system.The 3D amodal bounding box estimation module uses these extracted point features along with the Fpose and applies a T-Net (Figure 6) to infer the coordinates of the 3D bounding box of the object.The loss functions used in the network are the same as proposed in F-PointNets.
EXPERIMENTS
With the proposed method in Section 4, we experimented with the different strategies of pose fusion in PPEF-PointNets with our SSS dataset collected with wearable sensors.
For the experimental evaluation, we have used state-of-the-art pre-trained models.While image detection using YOLOv3 pretrained on COCO was used to detect pedestrians in the frustum proposal step, OpenPose was used to extract poses from the SSS Dataset.As the pose estimation with this multidetector works by detecting poses for multiple pedestrians occupying a single frame in one shot, a post-processing step was included to map the 2D bounding box detections to their corresponding 2D poses.With the detected poses, the handcrafted features Distance Ratio,Optimised Distance Ratio and Optimised Distance with Keypoint Position and Distances were computed.Hence each pedestrian for an RGB image in the SSS Dataset is characterised by a 2D bounding box and hand crafted features from the pose.
To test our network, we train our PPEF-PointNets with the three different feature sets as introduced in Section 4.1.We train for 150 epochs with a batch size of 32 and a learning rate of 0.001.
The training was completed on a Nvidia 1080Ti GPU machine with the dataset randomly split into training (80%) and test sets (20%).
For comparison, we trained a F-PointNets v1 model with the SSS Dataset.This model served as the baseline for the followup comparison.The performance of the network was measured with the AP and AOS as used in the KITTI benchmark (Geiger et al., 2012) to indicate whether the pose feature fusion was beneficial for pedestrian 3D object detection.AP is the Average Precision often used for evaluating object detectors and AOS is the Average Orientation Similarity proposed in KITTI for evaluating 3D orientations.The quantitative evaluation is summarized in Table 1.
As can be seen from the results, the optimised distance features improves (except IoU=0.3) over the baseline in overall 3D pedestrian detection performance.In contrast to the expectations and also as in other works (Yu et al., 2019) where the position and distance of keypoints were used, adding them in ODPD did not show further improvements.This may be due to the fact that such features do not cope well with changes in perspective and different poses of people.
In further, the AP and AOS at different IoU thresholds are visualized in Figure 8.The model using 2D pose information achieved better performance for almost all the IoU thresholds.
To further evaluate the improvement in accuracy, an IoU of 0.1 was set, and all the true positives detected in Optimised Distance (ODR) were compared with the baseline.It can be noted that 63% of the True Positives detected show improved 3D IoU when the pose is added, with a mean improvement of 13%.We consider this as clear evidences for the benefits of integrating 2D handcrafted pose features, as done by our approach.Leveraging reliable 2D pose estimates yields a higher performance for 3D detection.Furthermore, we also achieve a lower error for the orientation estimation.
Qualitative results are presented in Figure 7.The results of the model with the best performance are compared with the results of the baseline approach.From the visualization in 3D, the In order to present the localization performance of pedestrian detection, examples of 3D pedestrian detection are visualized in bird's eye views as in Figure 9.A better performance has been achieved by the proposed method.
CONCLUSIONS AND OUTLOOK
In this paper, we realised 3D pedestrian detection on data collected from wearable sensors.F-PointNet was fused with extra high-level 2D human pose information via experimenting with three types of handcrafted features.The newly proposed pipeline demonstrated a better performance compared to the original F-PointNet.
However, our framework has only been tested with a dataset collected from wearable Hololens device with less complicated indoor scenarios.Investigation of the performance of our approach on newly published indoor pedestrian detection datasets (Shenoi et al., 2020) could be a direction for future work.Also as the 3D detection system is intended for AR device, its real time performance and efficiency has to be evaluated in a next step.
While detection is an important component in collision detection, realising other components of a detection pipeline (e.g., 3D tracking) by including the improved orientation estimates and motion prediction (Cheng et al., 2020) need to be addressed in the near future.Combining this collision detection system with a 3D interface via a wearable device could leverage the power of perception and visualisation and hence controlling the walking behaviour of pedestrians in shared spaces.
Figure 1 .
Figure 1.The whole proposed pedestrian pose based fusion approach incorporating the 2D pose features into the F-PointNets.
Figure 3 .
Figure 3.The RGB (left) and depth (right) data captured by ego pedestrian in the interaction space.
Figure 4 .
Figure 4. Orientation distribution for SSS dataset distributed into 18 bins with a bin size 20°.
Figure 6 .
Figure 6.Bounding box estimation module with pose fusion
Figure 7 .
Figure7.Qualitative comparison of pedestrian 3D detection results using baseline (red bounding boxes on the left) and our proposed approach using ODR features (green bounding boxes on the right).The white bounding boxes are the manually annotated ground truth.
Figure 8 .
Figure 8.The AP and AOS for different values of IoU threshold for ODR compared against Baseline F-PointNets | 5,286.4 | 2022-05-18T00:00:00.000 | [
"Computer Science"
] |
Rodent community patterns and their dynamics in the Chaihe Forest area in Zhangguangcai Mountains
To understand the dynamics of forest rodent community patterns on a time scale, this study conducted a survey in Daqing Forest Farm, Chaihe Forest Region from 2014 to 2016 and analyzed the rodent community pattern and its changes based on the results of Ruyong (1959). The results showed that the species and distribution of rodents varied in different habitats. The biomass of rodents increased and decreased in the plots with low and moderate disturbance, respectively, while it increased in the plots that were disturbed the most.
Introduction
Rodent community ecology has played an important role in animal community ecology for a long time since the research by Zhong (Zhong et al. 1981). Since then, studies on rodent communities have gradually increased in China. Research (Han et al. 2004;Han et al. 2006) has shown that anthropogenic factors increase the community diversity index, climate, landform, soil structure and precipitation and determine the Author-formatted, not peer-reviewed document posted on 10/01/2022. DOI: https://doi.org/10.3897/arphapreprints.e80243 types and diversity of rodent communities (Wu 2014). The rodent community scale undergoes a spatial organizational form of change along with the change in the plant community, particularly in different months; total food, precipitation and other factors in the natural environment change the diets of rodents, which affects the spread of plant seeds and the changes in food consumption, also resulting in a change in the rodent community structure (Askins et al. 1990;Herrert 1994;Vickery 1994;Ford et al. 1999;Helzer et al. 1999;Vieira 1999;Paul 2007).
In 1959, Ruyong conducted a detailed study on the community composition, seasonal variation and vertical distribution of rodents in the Chaihe forest region ). In addition to natural impacts, such as climate change, human activities, such as tree cutting, artificial afforestation, agricultural planting, the construction of houses and roads, and the processing of domestic sewage, have also had a profound impact on the ecological environment of forests. Thus, they affect the survival and distribution of rodents. In this study, relevant sampling sites in the Chaihe forest region were studied again from 2014 to 2016 to analyze the forest rodent community pattern and its changes based on the historical research results.
Place and method Selection of study area and study site
The study area was selected in the Chaihe Forest Region (128°59′30′′E − 129°54′30′′E, 44°47′ 45′′N − 45°37′30′′N), southeast of Heilongjiang Province in the middle and lower reaches of Mudanjiang in the Zhangguangcai Mountains, which are part of the east slope of the Changbai Mountain System. That is the same area where Sun conducted a rodent community survey in 1959. The Chenguang Forest Farm is located in the upper reaches of Sandaohezi. There are broad-leaved forests along the river, with a few coniferous trees, that comprise 10 % of the local area. There are a large number of coniferous and broad-leaved mixed forests in the mountains that comprises 85 % of the local area. Farmland comprises 4 %, and residential areas comprise 1 %. The Daqing Forest Farm is located in the middle reaches of Sandaohezi at the junction of coniferous and broad-leaved mixed forest belt and the broad-leaved forest belt. Mixed forest is the primary type of forest, comprising approximately 60 % of the total area. The broad-leaved forest comprises 30 %; the meadows comprise 9 %, and the residential area comprises 1 %. Erdaohezi is located at the junction of the Mudanjiang River and its tributaries Erdaohezi, which is in the broad-leaved forest belt. Broad-leaved forest is the primary forest type, comprising approximately 50 % of the total area. There are also some forests along the river, comprising 15 % of the total area, 30 % of the arable land and 5 % of the residential area. Based on the habitat characteristics of the sample plots and the habitat classification adopted by Sun (1959), four different types of typical habitats were selected, including coniferous and broad-leaved mixed forests, broad-leaved forests, forest meadows and forest land. Author-formatted, not peer-reviewed document posted on 10/01/2022. DOI: https://doi.org/10.3897/arphapreprints.e80243 To accurately compare the changes in the rodent community, this study was consistent with the method described by Sun (1959). From April to September of each year from 2014 to 2016, the clip-day method was used to study relative quantities. The trap was placed in a straight line with 25 clips per line. The study lasted for two days and nights. The capture results were checked each morning, and the study line was changed after two days and nights. The area was surveyed twice a month. Each type of sample set including approximately 2 to 3 line traps with lines that were 100 m long and spaced approximately 20 to 40 m apart.
Data analytical method
In this study, α diversity and β diversity were used to describe the diversity of the rodent populations. The species number of α diversity, Shannon-Weiner diversity, Simpson diversity and Pielou's evenness indices were used to describe the diversity of rodent populations at multiple time scales in the Chaihe forest region, and the dynamics of changes in diversity were compared. β-diversity is the range of community composition changes, which can be used to describe the diversity of animal communities at different spatial and temporal scales. In this study, the Cody, Sorenson similarity and Whittaker similarity indices were used to describe the temporal dynamics of rodent community turnover and similarity.
Morning light forest farm
The Morning light forest farm is less disturbed by humans and lacks a forest edge, The results for this habitat type are shown in Table 3-1. The dominant species in the mixed coniferous and broad-leaved forests was brown leewards, with a capture rate of 12.25 %, followed by the striped field mice (Apodemus agrarius), with a capture rate of 4.06 %. These rodents were only captured in the mixed coniferous and broad-leaved forests, with capture rates of 0.77 % and 0.09 %, respectively. The total percentage of rodents captured in the mixed coniferous and broad-leaved forests was 17.22 %, which was higher than that in the other two habitats. Only two species were captured in the broad-leaved forest, with a capture rate of 8.42 % and 3.26 %, respectively. Five species of rodents were captured in forest meadows, and the capture rate of brownbacked rodents was 7.63 %. This indicated that they were the dominant species in meadows. The capture rates of the striped field mice and hamsters were 2.05 % and 0.78 %, respectively. Striped field mice and rats (Rattus norvegicus), which are closely related to human activities, were also captured at rates of 0.57 % and 0.14 %, respectively. This may be owing to the fact that the meadows in this region are primarily distributed near human settlements along foothills.
Daqing Forest Farm
The habitat of Daqing Forest Farm is dominated by coniferous and broad-leaved mixed forests, accompanied by broad-leaved forests and a small number of meadows Author-formatted, not peer-reviewed document posted on 10/01/2022. DOI: https://doi.org/10.3897/arphapreprints.e80243 and farmland. The survey results are shown in Table 3-1. The dominant species in the coniferous and broad-leaved mixed forests were brownbacks, with a capture rate of 9.28 %, followed by striped field mice, with a capture rate of 4.97 %. The capture rate of hamsters was 0.12 %. Only two species were captured in the broad-leaved forests, with a capture rate of 6.42 % and 7.21 %, respectively. Six species of rodents were captured in the forest meadow, with a capture rate of 9.06 %, which included the dominant species in the meadow. The capture rate of the striped field mice was 1.49 %, while that of Cricetulus was 0.23 %. That of the striped field mice was 1.49 % and 0.46 %, respectively. Black Lineage in Forest Farmland. The capture rate of rats was 8.53 %, which made them the dominant species; the capture rate of rats was 3.53 %, that of striped field mice was 1.76 %, and that of Cricetulus was 0.88 %.
Erdaohezi Forest Farm
A large area of farmland appeared in the Erdaohezi forest area owing to human disturbance, resulting in serious fragmentation of habitat. Broad-leaved forests composed of Mongolian oak (Quercus mongolica) were the primary forest land, and a small number of meadows and plantations were also found. The survey results are shown in Table 3-1. The dominant species of rodent in the broadleaf forests was striped field mice, and the capture rate was 15.35 %. The capture rate of the buff breasted rat (R. flavipectus) was 1.89 %, and that of the striped field mice was 1.79 %.
Striped field mice are the dominant species in forest lands, with a capture rate of 14.39 %. The capture rate of hamsters, rats and Oriental voles was 7.19 %, 2.54 % and 0.42 %, respectively. Six species of rodents were captured in forest meadows. The capture rate of Cricetulus was 13.07 %; that of the striped field mice was 7.05 %; that of the striped mice was 3.53 %; that of reed voles (Microtus fortis) was 1.87 %; that of rats was 1.24 %, and that of buff breasted rats was 0.21 %.
Species and distribution of rodents
The distribution of various rodents in different habitats differed as shown in Table 3 28.03 % of the total captured species, respectively. The broad-leaved forest was also dominated by brown dorsal horns and striped field mice, comprising 43.19 % and 50.00 % of the total capture, respectively. However, there were more striped field mice than the XX, but there were more striped field mice than brown dorsal horns.
There was a relatively large distribution of rats in the forest meadow, which was composed of 46.40 % of brown dorsal horns, 18.86 % of hamsters, 14.39 % of striped field mice and 13.40 % of striped field mice. Striped field mice, hamsters and rats comprised 58.41 %, 24.34 % and 12.83 %, respectively. Few Oriental voles were caught in the survey. They were only distributed in meadows and farmland. Northern red-backed voles (Myodes rutilus) were only captured in the coniferous and broad-leaved mixed forests of Chenguang Forest Farm at higher altitudes, and none of these species were captured in other plots. Rats only appeared in plots that were near residential areas.
Overall, the capture rate of rodents in the coniferous and broad-leaved mixed forests was the highest in forest areas that were far away from farmland and villages, and crops in the farmland also attracted a large number of rodents. In addition, rats and striped field mice were also distributed in parallel with human habitation.
Therefore, the farmland also had a high capture rate.
Rodent community distribution dynamics
As shown in Table 3 There were fewer Oriental voles in the Daqing and Erdaohezi Forest Farms, and they were not captured in the Chenguang forest farm with coniferous and broad-leaved mixed forests at higher altitudes. Since rats are atypical forest rodents and are associated with human habitation, only a small number of them were captured near the settlements.
Chenguang forest farm is located at the end of the forest road Bachen line, making it the least disturbed by human activities, followed by the Daqing Forest farm.
Both places have a good forest ecosystem. Erdaohezi is the most seriously disturbed by human activities, and the broad-leaved forest + farmland ecosystem is the primary ecosystem. The biomass of rodents in the forest ecosystem of Chenguang Forest Farm and Daqing Forest Farm decreased significantly, while the rodents in the broad-leaved forest + farmland ecosystems of Erdaohezi were highly adaptable, and the biomass of rodents increased significantly.
Changes of α diversity in the rodent community
The Shannon-Wiener diversity index indicated that the diversity of rodents in the Chenguang Forest Farm decreased, and this index decreased from 1.312 to 1.093.
Combined with the survey results of rodents, it was apparent that the decrease in diversity index was caused by a decrease in evenness because the number of species did not change. The main reason was the significant increase in the proportion of striped field mice, which increased from 9.62 % to 24.25 %. The diversity of rodents also decreased in the Daqing Forest Farm. The Shannon-Wiener index decreased from 1.991 to 1.531, which was owing to the decrease in species number and evenness.
Combined with the results of survey on rodents, the proportion of dominant species increased from 37.34 % to 53.90 %, which was the main reason for the decrease in evenness. The diversity of rodents in the Erdaohe River increased slightly, and the Shannon-Wiener index increased from 1.998 to 2.075. The results of survey of rodents indicated that although the number of species decreased by one species, the evenness increased. The proportion of striped field mice decreased significantly from 47.10 % to 31.55 %, which combined with the significant increase in the proportion of Cricetulus from 5.16 % to 23.51 %, were the primary reasons for the increase of Table 3 -4.
Changes in the β diversity of rodent communities
The Cody index can describe the change in rodent species. The Cody index of rodents before and after the Chenguang forest farm with the least amount of human disturbance was 0, indicating that there was no change in the composition of rodent species on this farm from 1959 to 2016. The Cody index of the Daqing forest farm that had been moderately disturbed was 1.50, which was the highest in the three different disturbance plots. Erdaohezi had been interfered with the most strongly, and its Cody index was 0.50. This indicates that the replacement status of rodent species is lower than that of the Daqing forest farm. The Sorenson similarity index of 1.0 for the Chenguang forest farm indicated that it had the least amount of disturbance, and the species composition of rodents did not change. The Sorenson similarity index of the Daqing Forest Farm was 0.769, and the group of rodent species substantially changed.
According to the survey results, five species did not change; two original species disappeared, and one new species was added. The Sorenson similarity index of Erdaohezi was 0.923, and the composition of rodent species was highly similar. The survey results indicate that one species was reduced, and no species increased. The Whittaker similarity index was 0.840 for the Chenguang Forest Farm. The composition of rodents changed to some extent, but the similarity remained high. The Whittaker similarity index of the Daqing Forest Farm was 0.821, and the changes in the composition proportion of rodents were slightly larger than those of the Chenguang Forest Farm.
Discussion
Among the theories about the impact of interference on biodiversity, the moderate interference hypothesis is an important theory, which is supported by many studies (Collins et al. 1995;Morris et al. 2006;Hiddink et al. 2007). A comparison of the diversity of small rodents from 1959 to 2016 using the Shannon-Wiener index showed that the diversity of rodents in Chenguang Forest Farm and Daqing Forest Farm that were less disturbed decreased, and the diversity of rodents in Erdaohezi that had a greater level of disturbance increased slightly. The Simpson diversity index indicated that the rodent diversity in Chenguang Forest Farm and Erdaohe Forest Farm increased, while that in Daqing Forest Farm decreased. The Cody index of rodents in the Chenguang Forest Farm before and after human disturbance was 0, indicating that the species composition of rodents in Chenguang Forest Farm did not change before and after human disturbance. The rodents in Daqing Forest Farm obviously had species replaced, and the Cody index was 1.50. This value was the highest among the three plots with different degrees of disturbance. The replacement status of rodent species in Erdaohezi was lower than that in the Daqing Forest Farm. From 1959 to 2016, rodents in the Chaihe forest area were not subjected to natural and human disturbance according to a certain frequency or a single factor. The characteristics of disturbance were continuous and comprehensive. The disturbance not only affected rodents themselves but also had an important impact on their habitats.
Conclusions
The distribution of various rodents in different habitats in the Chaihe forest area varied. Compared with the survey results in 1959, the richness of rodents in coniferous and broad-leaved mixed forest and broad-leaved forest had decreased significantly, while the richness of rodents in swampland increased. The diversity of rodents in the coniferous and broad-leaved mixed forests decreased significantly.
However, the diversity of rodents in the broad-leaved forests did not change significantly, and the diversity of rodents in the swamp increased. The uniformity of coniferous and broad-leaved mixed forest, broad-leaved forest and swampland increased. The richness of different habitats was meadow > field > coniferous and broad-leaved mixed forests > broad-leaved forest. The evenness index of rodents in different habitats was meadow > broad-leaved forest > field > mixed forest. The biomass of rodents both grew and declined. The biomass of the plots at lower and medium amounts of disturbance decreased, while that of the plots that were disturbed the most strongly increased. | 3,952.2 | 2022-01-10T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Wheeled Rovers With Posable Hubs for Terrestrial and Extraterrestrial Exploration
Space exploration and work such as search and rescue or resource mining is dangerous and often unsuited to manned platforms due to the associated dangers and costs. As a result unmanned wheeled rovers dominate the sectors as they exhibit a lower cost of transport to other legged systems. However, wheels have limited debogging and self-recovery ability if they become stuck. We propose a Posable Hub, where using electric linear actuators instead of a rigid spoke structure, the wheel centre hub can be actively manipulated. We construct a four wheeled rover with Posable Hubs and perform experiments on debogging, chassis levelling for sloped and uneven terrains and generating locomotion with a failed drive motor by converting gravitational potential energy into rotational motion. Our experiments compare the results to classical wheels and validate the superiority of our Posable Hubs for extreme and unstructured terrains such as environments experienced in terrestrial and extraterrestrial exploration.
I. INTRODUCTION
Unmanned robotic platforms are often utilised for applications where a human life may be put in danger, or development and monetary costs are too significant for a manned platform. Common applications for such rovers include urban and non-urban search and rescue [1], [2], disaster relief [3], various mining [4] and most evidently extraterrestrial applications [5].
With the increased interest and ability brought on by technological advances, human kind is more engaged than ever in exploring planetary bodies, moons and asteroids, as well as establishing a permanent base on nearby planets [6]. With this exploration comes a push for more effective and robust locomotion systems that can overcome extreme, unpredictable and unstructured terrain. As a locomotion system's efficiency is highly dependant on the contact it makes with the ground [7], wheels continue to be the locomotion method of choice for platforms designed for these operational environments. Wheels coupled with a suspension system allow desired operation to be maintained [8], as these mechanisms protect the vehicle from mechanical vibrations, prolonging the robot's operational life [9].
The associate editor coordinating the review of this manuscript and approving it for publication was Yingxiang Liu .
However, these systems present certain and unique disadvantages, such as higher launch volume requirements, when compared to a platform with no, or in-wheel suspension, and limited obstacle clearing abilities and slip [10]. A further significant drawback is their lack of debogging ability, as seen with NASA's Spirit rover ceasing operations after becoming stuck in a sand trap, unable to recover [11]. Further, no current wheeled systems allow for chassis pose selection, and very few systems offer redundancy upon motor failure.
Our research proposes a Posable Hub for travel over unstructured terrain. Such a system allows small robots to travel in rough terrain by adjusting the robot pose while not affecting the physical size of the overall wheel, and maintaining a close efficiency to that of classical wheels. Our system further offers the ability to generate locomotion from gravity, should a drive motor fail. We demonstrate the superiority of our wheel in sloped environment traversal, and its debogging ability by allowing the rover to actively manipulate its wheelbase. A rover using our wheels is shown in Fig. 1.
II. RELATED WORK
Robots designed for unstructured terrain traversal usually fall into one of two categories, wheeled and legged systems. Most common systems seen in the field are robots utilising conventional wheels with suspension systems, due to their simplicity and low Cost of Transport (COT). These systems often present certain mobility limitations and hence legged robots, with high numbers of degrees of freedom and superior obstacle clearing abilities are then used. However, legged platforms are of higher mechanical, electrical and control complexity and result in a higher COT.
Most applications of these systems focus on unstructured terrains for terrestrial and extraterrestrial applications, as unstructured terrain can be found all over the solar system. We review systems with abilities to perform space exploration as well as terrestrial applications as our proposed system is well suited for both types of uses.
A. WHEELED PLATFORMS
A wheeled locomotion platform most fundamentally consists of a wheel, suspension unit and a drive unit. A variety of system configurations are commonly used in the robotics field and exact configuration generally depends on the application. These systems are designed for specific uses as important design considerations have to be considered to suite an specific, predetermined application.
Most commonly utilised systems for terrestrial robotics are a double wishbone [12] and Rocker-Bogie [13] suspension. As this system is mounted outside of the wheel it adds to the size of the wheeled system. This outside mounting allows the wheel to have high vertical travel, which directly contributes to the size of obstacles the system can overcome while maintaining surface contact. This is very effective at providing continuous wheel traction over varied terrain [14].
Further, Rocker-Bogie suspension is most commonly used for extraterrestrial rovers and consists of three wheels fixed to two geometric arms that are secured to the vehicle via pivot points [15]. This configuration provides a passive system with an exceptional ground clearance and the ability to drive over obstacles that are as tall as a single wheel [16]. When an obstacle is encountered, the pivot arms allow the leading wheel to translate its forward motion into an upward motion and effectively climb over the obstacle, much taller than the wheel's height. Most notable example of this system used in the field is NASA's extraterrestrial rovers [17].
Configurable wheels and whegs have been proposed and combine desired properties from both wheeled and legged systems to create an alternative design for robotic platforms [18]. In doing so they can be designed for specific use cases and therefore optimised for desired performance properties such as size, weight and shape [19]. The whegs can transform from a round wheel to a leg-like system, significantly increasing their ability to overcome obstacles. Cockroach inspired whegs can climb obstacles of 175% their height [20], and others passively geared whegs up to 240% their height [21].
B. LEGGED PLATFORMS
Other emerging approaches to locomotion over unstructured terrain are legged systems. Legged systems such as Bipeds [22], Quadrupeds [23], [23], Hexapods [24] and other variants are able to navigate such terrain due to their extra degrees of freedom, when compared to a wheel. This allows for greater obstacles to be overcome for similarly sized systems, of wheeled or tracked equivalence.
Legged systems attract uses from robotics for significant off-road uses, due to their exceptional mobility as systems with higher degree of freedoms provide better ability to traverse the surface [25]. However, this significant benefit requires complex control, accurate forward perception and proves inefficient from the perspective of power consumption and time, compared to wheels and tracks. As a results of the COT of legged robotic being significantly higher than wheels, wheels and suspension systems tend to see higher use on exploration rovers.
C. OUR WORK -POSABLE HUB
Our work focuses on extending the ability of wheels, while maintaining a relatively low COT. We propose, design, test and validate a wheel with a movable centre hub. This results in increased degrees of freedom from a traditional wheel, yet limits the power consumption when compared to a leg.
We focus on overcoming obstacles and maintaining a posable chassis for sensitive payloads, as legs are able to achieve. When the extra degrees of freedom, and possibility of the chassis, are not needed, our wheel is passive and consumes no more power than a traditional wheel. This allows for the COT and obstacle-clearing ability to be dynamically chose and adjusted based on the specific operational requirements.
Our system also remains self contained, with all the components located within a wheel. Doing this limits the required storage and operational volume and is significantly lower than a Rocker-Bogie or double wishbone system. The low volume proves beneficial for extraterrestrial applications as the required launch volume can be minimised.
III. SYSTEM DESCRIPTION A. POSABLE HUBS
The Posable Hubs used in the rover construction are originally proposed in [26], [27] and adapted for linear actuators in [28]. The wheels have three Degrees of Actuation (controllable Degrees Of Freedom) in x, y and yaw in centre hubs plane, where y is parallel to the force of gravity and x therefore parallel to the chassis. The yaw rotation of the centre hub is kinematically restricted as it co-rotates with the axle to VOLUME 8, 2020 transfer rotational energy to the outer rim. Motion in y and x is controlled via the three electronic linear actuators mounted to the outer rim and centre hub.
These actuators can be locked in any position, to maintain the low Cost of Transport of traditional wheels. Upon operational requirements the actuators move the centre hub within its workspace as desired to exhibit capabilities not seen on traditional wheels. The wheels are constructed from rigid aluminium and pneumatic rubber tires with a diameter of 600mm. These wheels have a maximum static loading of 600N and maximum dynamic loading of 75N. The actuators used are Actuonix T16 Micro Linear Actuator and make use of a worm drive achieving a maximum stroke velocity of 46mm −s .
B. ROVER CHASSIS
The rover chassis was constructed from aluminium sheets and square profiles to optimise the strength-to-weight ratio. The chassis measures 1000 × 600mm with an 800mm wheelbase. Components such as motor mounts and axles bearings are mounted directly to the frame to maintain rigid structure. The chassis houses an internal payload area of 0.035m 3 to accommodate any further system upgrades or generally payload. The chassis is also capable of carrying external payloads mounted on top of its aluminium skin. The rectangular construction allows for easy component mounting and replacement/upgrades should they be necessary. This is a prototype chassis and was designed for ease of construction and testing, one for a specific application can be designed to optimise certain operational requirement such as an increased payload bay.
1) MOTORS AND DRIVE TRAIN
Each wheel is individually driven by a DC motor via a high torque gearbox. The motors used are rated for 63rpm and 1.76 N.m of torque. Each motor is mounted firmly to the chassis and is connected to the axle via a timing belt with a further 3:1 reduction. The resultant toque on the axle is therefore 5.28 N.m and max wheel rpm is 21.
The motors and the drive trains are mounted firmly to the chassis, however the timing belt is removable. This was designed to allow decoupling between the motor and the axle for simple conversion from a driven axle to a free turning axle. This was required for generation of motorless motion, explained in Section V-C. The motors are controlled via a 17A peak motor controller that receives PWM from the onboard controller. The current of each motor is read by the controller, and fused to ensure safe operating limits are maintained.
2) ELECTRICAL POWER SYSTEM
The electrical power system for the rover is contained on a single printed circuit board (PCB). The PCB encloses the motor controllers for the main drive motors, voltage converters and fuses for further power distribution. The main power comes from an onboard battery, that is distributed to the motor controllers and wheels via the PCB. Each wheel then has further motor controllers enclosed within that power the linear actuators for hub control. The Arduino microcontrollers are powered by the battery and step down the voltage to the required level for data recording sensors. A main power switch and a safety push switch are installed that can cut power should it be necessary.
3) COMPONENT WEIGHTS
The weight of all the components was important and considered during the design process. An emphasis was put on keeping the rover lightweight, in order to minimise the load on the wheel actuators and minimise the power usage. Each component was chosen with consideration of other components, to ensure everything works together as a complete system. Table 1 lists the individual weight of each component, and the overall weight of the system. This weight is used for the Cost of Transport calculations in later sections of this article.
C. SOFTWARE AND CONTROL FRAMEWORK
The rover has an onboard microprocessor, an Arduino Mega. This unit acts as a master controller that takes in input from a joystick, and other onboard sensors to record data. The master also processes the ride height, and chassis horizontal offset positions, and communicates this to each wheel.
Each wheel has an Arduino Nano built in as a slave, that performs the necessary kinematics to determine the position of each of its three linear actuators, based on the ride height and horizontal offset position it receives from the master. The slave also controls each motor controller that in turn controls each linear actuator, and reads the actuator positional feedback from a potentiometer. A PID controller is used to control the actuator positions. Both the master and the slave are programmed in C and use I2C and Serial as the communication protocols to communicate amongst each other.
1) DATA COLLECTION AND PROCESSING
The data was collected using on board current sensors, Inertial Measurement Unit (IMU) and encoders. At the beginning of each control loop, all the data was collected and recorded onto an SD card. The current sensor was placed at the input connection of the battery to the power distribution board.
The IMU was mounted to the centre of the chassis and recorded the acceleration and gyroscopic data in roll, pitch and yaw. Wheel rotations were recorded using a quadrature encoder with an accuracy of 2400 tics per rotation. This rotational position was then used to control the wheel centre offset to maintain the set ride height, and also to record the rotational data of the wheels. The data was processed and smoothed in MATLAB, as well as analysed and plotted.
IV. MODELLING AND FUNCTIONALITY A. FRAMES OF REFERENCE
Frames of reference used for the description of this platform in kinematics begin with an inertial world frame F I with all others enclosed within. Each wheel has a body frame F B that is attached to the internal radius of the wheel rim, and co-rotates with the wheel. The axle mounting of each wheel has a hub F H reference frame attached to the centre of rotation, and co-rotates with the wheel. As the origin of F H (O H ) is also permanently fixed to the axle, it can be used to describe the rotational 2 centre of each wheel, and also the permanent axle mounting position on the extremities of the chassis. To describe the chassis position in F I , a reference frame F K is attached to the centre of the chassis. These reference frames and associated measurements are illustrated in 3. Fig. 3 (left) shows the rover body forward tilt and 3 (right) shows side to side tilt. Maximum tilt about forward X K (θ f ) and side Z K (θ s ) directions is calculated using
B. BODY POSE LIMITS
where r max is the maximum achievable manipulated radius offset and D A−A is the axle-to-axle distance of the wheelbase.
Once the θ imax is know, θ i value is given by where θ i is either θ f or θ s . θ fmax in X K direction, and θ smax in Z K were calculated for our specific platform and are 3.563 • and 4.731 • respectively. θ f and θ s range is then determined by Equation 2 ride height envelopes are given by Normal Ride Height ± | max actuation | for our platform that is min 251.5mm and max of 351.5mm
C. WHEEL RIDE HEIGHT CONTROL ON SLOPES
Each wheel has the ability to adjust its ride height independently of other wheels. This mean that the distance between point O H in B Y and a flat horizontal terrain can be manipulated and actively set. This is useful for precisely controlling the tilt of the chassis on uneven terrain such as slopes. We can calculate the ride height of ith wheel using where RH N is the nominal ride height of the wheel and RH i is dependant on RH i position in Z K and given by where W is the width of the chassis and θ is either θ s or θ f to denote the tilt angle and direction, as show in Fig. 3.
D. COST OF TRANSPORT
Cost of Transport (COT) is used to calculate the efficiency of a transportation system. It is a dimensionless measure, and as VOLUME 8, 2020 a result allows comparison between a wide variety of locomotion systems such as walking, swimming and driving. It is useful for the evaluation of this system as we can calculate COT of the system when actuated, and obtain the base data for comparison of our system to the traditional wheel and DC motor.
COT is calculated via one of two ways using where E is energy used to move the system with mass m a distance of d, under standard gravity g. Or, the energy P used to move the system at a constant velocity v, under standard gravity g.
E. ROVER FUNCTIONALITY
This rover has various functionalities not seen on traditional rovers, the major functions are explained below.
1) RIDE HEIGHT ADJUSTMENT
The ride height of the rover is determined by the wheel radius and the chassis height from the axle. The normal ride height of the chassis is 230mm, and with adjustment of the centre hub, the effective ride height changes with the effective radius in the range of 230 ± 50 mm.
2) BODY LEVELLING
By manipulating the ride height of each wheel whilst in motion on uneven terrain, each wheel acts as its own suspension system, adjusting its ride height to compensate for the uneven terrain. In doing so each wheel maintains the required ride height to keep the chassis level. This technique can be used to maintain a level chassis while traversing uneven terrain.
3) SLOPE TRAVERSAL
Similarly to body levelling on uneven terrain, the same technique can be used to maintain a level chassis when driving on an incline and experiencing sideways chassis tilt. Further, if driving along the gradient of an incline, each wheel can be adjusted to maintain optimal Centre of Gravity (COG) and increasing rover stability by lowering the COG, or by shifting it uphill. The maximum slope able to mitigate through body levelling is determined by the wheel workspace and the chassis geometry, and can be calculated as outline in Section IV-B.
4) WHEELBASE ADJUSTMENT
The wheelbase of the rover is determined by the spacing of its axles which always remain fixed to the chassis. However, the Posable Hubs can change their centre of rotation and rotate about a different point the their geometric centre. This allows for the effective wheelbase of the rover to be manipulated.
The geometric wheelbase for this rover is 800mm, axle-to-axle. However the Posable Hub allows for 100mm of manipulation [26] due to the workspace of the wheel, therefore the effective wheelbase is 800 ± 100 mm.
V. EXPERIMENTAL VALIDATION
For initial experiments and calibration the rover was placed on crates that supported its chassis and allowed the wheels to turn free without loading. After initial calibrations were completed, the rover was placed on a level surface on its wheels. The PID controllers for the wheel actuators were then calibrated under a real world loading environment using standard gravity.
Initial terrain testing and slip modelling experiments were performed in a wooden terrain box to allow interchangeable terrain, and provide a controlled environment. The test bed was just wider than the vehicle and measured 4.8 metres in length. Primary testing in this environment was done using sand and gravel, that were 100mm deep along the test area. This provided a convenient and controlled environment to test the various functionality of the rover under a controlled environment. This setup is shown in Fig. 5 including the terrain box and the rover within.
A. CENTRE OF ROTATION OFFSET EFFECT ON WHEEL ROTATION
The maximum revolutions per minute (rpm) achieved by a wheel is directly proportional to the rpm that the drive train can exert on the axle. A traditional rigid wheel simply rotates at the same velocity as the axle. The Posable Hubs are mechanically coupled to the axle as traditional wheels, however when manipulating their ride height, the actuation speed of the centre hub affects the maximum achievable rotation.
As the wheel does not rotate about its geometric centre, its actuators require time to keep the centre at the desired position. The time required is proportional to the actuation distance, as the further away from the geometric centre the hub is actuated, the greater the distance the actuators have to move. As a result the rover's maximum velocity, while actuating the wheels, is the defined as VA max .
The time t r for the actuators to move the centre hub in a full circle at different ride heights was experimentally measured and recorded in table 2. rpm can then be determined and each value substituted to find VA max using where rpm is the wheel rpm and r is wheel radius. Table 2 shows the speeds obtained. Table 2 shows that smaller ride height offsets are able to be achieved faster, as expected, and be maintained at higher wheel revolutions. Further to this table it is important to note that the motor can exert a maximum of 21 rpm on the axle, as a result the speed limiting component of this drive is the motor. On the extremities of the centre hub workspace, ±50mm, the actuators can maintain the position up to 24 rpm, the motor can only exert 21 rpm, therefore the actuators are of sufficient velocity to not inherit maximum velocity.
B. POSABLE HUB FOR DEBOGGING
Once the Posable Hubs were shown to not impact the maximum speed of rotation, its uses were tested for help with debogging, when the rover became stuck. The rover was placed in a sand environment and held until the front wheels dug a hole and got stuck. The timing belt on the rear wheels was then disconnected to de-couple the wheels from their motors and allow the rear wheels to become free turning. This was done to use the wheel axle encoders to show the true distance travelled, and compare wheel slip of the front (powered) wheels.
Using the rover in a 2-wheel-drive configuration also allowed for the rover to become stuck more easily, as the large radius tires and 4-wheel-drive gives it exceptional grip with the ground and is difficult to bog. Setup is shown in Fig. 5.
It was found that adjusting the effective vehicle wheelbase length helped the rover in becoming unstuck, from a bog that could not be freed by simply rotating the wheels. This required both the front wheels to extend forward, and the read wheel to extend backwards simultaneously. As the maximum workspace reach of the hub inside the wheel is 100mm, the overall wheelbase of the rover can be adjusted by 200mm. This allows for the overall wheelbase adjustment of 25% of the chassis designed fixed size.
Once the rover became stuck, the rear and front wheels were adjusted to contract or extend the wheelbase. This resulted in the front wheels being pushed and pulled out of the hole to regain traction. Fig. 6 shows the results.
As seen in Fig. 6, the front two wheels experienced a significant amount of slip as the rear, undriven, wheels remained stationary. At around a travelled distance of 0.25m the wheelbase was expanded by 100mm, which allowed the front wheels to regain traction and pull the vehicle forward. Once the rover became unstuck, the wheels were actuated again to their normal geometric state, to remove the need for constant readjustment, and limit the current usage. At around 0.75m, the rover became stuck again, the same debogging manoeuvre was performed to regain traction. Note the relationship of the wheel actuation with the magnitude of the current usage is somewhat limited.
Not actuating the wheels uses less current, however does not allow the rover to dig itself out of the hole. Injecting extra energy into the system to actuate the wheels, allows the bog to be overcome and vehicle to continue on its trajectory. In this experiment, the wheel actuation manoeuvre used 13% more power than when the wheels are completely passive, only for the required time. A significant benefit of this is that it only requires extra energy when being performed, once the vehicle is free of the bog, the wheels return to a normal state and use less power, maintaining the low Cost of Transport of wheels while offering the benefit of extra degrees of freedom, when compared to traditional wheels. This experiment demonstrated that when a traditional wheel is stuck and unable to recover, using a Posable Hub to manipulate the rover wheelbase directly contributes to the rover recovering and becoming unstuck.
C. MOTORLESS MOTION
Motorless motion is a concept originally proposed in our previous paper [29]. This concept requires the centre hubs of the wheels to be offset in the horizontal axis, while the rim of the wheel remains stationary. Doing this creates a moment about the geometric centre of the wheel, as gravity is no longer acting through the origin of the axle. This in turn creates an unstable system and forces it to roll in an VOLUME 8, 2020 FIGURE 6. Fig. shows the data recorded during a debogging experiment. Front wheels extender forward while rear extended backwards, to increase the rover wheelbase. Two powered front and two free-turning rear wheels were used to record slip.
under-damped manner, in the horizontal direction of the centre hub offset. This method of generating locomotion requires a free-turning axle, or a clutch to decouple the existing drive motor (performed by manually removing the timing belt).
Paper [29] uses pneumatic cylinders to achieve this, with a focus on slopped terrain. Using a wooden beam connecting two wheels to conduct the preliminary experiments. This papers uses electric actuators to actuate the centre hub, on a four wheeled rover. The overall control system of this version is significantly more accurate and requires less power to run. Testing on a rover with four wheels also extrapolates real world applications and allows direct comparison to traditional wheels with a rotational motor drive.
The experiments performed for motorless motion consisted of using DC motors to drive each wheel and recording the IMU, current and velocity data of the rover driving with no wheel actuation. This required the wheels to act completely passive, with the centre hub locked in the geometric centre of the wheel. This experiment was performed to gather data of the system acting as a traditional rover. A secondary experiment consisted of decoupling the DC motor from the drive axle to allow free-tuning, via removing the timing belt from the drive train. This resulted in all four, previous driven wheels, becoming undriven and completely free turning.
The onboard computer then measured the quadrature encoders for the rotational position of each wheel, and adjusted the horizontal wheel offset accordingly. This resulted in exhibiting a Sustained Driving Gait [29], which required the centre hub position to be constantly readjusted to maintain a smooth chassis motion. These experiments were performed on a structured, continuous terrain of polished concrete, and on an asphalt road. The rover produced motion in the forward and backward direction, no steering manoeuvres were performed. The data is shown in Fig. 7.
Referring to Fig. 7, the left two graphs show data for our actuated wheel, while the graphs on the right show data for a traditional, unactuated wheel. Looking at the bottom two graphs, the DC motor driven wheels achieved a steady state velocity of 0.27ms −1 while consuming 2.3A. Using Equation 7, the Cost of Transport is found to be 0.202. Likewise for the actuated wheel, using no DC motor, the CoT is 0.584 with a velocity of 0.4ms −1 and current usage of 4A.
These differences in the CoT are to be expected as the unactuated wheel only requires a single DC motor to turn the drive train, maintaining its high efficiency. Energy losses in this instance are most evident through bearing/gear friction, motor heat, and noise. There is no evident loss of energy in the transfer of rotation from the axle to the wheel, as the wheel actuators are locked in place via their worm drives.
Using the motorless motion technique, with a free turning axle, the system complexity is increased. The axle is also mounted to pillow block bearings for support, and experiences loss of energy via friction. However, now there are three motors powering each actuator, and two mounting points per actuator. This results in extra friction being created in the geometry of the wheel as its centre hub moves. Each linear actuator also experiences friction and stiction along its entire length of actuation. A higher count of components interact with each other and create heat, therefore more energy is lost compared to a traditional wheel.
A further point made clear by the data in Fig. 7 is that the experiment was able to achieve a faster velocity when using energy from gravity than the specific drive motors used. This is directly because of the limitation of the rotational speed of the DC motor used for a traditional wheel. However, comparing the metres per second per ampere figure, the motorless motion technique achieved 0.1ms −1 A −1 , while the traditional wheel achieved 0.117ms −1 A −1 . This shows that velocity and current usage are proportional and overall not dissimilar between the two locomotion techniques.
Our proposal for the use of motorless motion driving technique is as a redundancy method. A DC drive motor proves more efficient however should the primary drive method fail, the motorless motion can be used with the disconnection of the drive motor. This method can also be used to assist in FIGURE 7. Shows the chassis stability, rover velocity and system current draw of a traditional wheel and DC motor model (right) as wells as a rover using our actuated wheels (left).
slope traversal by inducing further energy into the rover and shifting its centre of gravity uphill.
VI. FIELD DEPLOYMENT
After the initial experiments were successfully concluded, the platform was tested in the field. Tests performed were focused at exploring the use of adjusting the ride height of the individual wheels to maintain a level chassis on sloped terrains. The benefits of this functionality are in increased platform lateral stability, by controlling the chassis COG, the rover is less prone to rollovers on slopped terrain [30]. A sloped paved road was utilised as a test environment, as well as unstructured terrain on an unmaintained sloped track.
Further, if driving along the gradient of an incline, each wheel can be adjusted to maintain optimal COG and increasing rover stability by lowering the COG, or by shifting it uphill. The benefit of this functionality is the increased platform lateral stability, by controlling the chassis COG, the rover is less prone to rollovers on slopped terrain [30].
The experiments involved first driving the rover up and down the sloped terrains, with the wheels non actuated. This allowed for data to be recorded and later used for comparison to a traditional wheel. This was then repeated while the wheels were actively actuated, to hold the chassis level with gravity. Fig. 8 shows the chassis on a sloped grass area, with the wheels not actuated on the right side and wheels actively actuated on the left side. Similarly, Fig. 9 shows the same concept performed on a sloped, off-road track.
Figs. 8 & 9 show the chassis maintained at a completely level pose, as the workspace limits have not been reached. Fig. 10 however shows the slope too great for the rover to fully level the chassis, as the limits of wheel hub workspace has been reached. Limits are calculated in Section IV-B. The data from these experiments is presented in Fig. 11. The left side of the Fig. shows the data from a sideways slope VOLUME 8, 2020 (as shown in Fig. 8) and the right side shows data from a forward slope (as shown in Fig. 10).
From the data plots it is clear that the Posable Hubs are able to significantly manipulate the chassis pose, only limited by the centre hub workspace and the overall chassis geometry. Data from the sideways slope shows the chassis being able to completely overcome the slope, and maintain the body at a level pose while using roughly 75% more current. This has an increase on the Cost of Transport of the rover, however, allows for a greater range of applications.
Likewise a forward slope of 12 degrees was tested on unstructured terrain. In this experiment the chassis was able to be adjusted by 5 degrees, due to the geometry of the system and highly uneven terrain. The system used about 250% more energy than the equivalent experiment with non actuated wheels. This great increase in current is partially due to the uneven terrain, requiring more adjustment, and the changed weight distribution of the chassis, resulting in some wheels experiencing higher loads.
Overall the experiment validated the functionally of the Posable Hubs in an uncontrolled field environment. The effects of slopes on chassis pose are able to be mitigated by the Posable Hubs, with the sacrifice of lower Cost of Transport. However, as the actuation can be controlled, the amount of extra power usage can be controlled to ensure it remains within the acceptable threshold.
VII. PERFORMANCE AND LESSONS LEARNED
Performing the experiments in the controlled test environments of indoor with air conditioning, indoor without air condition-ing and and in the field proved useful as different functions of the Posable Hubs were able to be tested. A variety of terrains and terrain environments yielded some useful insights into the performance of the rover.
A. THERMAL
Initial testing of individual wheels was performed indoors at room temperature, under regulated air humidity and temperature. The actuators demonstrated desired performance and radiated heat into the surrounding air to maintain a cool operating temperature, with and without loading.
Secondary experiments were performed indoors, in an uncontrolled temperature environment with an average temperature of 30-35 • Celsius. This increase in temperature contributed to the degraded performance of the actuators after some time, as they noticeably retained more heat and the motor coils became less efficient. The rover was allowed to cool before further experiments were performed.
Finally, testing the rover in a completely uncontrolled environment proved most difficult. In the field, the average air temperature was 30-35 • Celsius, however as the rover was not undercover, the sun heated up the actuators to a significantly higher temperature. The actuator motor enclosing, and subsequent hub mounting points are constructed from black ABS plastic, this resulted in higher absorption of the sunlight and resulted in overheating of the actuators after minimal use.
Other components inside the chassis did not experience the same issue as the body is made from semi-reflective aluminium, and has four high flow fans to circulate air throughout for cooling. The addition of active cooling for the actuators would improve their performance. Chassis ventilation fans are shown in Fig. 2.
B. MECHANICAL STRESS
The Posable Hub wheels were built using off-the-shelf components with custom 3D printed mounts, and custom control setup. Due to each actuator mount requiring a degree of freedom, and overall wheel having six mounting point for actuators, measures were taken to reinforce these mounts and ensure the wheels in-plane stability is not compromised. For applications such as motorless motion and general chassis pose selection this setup was sufficient in controlling mechanical stresses felt by the rover chassis.
However, for debogging and slope traversal some of the mechanical points did not constrain the actuators enough, resulting in torsional stresses on the wheel, perpendicular to its rotation. This was most noticeable during slope traversal on unstructured terrain, as the wheels would slip and twist.
The current design is satisfactory for proof of concept, however further reinforcement or mounting point redesign is required for unstructured terrain with any increased payload on the rover.
C. CONTROL SYSTEM
Cost of transport was found to be directly linked to the amount of time the actuators were manipulating the position of the Posable Hub. Initial PID controllers were tuned and an overall control system was implemented for the control of the Posable Hubs position. However, as this is a prototype and we present the initial experiments, validating the functionality was a higher priority than tuning the control system.
Further work can be done to optimise the actuator, and overall controllers for the wheels. This can also include new functionality that has not yet been implemented into the system, such as lifting one wheel at a time and partially rotating the wheel before placing it down again, to achieve forward 'walking' with the Posable Hubs.
VIII. CONCLUSION
Space exploration and work such as search and rescue or resource mining is dangerous and often unsuited to manned platforms due to the associated dangers and costs. In the example of extra terrestrial exploration, unmanned rovers dominate the sector due to their lower cost and size as they do not have to house life support for astronauts. Further, due to remote or autonomus operation, no human life is endangered.
Rovers use various locomotion systems with wheels dominating the sector due to their simplicity and low Cost of Transport. However, wheeled system present significant drawback such as their limited ability to debog after becoming stuck in a sandy environment. Suspensions systems and mechanical design can limit this drawback however no solution exist that fully fill the gap with these requirements.
To address this, we propose the Posable Hub system using linear actuators to actively change the centre of rotation of the rim. We focus on constructing a rover utilising four Posable Hubs, theoretical calculations to show system advantages and limitations, and perform a number of field trials to prove the usefulness of the various benefits these wheels provide.
We validate the superiority of a Posable Hub in assisting in debogging operations. As a classical wheel only possesses one degree of freedom (DOF), rotation, it only has one way of getting out of bog, by rotating forward or backwards. Our wheel has three DOF and allows the centre hub to move around in the rim plane. This, as a result, allows for the wheel to be used in a clawing motion, while rotating, to free the rover from the bog by manipulaton its ground contact patch.
A number of further experiments and field trials are performed to evaluate the performance of other benefits of our system. We show the platform driving on various slopes of up to 14 degrees while keeping the chassis horizontally levelled, due to the ability of the wheels to adjust their ride height.
We further demonstrate the ability of the wheels to generate forward and backward linear motion by converting the gravitational potential energy to rotational motion of the wheels. Using the hubs to offset the centre of rotation, instability is introduced into the system and gravity generated a moment, which in turn generates linear motion of the rover, when the rotational drive motors have been decoupled from the axles. This in turn acts as a redundant locomotion technique. | 9,210.6 | 2020-08-21T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Physics"
] |
Correção de astigmatismo misto de alta magnitude com excimer laser em dois tempos cirúrgicos High magnitude mixed astigmatism correction with excimer laser surgery
High astigmatism correction represents a challenge for the refractive surgeon with current available technology. Excimer laser correction should be considered as an option in the available therapeutic arsenal. We report a patient with astigmatism higher than eight diopters to whom it was used a treatment with LASIK (Laser Assisted In Situ Keratomileusis) in two surgical moments, using a new generation of excimer laser with an optimized aspheric profile.
R
egular astigmatism leads to image formation in two per pendicular planes, with two different foci, producing image distortion, shades, or even diplopia.The greater its magnitude, the greater the loss in visual quality and the chance of such symptoms.In mixed astigmatism, for example, image formation occurs in two axes, one anterior and one posterior to the retina. (1,2)isual correction of astigmatism can be achieved using glasses, contact lenses, refractive surgery with excimer laser, intrastromal corneal ring implants (in specific cases, especially in irregular astigmatism secondary to corneal ectasia), intraocular lenses (in patients with concomitant cataract), and even corneal transplantation (in patients who cannot undergo other treatments, e.g.those with severe corneal ectasia). (1)Each case should be assessed in detail and correctly diagnosed before performing any procedure. (3)e report on a patient with high astigmatism, a condition that is uncommon in clinical practice and whose characteristics often hinder a safe and adequate ablation.
CASE REPORT
Male, 32-year-old patient working as a construction foreman, wishing to correct his refractive error.He reported low vision with glasses and intolerance to contact lenses.He had not used any type of visual correction for 5 years.
Ophthalmic examination found an uncorrected visual acuity of 20/200 in both eyes and a corrected visual acuity (CVA) of 20/80 in the right eye (RE) and partial 20/50 in the left eye (LE).Static and dynamic refraction were identical (RE:+2.00-9.00 X10º; LE:+1.50 -8.00 X 175º).Tomography of the cornea and anterior segment with aPentacam™device (OCULUS Optikgeraete GmbH, Wetzlar, Germany), showed anterior corneal astigmatism of 8.3 dioptres in the RE and 7.2 dioptres in the LE(Figures 1 and 2).A detailed clinical examination found no abnormalities in both eyes.
Since the patient had regular astigmatism with no signs of corneal ectasia or other eye conditions, correction withexcimer laser refractive surgery was suggested.There was, however, a technical limitation, as the maximum cylindrical correction with the available equipment, a Schwind Amaris™ device (Schwind GmbH & Co, Kleinostheim, Germany), is seven dioptres.
It was thus decided that correction of the cylindrical error would be done in two steps with a minimum interval of three months between them.The first procedure aimed fora correction of +2.50 -7.00 X 10º in the RE and +2.00 -7.00 X 175º in the LE, with an optical zone of 6.7mm for both eyes.
The first LASIK procedure was performed on February 25 th , 2011 using a Moria M2™ (Moria SA, Antony, France) automated microkeratome, and the flap was created with a 130ìm steel blade.The residual bed after the first procedure was estimated at 352ìm for the RE and 389ìm for the LE.
Three months after the first procedure,static refraction was+0.50-4.00X10º in the RE and -0.50 -2.00X160º in the LE, with both eyes showing a CVA of 20/30p.Atomography of the cornea and anterior segment using a Pentacam™device (Figures 3 and 4) showed anterior corneal astigmatism of 4.2 dioptres in the RE and 1.9 dioptres in the LE.Central pachymetry was 498ìm in the RE and 487ìm in the LE.
The second procedure was performed on July 1 st , 2011.The flap was lifted manually with a surgical spatula.The Schwind Amaris™device was programmed for a correction of +1.00 -4.00 X10º in the RE and +0.25 -2.50 X165º in the LE using the same method (aberration-free) as in the previous procedure.The optical zone was 6.7mm in both eyes.The residual bed after the second procedure was estimated at 331 ìm in the RE and 305 ìm in the LE.
Manifest refraction one month after the procedure was -0.50 -0.50X0º in the RE and -0.50 -0.50X170º in the LE, with a CVA of 20/25p in both eyes.Refraction under cycloplegia on February 27 th , 2012, i.e.one year after the first procedure, was -0.50X0°in the RE and -0.50X 170° in the LE, with a CVA of 20/25 in both eyes.In the last follow-up visit on April 17 th , 2013 refraction values remained stable.Arbelaez and Vidal suggest that improvement in lines of visual acuity can occur in up to one third of operated patients, with good correction of cylindrical components greater than 2 dioptres using Schwind Amaris (6) and Alegretto WaveLight lasers. (7)n the case reported here there was improvement of more than 3 lines of visual acuity in the right eye and 2 lines in the left eye.Both surgical procedures led to significant improvement in spherical and cylindrical components, which is not the general rule in similar cases (8) .For example, Igarashi et al. have shown a greater regressionin spherical components and a greater stability in cylindrical components (9) .
Chiseliþã et al. have shown that in the correction of high astigmatism there is no significant induction of higher-order aberrations. (7)This is particularly the case for optimised aspheric methods with cyclotorsion control. (11)Aslanides stresses that using cyclotorsion correction or compensation leads to better outcomes than when these are not used. (12)Brunson states that when there is an increase in the number of higher-order aberrations greater than 0.35ìm,wavefront-guided treatment is a better option. (13)In the case reported here, we opted for optimised aspheric treatment with cyclotorsion control, as this seemed to be the best surgical approach for our patient.
According to the literature, LASIK treatment can be performed safely when the astigmatism is located on the anterior surface of the cornea, with correction being significantly higher compared to posterior astigmatism. (14)his report illustrates an alternative approach in the surgical correction of a high cylindrical error using two consecutive procedures, sincecorrection of the total error was not possible using the available technology.
Figure 1 :
Figure 1: Preoperative tomography of the RE (composite elevation map of the anterior, posterior, sagittal curvature, and thickness of the cornea).
Figure 2 :
Figure 2: .Preoperative tomography of the LE (composite elevation map of the anterior, posterior, sagittal curvature, and thickness of the cornea).
Figure 3 :
Figure 3: Postoperative tomography of the RE (composite elevation map of the anterior, posterior, sagittal curvature, and thickness of the cornea).
Figure 4 :
Figure 4: Postoperative tomography of the LE (composite elevation map of the anterior, posterior, sagittal curvature, and thickness of the cornea). | 1,535.8 | 2015-04-01T00:00:00.000 | [
"Medicine",
"Physics"
] |
Evidence for herbaceous seed dispersal by small-bodied fishes in a Pantanal seasonal wetland
We analysed the germination of seeds after their passage through the digestive tract of small floodplain fishes. Samples were collected in five open flooded fields of the northern Pantanal in March 2011. All fishes were sacrificed and their intestinal contents were removed. The fecal material was weighed and stored at 4 °C in a GF/C filter wrapped in aluminum foil. The material was then transferred to a receptacle containing sterilised soil from the sampling area. The fecal samples were kept in a germination chamber for 68 days and then transferred to a greenhouse for another 67 days. We collected a total of 45 fish species and 1014 individuals which produced a total amount of 32g of fresh fecal mass and 11 seedlings. We were able to identify six seedlings: two Banara arguta, two Steinchisma laxa, one Hymenachne amplexicaulis and one Luziola sp.. The fish species that produced samples with seedlings were Astyanax assuncionensis, Metynnis mola, Plesiolebias glaucopterus, Acestrorhyncus pantaneiro and Anadoras wendelli. With the exception of B. arguta the remaining plant species and all fish species were not known to be associated with the seed dispersal process of these plants. We found a ratio of 0.435 seedlings.g–1 of fresh fecal material, which is100 times higher than the amount of seedlings encountered in fresh soil mass (92,974 grams) in seed bank studies conducted in the same study area. In particular, Astyanax assuncionensis and Metynnis mola were among the most frequent and most abundant fish taxa in the area. Together with the high seed concentration in the fish fecal material, this evidence allows us to conclude that such fish species may play an important role in seed dispersal in the herbaceous plants of the Pantanal.
Introduction
Life history traits determine the relationship between the number and competitive ability of offspring.However, tradeoffs in plant reproductive strategies are not simply limited to quantitative vs. qualitative aspects of seed production.The limited extent of movement in plants makes access to sites for seedling development a crucial aspect of reproductive effort.
Seed dispersal may be advantageous if it allows the seedling to escape a highly competitive environment, avoid predation associated with seed-rich sites, transport the seed to the right microhabitat (Davidson and Morton, 1981) or allow the establishment of new populations that might persist if the site of the parent population disappears ( Van der Valk and Davis, 1978;Howe and Smallwood, 1982;Schneider and Sharitz, 1988;Nathan and Muller-Landau, 2000;Howe and Miriti, 2004).
Fruit and seed consumption are among the most common seed dispersal strategies.Vertebrates play an important role as dispersal vectors.Seed dispersal by vertebrates may occur due to non-intentional seed collection (adhesive seeds) or by the excretion of viable seeds (Buide et al., 1998;Herrera, 2002).The passage through the digestive trait may exert a positive or a negative effect on seed viability and on the timing of seed germination.For most plants with small seeds, the time spent in the vector's gut tends to be longer.This longer passage of time may cause more damage to soft-skinned seeds, but the seeds may also be transported farther (Traveset, 1998).Typically, the passage through the digestive tract in birds and bats increases the seed germination rate by 55% and 58%, respectively.For reptiles, no effect on the germination rate was found in 56% of the cases examined.If an effect was found, however, it was always positive (Traveset, 1998).Studies of seed dispersal by fishes are scarce.Nevertheless, most authors are aware of the substantial potential of fishes as seed dispersers, particularly in the tropics (Gottsberger, 1978;Goulding, 1980Goulding, , 1983;;Kubitzki and Ziburski, 1994;Waldhoff et al., 1996;Horn, 1997;Pilati et al., 1999;Banack et al., 2002;Mannheimer et. al., 2003;Gomiero and Braga, 2003;Maia et al., 2007;Lucas, 2008;Galetti et al., 2008;Anderson et al., 2009;Pollux, 2011;Anderson et al., 2011;Horn et al., 2011).
Among tropical environments, wetland vegetation may be especially sensitive to fish-mediated seed dispersal (Anderson et al., 2011).Most plant species in wetlands are herbaceous with small-sized seeds that are likely to disperse further than large-sized seeds.However, most studies of fishes as seed dispersers address relatively large fish species and woody plant species.Large fishes are usually not present in wetland floodplains with the exception of the Amazonian igapó forests floodplain (Goulding, 1980), mostly because the waters are shallow and are far from large river beds.It is important to stress that wetland floodplains such as the South American Pantanal are rich in small-seeded plants (mostly herbaceous) and have a highly dynamic patchy vegetation distribution (Junk and Piedade, 1993;Zeilhofer and Schessl, 2000).Another important aspect of the seed dispersal process that is rarely addressed in fish studies is the in situ quantitative effect of fishes in the seedling production.Hereby we considered all fish species as potential dispersers instead of focusing only on the frugivores.We also make a quantitative assessment of small-bodied fish contribution to the seedling production using fishes collected directly from their natural environment.
Our objective is to determine whether the small fishes inhabiting Pantanal wetlands are able to excrete viable seeds.We also address the relative contributions of seed dispersal by fish (ichthyochory) and by water (hydrochory), comparing the amount of seedlings produced by fish feces with the amount of seedlings produced in a nearby soil seed bank study.For this we assume that most seedlings produced from the soil seed bank came from hydrochoric events.Most likely, the seedlings produced from fish fecal material will belong to the herbaceous plant group due to the small size of the seeds.
Study area
The Pantanal wetlands occupy an area of 140,000 km 2 in the central portion of South America.The area has little relief.Its elevation ranges from 100 to 180 m, and its regional hydraulic gradient does not exceed 15 cm.km -1 .The complex hydrography of the Pantanal is associated with a variety of soil types.These physical factors produce a landscape that tends to be organised in patches (Rio de Janeiro, 1974;Girard et al., 2010).
The regional climate is the Köppen AW type, hot and humid with rainy summers and dry winters (Köppen, 1948).The rainfall ranges from 800 to 1400 mm.year -1 , occurring primarily between November and March.The average regional temperature varies between 17° and 32 °C (Brasil, 1997;Fantin-Cruz et al.,2011).
The area sampled in this study is located near Nossa Senhora do Livramento Municipality, in the Cuiabá River drainage.All sampling sites were positioned in flooded fields away from the Cuiabá River overflow zone.The vegetation consisted primarily of Brachiaria humidicola pasture bordered by small bushes with little canopy cover.
Data sampling
We sampled five sites to a depth of 1 m.The sites were located at least 1 km apart (Figure 1).Fishes were collected using a 1 m 2 screen constructed of 2 mm mesh.To collect a sample, the screen was swept close to the ground and then lifted under the floating macrophytes that formed the vegetation.
Fishes were identified by species in situ, washed with clean water and released in a plastic bag filled with clean water.All individuals were kept alive in containers and separated by species until they were processed in the field laboratory.
In the laboratory, the fishes were sacrificed by exposure to a low temperature, and their intestinal contents were removed.The intestinal contents were combined with the sample resulting from the filtration of the water from the plastic bag that had contained the fish.This fecal material sample (FMS) was weighed and stored at 4°C in C in Whatman GF/C 47 mm filter wrapped in aluminum foil.Fish fecal material was grouped by fish species at each sampling site.If a determined fish species was already collected in a sampling site, it was collected nevertheless in the next site based on the premise that the possible species-specific diet variation could produce fecal samples with or without seeds.Each fish species collected was considered a subsample for a sampling site and the fecal material of all individuals was put together in the same germination sampling unit.According to that sampling design any fish species could have up to five replicates if it was collected in all five sampling sites.The material was then transferred to a receptacle containing sterilised soil from the sampling area.
These receptacles were kept in a germination chamber for 68 days and then transferred to a greenhouse for another 67 days.The soil was sterilised by treating it for 90 minutes inside a steam sterilizer under 1 atm pressure.The germination chamber was programmed to simulate a 12 hour day followed by a 12 hour night, with constant temperatures of 28°C for the diurnal period and 26 °C for the nocturnal period.
We investigated the quantitative effect of fishes on the seed dispersal process by analyzing the number of seedlings produced by the total mass of substrate at each sampling site.This number was obtained by dividing the number of seedlings produced for a sampling site by the total amount of substrate mass collected at the site.The fish fecal mass that did not produced seedlings was summed with the mass that produced seedlings to reach the total amount of fish fecal material for site.In order to evaluate the importance of ichthyochory and hydrochory in the seed dispersal process we compared the number of seedlings produced between the FMS and a soil seedbank study performed in the a neighboring area (Pagotto et al., 2011).Because the data were not normally distributed, we evaluated the difference between these values with a Mann-Whitney test.Pagotto and colleagues (2011) work was performed in a nearby area less than 10 km from the sampling sites of our study.
Results
We captured 1014 specimens of 45 species and collected 32 g of feces.Most of the specimens belonged to smallbodied species or were young of the year (Table 1).Astyanax assuncionensis (Gery, 1972), Plesiolebias glaucopterus (Costa and Lacerda, 1988) and Aphyocharax anistsi (Eigenmann and Kennedy,1903) were found at all sampling sites, and 18 other species were found in at least three of the five sampling sites (Figure 2).Plesiolebias glaucopterus, Moenkhausia dicroura (Kner, 1858), Metynns mola (Eigenmann & Kennedy, 1903), Trigonectes balzani (Perugia, 1891) and Serrapinus sp.(Boulenger, 1900) were the most abundant species, representing more than 50% of the total number of fishes collected (Table 1).After 135 days of the germination experiment, the FMS from five fish species had produced a total of 11 seedlings (Table 2).All sampling sites except site B (Figure 1) had fishes that excreted viable seeds.We were able to identify six seedlings: two Banara arguta Briq.from, two Steinchisma laxa (Sw.)Zuloaga and one Hymenachne amplexicaulis from Astyanax assuncionensis FMS, and one Luziola sp. from Plesiolebias glaucopterus.All the other seedlings were monocotyledonous but failed to grow to an age that allowed them to be identified.
The soil seedbank experiment (Pagotto et al., 2011) produced fewer seedlings per gram of substrate than the FMS (Mann-Whitney U=56, p=0.05).Pagotto et al. ( 2011) reported a ratio of 0.004 seedlings.g - of moist soil substrate, whereas our results showed a ratio of 0.435 seedlings.g - of moist fecal substrate (Table 3).
We obtained seedlings from fishes belonging to many different functional groups, including the piscivore Acestrorhyncus pantaneiro, the invertivores Plesiolebias glaucopterus and Astyanax assuncionensis, the herbivore Metynnis mola and the planktivore/detritivore Anadoras wendelli.None of the cited species were known to take part in plant seed dispersion processes.In the case of certain small-bodied species with a limited range of movement across the floodplain (e.g., P. glaucopterus and A. wendelli) seed dispersal from their feces over a distance is relatively unimportant.However, the plants may still benefit from the passage of their seeds through the fish's digestive tract.Considering the high abundance and frequent presence of P. glaucopterus in the region, it is possible that the passage of seeds through the digestive tract of this fish may act as a germination trigger.Although few previous studies have analysed the effect of passage through the digestive tract of fishes on seed germination, the results of these studies indicate that no effect can be demonstrated or that the observed effects are mostly positive (Traveset, 1998).
It is possible that the seedling produced by the piscivorous A. pantaneiro might be the result of accidental ingestion or might represent a seed that was previously ingested by one of the fish's prey.Although this species is widely distributed in the study area, its role as a disperser may be minor due to its sit-and-wait predatory strategy.In contrast, A. assuncionensis and M. mola may play significant roles as seed dispersers, and the former species may be particularly important.A. assuncionensis was found at all sampling sites.Its FAS produced almost one-half of the seedlings and three of the four species or morphospecies among the identified seedlings.In addition, it is a highly mobile fish and may travel throughout the study area.
Most of the seedlings that we obtained were monocotyledonous, with the exception of two Banara arguta plants.This species was expected to be dispersed by fishes due to its well-known association with aquatic environments and fruit consumption by fishes (Morais and Silva, 2010;Costa-Pereira et. al., 2011).Steinchisma laxa is a common herbaceous plant used as pasture in the area and associated with activities relevant to the economy of the Pantanal.Both Hymenachne amplexicaulis and Luziola sp. are aquatic plant species that occur mainly during the flooding season.With the exception of B. arguta these plant species were not known to interact with fishes in their seed dispersal processes.Most herbaceous species have small seeds, which may remain in the digestive tracts of the dispersers for longer periods than bigger seeds (Garber, 1986;Levey and Grajal, 1991;Gardener et al., 1993).This prolonged passage time can increase the distance between the source plant and its seedlings, reinforcing the importance of small-bodied fishes as dispersers.Given that most of the Pantanal vegetation produces fruits and seeds during the flooding season and that small-sized seeds are also dispersed by the wind and the water, it is probable that waterborne dispersal are the primary forces responsible for soil seed bank formation.Nevertheless, this study shows that the seedling germination rate, expressed on a substrate-mass basis, is 100 times higher in fishes than in soil seed bank material.
Fish floodplain density in the area is approximately 290,000 fishes per hectare (Fernandes, 2007).Considering that a fish may retain a seed in its guts for two days it would excrete seeds 45 times during a 90 days typical flooding season.Our results suggest that nearly one percent of the floodplain fishes may excrete seeds.Also most of the viable seeds will produce seedlings in the last two weeks of the flooding season where the water table is already shallow or withdrawing.Considering the number of seed-carrying fishes and the last two weeks of the flooding season the number of seedlings produced by fishes could reach more than 20,000 seedlings per hectare.
Seed dispersal by fishes is particularly important in seasonally flooded environments (Galetti et al., 2008), and we have found evidence of this significance in the Pantanal.In the Pantanal, the terrestrial vegetation is closely linked to the dynamics of the aquatic environment (Girard et al., 2010), and this linkage influences the patchiness of the Pantanal landscape.It is possible that seed dispersal processes usually associated with the water itself might be enhanced by the complementary contributions made by the fishes, particularly in view of the high seed concentration found in the fish fecal material.
Conclusions
We conclude that small-bodied fishes of the Pantanal flooded fields are able to excrete viable seeds which associated with their elevated numbers make them potentially important seed dispersers.
Figure 1 .
Figure 1.Sampling site distribution and geographical position.
Figure 2 .
Figure 2. Occurrence frequency of fishes among sampling sites.
Table 1 .
Fish species presented by sampling site, total species abundance and mean body size.
Table 2 .
Fish species that excreted viable seeds, number of fishes that contributed to a FMS for the seedlings, time elapsed for seedling germination, FMS mass.
Table 3 .
Basic statistics of the comparison between ichthyochory (seeds in fish feces) and hydrochory (soil seebank) seedling per substrate mass production.Laura Paiva, Msc.Silvana Ferreira and Enésio Fransisco for their support in the field and data collection.This work was supported by the resources of LTER -PELD site 12 developed under the CNPq scientific program. | 3,735.2 | 2014-08-01T00:00:00.000 | [
"Biology",
"Environmental Science"
] |
Asymptotic Conversion Point Equations for Converted Waves Reflected from a Dipping Reflector
P-S converted seismic wave exploration plays an important role in detecting complex geologic structures. In this research, we derive two new asymptotic conversion point equations for the P-S converted wave reflected from a dipping reflector. The first is a quadratic asymptotic conversion point equation of the P-S converted wave reflection from a dipping reflector (DACP equation), and the second is a linear asymptotic equation (ADACP equation). DACP and ADACP equations depend on the velocity ratio (VP/VS) of the stratum, the offset (X), the depth (Z) of the conversion point, and the dip angle of the stratum. The last parameter is the most sensitive of the DACP and ADACP equations in determining the conversion point position. The two new equations can predict the conversion point positions on a deep dipping reflector accurately and directly. The accuracy of the conversion point position at shallow depth determined by the DACP equation is better than using the ADACP equation. For a shallow conversion point, for example Z/X = 0.5, the errors of the conversion point prediction in the horizontal distance (CP error) are less than 2% for the DACP equation, but the CP errors are very large for the ADACP equation. If Z/X is greater than 3, the CP errors of the ADACP equation are less than 3% and this equation is more computationally efficient than the DACP equation.
IntRoDuCtIon
P wave reflection seismology has been successfully applied in oil exploration in resent years.Recent applications, however, have been directed to exploring complex structures in small scale reservoirs.Although a P wave can be easily and efficiently generated, its resolution may not be sufficient to meet present needs.For the same frequency, an S wave has a shorter wavelength and provides better vertical resolutions than those of P wave (Geis et al. 1990).However, it is difficult to generate an S wave in seismic exploration, and an S wave can not propagate deeply due to the absorption of the strata.But P-S converted wave exploration is expected to play an important role in improving the resolution of the image of a geologic structure by the recent progress in seismic recording system and seismic data processing.Therefore in some circumstances, a P-S converted wave has a higher image resolution than a P wave (Tatham and Goolsbee 1984;Frasier and Winterstein 1990).The large velocity contrast between salt and sediment can generate a strong P-S converted wave that can be used to delineate the salt base more accurately than an ordinary P wave survey (Lu et al. 2003).For shallow structures, a P-S converted wave image has a higher resolution than a P wave image (Garotta et al. 2003).
The conversion point (CP) of a P-S converted wave on a horizontal interface is not at the midpoint of the offset between the source and the receiver, see Fig. 1.This causes some difficulties for the common depth point (CDP) binning.Tessmer and Behle (1988) derived two equations to delineate the trajectories of the CP with depth from an isotropic horizontal layer: the conversion point equation (CP equation) and the asymptotic conversion point equation (ACP equation).The former equation can predict the CP position exactly, and the latter can only approximate the deep CP position.The trajectory of the CP position with depth is hyperbolic and flattens out to a straight line when the horizontal layer becomes deep (Thomsen 1999).By using the CP and ACP equations, the common conversion point (CCP) binning can be easily implemented in the P-S converted wave seismic data processing.Chang and Tang (2005) extended the CP equation and derived a new quartic equation (DCP equation) for the P-S converted wave reflected from a dipping reflector.This equation can accurately predict the conversion point of the P-S converted wave reflected from a dipping reflector.An analytic solution of the CP equation on a dipping reflector was proposed by Yuan et al. (2006).The CP equation for the refracted P-S converted wave from a dipping reflector was derived by Tang et al. (2007).
A converted wave's DMO (dip move-out) equation can calculate the travel time of a P-S converted wave reflected from an isotropic stratum with a dipping reflector (Ikelle and Amundsen 2005).However, it can not determine the CP position on a dipping reflector.Using a converted wave's DMO correction in the CCP binning will result in an incorrect CP position on the dipping reflector.Therefore, using the DCP equation in the CCP binning can avoid this problem because the DCP equation can determine the correct CP position on the dipping reflector.
At present, the DCP equation, a quartic equation, is time-consuming for computation.The analytic solution of the CP equation on a dipping reflector proposed by Yuan et al. (2006) is complicated and computationally extensive for CCP binning.In this research, we will derive new asymptotic equations of the DCP equation for the deep reflected P-S converted wave.These equations will be more computationally efficient and direct in resolving the CP position for the P-S converted wave reflected from a deep dipping reflector.
AsymPtotIC ConvERsIon PoInt EquA-tIons
In a uniform horizontal isotropic stratum, if the P wave is generated at the source (S) and propagates downward to the bottom of the stratum, shown as Fig. 1, it reflects at the interface and converts to an upward S wave and will propagate to the receiver (G).The reflection point is the CP.The ACP equation is (Tessmer and Behle 1988) where c is the velocity ratio of P and S waves (V P /V S ) of the stratum, X is the offset between the source and the receiver, and X 1 is the surface distance between the source and the CP.
The ray path of the P-S converted wave reflected from a dipping reflector with a dip angle of θ is shown in Fig. 2.
The incident angle of P wave is θ P , and the reflected angle of S wave is θ S .The dashed line ( GM ) is parallel to the dipping interface and passes through the receiver point.It intersects the P wave's ray path at M. The location of the point O on GM is determined by drawing a line that is perpendicular to GM and passes through CP.The lengths of OM , OG , and OCP are X 11 , X 22 , and Z 1 , respectively.Z is the depth of CP.X 2 is the surface distance between the receiver and CP, and If a source is located at M, Eq. ( 1) can be re-written as Grouping X 11 to the left and dividing the entire equation by Ray path of a P-S converted wave reflected from a dipping layer.θ is the dip angle of the dipping reflector.
Fig. 1.Ray path of a P-S converted wave reflected from a horizontal layer.S is the source, G is the receiver, and Z is the thickness of the layer.CP is the conversion point, and X is the offset between the source and receiver.X 1 and X 2 are the surface distances from CP to the source and the receiver, respectively.
In terms of trigonometry, Eq. ( 3) can be re-written as tan tan i i + = + 1 = , then Eq. ( 5) can be re-written as Substitute Eqs. ( 8) and (9) into Eq.(4), then Equation ( 10) is the asymptotic conversion point equation of the P-S converted wave on a dipping reflector (DACP equation), which is a quadratic equation of X 1 .Re-placing X 2 by X -X 1 , the analytic solution (X 1 ) of Eq. ( 10) is (see Appendix A) Since V P > V S , then 0 1 If Z >> X, the terms in the square root of Eq. ( 11), and Then, Eq. ( 11) can be reduced to (see Appendix B) Equation ( 13) is the asymptotic equation of the DACP equation (ADACP equation), which is a linear equation of X.When the dipping reflector becomes horizontal (θ = 0°), the ADACP equation is equal to the ACP equation.
If a stratum's velocity ratio (c) and dip angle (θ) are 2.0 and 20°, respectively, the offset (X) between the source and the receiver is 0.1 km.The true positions of the CP position of the P-S converted wave along the dipping reflector is shown as the solid curve in Fig. 3 which is calculated by the quartic DCP equation (Chang and Tang 2005).The trajectory of the CP position becomes an oblique straight line at the deep depth.The hyperbolic dashed line calculated by the quadratic DACP equation is the asymptotic line of the CP.The straight long-short dashed line is the linear approximation of the CP, which is obtained from the ADACP equation.At the deep depth, the dash and long-short dash lines are close to the solid line, implying that the DCP equation can be approximated by the DACP and ADACP equations.However at a shallow depth, the ADACP equation has larger error than the DACP equation.
The DACP and ADACP equations are derived from the ACP equation, so they have the same assumption.When the depth of the horizontal (or dipping) reflector is greater than the offset (Depth >> Offset), the ACP, DACP, and ADACP equations can predict the conversion point accurately.
DIsCussIon
The ACP equation is a function of the velocity ratio (V P /V S ) of the stratum and the offset (source-receiver).The DACP and ADACP equations are derived from the ACP equation.Therefore, these two new equations depend not only on V P /V S and offset, but also on the dip angle and depth of the stratum.) of the DACP and ADACP equations with different c (V P /V S ) are shown in Fig. 4. The CP errors of the DACP and ADACP equations go to zero when the depth-offset ratio (Z/X) is large, but they become large if Z/X is small.The CP error of the DACP equation seems to be insensitive to c.But for the ADACP equation, greater c has smaller CP error.The CP errors of the ADACP equation for any c are always greater than those of the DACP equation.Thus, the DACP and ADACP equations can predict the CP position very well at the deep depth.The CP errors of the DACP equation are smaller than those of the ADACP equation, especially at a shallow depth.
We are also interested in knowing how the dip angle of the reflector affects the CP error of the DACP and ADACP equations.For a dipping stratum with c = 2.0, the CP errors of the DACP and ADACP equations with different θ are shown in Fig. 5.This result is similar to that in Fig. 4. The CP errors of the DACP and ADACP equations approach to zero when Z/X is large, but they become large as Z/X is small.Except that, when θ is very large (for example θ = 80°), the DACP equation can calculate the CP position very precisely for any Z/X values.The greater θ has smaller CP error for the two equations.The CP errors of the ADACP equation for any θ are always greater than those of the DACP equation.In addition, both the DACP and ADACP equations are more sensitive to θ than to c in terms of the CP errors.
The CP errors of the DACP and ADACP equations will reduce to zero when Z/X is large despite the values of θ and c.In addition, when Z/X = 0.5 and θ > 30°, the CP errors of the DACP equation are less than 2%, but the CP errors of the ADACP equation are approximately 18%.Nevertheless, the CP errors of the ADACP equation will be less than 3% if the Z/X is greater than 3. Therefore, for doing the CCP binning of the "deep" reflected converted wave, it is more timesaving if the linear ADACP equation is used.The DACP equation can be used for the "shallow" seismic data because the CP errors of the DACP equation are less than 2% if the Z/X is greater than 0.5 and the θ is greater than 10°.Yuan et al. (2006) proposed the analytic solution of the converted wave reflected from a dipping reflector which is an exact solution of the DCP equation.But too many variables are needed in their equation.Although the DACP and ADACP equations are asymptotic equations and there are some errors in the conversion point position estimation at a shallow depth, when comparing the other errors which result from the trace binning for the P-S converted wave seismic data processing, those particular errors are quite small.
ConClusIon
We have derived the DACP and ADACP equations from the ACP equation for the P-S converted wave reflected from a dipping reflector.The DACP equation is a quadratic equation and the ADACP is a linear equation.Therefore, in estimating the CP position of P-S converted wave using the DACP and ADACP equations is faster than using the DCP equation which is useful for the CCP binning and move-out correction of the P-S converted wave reflected from a dipping reflector.These new equations depend on the velocity ratio of the stratum, the offset between the source and receiver, the dip angle and depth of the dipping reflector.The dip angle is the dominant factor of the DACP and ADACP equations in determining the CP position.Therefore, the success of using the DACP and ADACP equations in the P-S converted wave data processing relies heavily on the accuracy of determining the dip angle of the stratum.
For the P-S converted wave, Therefore, the solution of the Eq.(A-5) is , Eq. (B-3) can be re-written as For a dipping stratum with dip angle of 30°, the errors of the conversion points in the horizontal distance (
Fig. 3 .
Fig. 3.The trajectories of the CP with depth for the DCP, DACP, and ADACP equations.
Fig. 4 .
Fig. 4. The relationships of the errors of the conversion point in the horizontal distance (CP error) vs. the depth-offset ratio (Z/X) for DACP and ADACP equations with three velocity ratios (c = 1.5, 3.0, and 4.5).The dip angle (θ) of the dipping reflector is 30°. | 3,264.8 | 2010-01-01T00:00:00.000 | [
"Geology"
] |
Mechanisms involved in nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (Nox)-derived reactive oxygen species (ROS) modulation of muscle function in human and dog bladders
Roles of redox signaling in bladder function is still under investigation. We explored the physiological role of reactive oxygen species (ROS) and nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (Nox) in regulating bladder function in humans and dogs. Mucosa-denuded bladder smooth muscle strips obtained from 7 human organ donors and 4 normal dogs were mounted in muscle baths, and trains of electrical field stimulation (EFS) applied for 20 minutes at 90-second intervals. Subsets of strips were incubated with hydrogen peroxide (H2O2), angiotensin II (Ang II; Nox activator), apocynin (inhibitor of Noxs and ROS scavenger), or ZD7155 (specific inhibitor of angiotensin type 1 (AT1) receptor) for 20 minutes in continued EFS trains. Subsets treated with inhibitors were then treated with H2O2 or Ang II. In human and dog bladders, the ROS, H2O2 (100μM), caused contractions and enhanced EFS-induced contractions. Apocynin (100μM) attenuated EFS-induced strip contractions in both species; subsequent treatment with H2O2 restored strip activity. In human bladders, Ang II (1μM) did not enhance EFS-induced contractions yet caused direct strip contractions. In dog bladders, Ang II enhanced both EFS-induced and direct contractions. Ang II also partially restored EFS-induced contractions attenuated by prior apocynin treatment. In both species, treatment with ZD7155 (10μM) inhibited EFS-induced activity; subsequent treatment with Ang II did not restore strip activity. Collectively, these data provide evidence that ROS can modulate bladder function without exogenous stimuli. Since inflammation is associated with oxidative damage, the effects of Ang II on bladder smooth muscle function may have pathologic implications.
Introduction human tissues samples were de-identified and were not used in clinical investigations, their use did not require Institutional Review Board approval under the common Rule of Protection of Human Subjects Regulation. That said, their use was approved by the Temple University Institutional Biosafety Committee (# 10799) and met Biosafety in Microbial and Biosafety Laboratory and OSHSA Standards.
This study also utilized a total of 4 normal control dogs, 3 males and 1 female. The 3 males were mixed-breed hound dogs, 6-8 months old, weighing 20-25 kg (Marshall BioResources, North Rose, NY). The female dog was an adult beagle, 8 months old, that was obtained from Envigo Global Services, Inc. Denver, PA. All experiments performed on dog tissues were approved by the Institutional Animal Care and Use Committee according to guidelines of the National Institute of Health for the Care and Use Laboratory Animals and the United States Department of Agriculture and the Association for Assessment and Accreditation of Laboratory Animal Care. Dogs were group housed according to the institution's standard husbandry with 12-hr exposure to light/dark cycles. The male dogs were sham-operated control animals derived from other larger studies focusing on nerve transfer for pelvic organ reinnervation or heart failure.
Bladder muscle strip contractility studies
Each of the whole human bladders collected from 7 organ transplant donors were used for the in vitro muscle strip contractility studies. The specimens were harvested within 30 min after cross-clamping the aorta and transported to the laboratory within 40 hours immersed in Belzer's Viaspan1 University of Wisconsin organ transport solution on wet ice.
Dissections of all specimens were performed in a cold room (0-5˚C), maintaining the tissues on ice during the dissections. Bladder muscle strips were dissected from the central middle part, at least 1 cm above the ureteral orifices. These dissections were performed using sharp micro scissors and 5x magnifying loops. The mucosa was separated from the underlying layers by sharp dissection, as well as from the peritoneal fat present in human bladders. Muscle strips were obtained with the long axis parallel to the direction of the visible muscle fiber bundles. Strips were clamped between force transducers and positioners and mounted in muscle baths (S1 Fig) containing 10 ml of Tyrode's solution aerated with 95% O 2 and 5% CO 2 at 37˚C.
Strips were initially stretched slowly to 20 mN of isometric tension and allowed to relax to approximately 10 mN of basal tension [30]. Although the relaxation response of the strips was not particularly tested, we did not observe any differences between strips during their predrug baseline responses. Previously it has been reported that the relaxation of human detrusor strips, evaluated using the β-adrenoceptor agonist, was not associated with gender, age, or passive tension (10 mN) and KCl-induced tone [31]. Contractile responses were monitored with isometric force transducers, as previously described [32]. Electrical field stimulation (EFS) of 8, 12 and 24 volts (V), 1 millisecond (ms) pulse duration and 30 Hertz (Hz) frequency was delivered to each strip using a Grass S88 stimulator (Natus Neurology, Inc., Warwich, RI) interfaced with a Stimu-Splitter II (Med-Lab Instruments, Loveland, CO) power amplifier and LabChart1 software (ADInstruments). After a 30-minute equilibration, strips were exposed sequentially to an isotonic buffer containing 120 mM potassium chloride (KCl), which was immediately washed out after maximal responses were produced. Strips were then sorted out for each treatment group according to their responses to KCl such that the mean contractile response to KCl was the same between drug treatment groups. Expectedly, the average responses to KCl were not different between strips that were assigned for each treatment in either humans or dogs (S2 Fig). After re-equilibration for approximately 1 hour, trains of EFS of 1 ms pulse duration, 12 V, 8 Hz at 90 second intervals were applied to each strip for about 20 minutes.
Then, subsets of strips were incubated with either: 1) 100μM of hydrogen peroxide (H 2 O 2 , catalog # H325-500, Fisher Chemicals, East Bunker Court Vernon Hills, IL); 2) 100μM of apocynin, an inhibitor of Nox enzymes and ROS scavenger (catalog # 178385, Calbiochem-Milli-poreSigma, Sigma-Aldrich Inc., St. Louis, MO); 3) 1μM of the Nox activator angiotensin II (Ang II, catalog # ALX-151039-M005, Enzo Life Sciences, Inc., Farmingdale, NY); or 4) 10μM of ZD7155 hydrochloride, an AT1 receptor specific antagonist (catalog # 1211, Tocris Bioscience, Minneapolis, MN), each for 20 minutes in continued trains of EFS. Next, the same subsets of strips treated first with antagonists (treatment #1) were treated with either H 2 O 2 or Ang II (treatments #2 and/or #3). For that, subsets of strips were sequentially treated with: 1) apocynin (treatment #1) for about 20 minutes and then with either H 2 O 2 (treatment #2) or Ang II (treatment #2) for similar time frames without washout of the apocynin; 2) apocynin (treatment #1) for about 20 minutes, then H 2 O 2 (treatment #2), then followed by Ang II (treatment #3), each for about 20 minutes without washout of the earlier treatments; or 3) ZD7155 (treatment #1) for about 20 minutes, followed by Ang II (treatment #2) for about 20 minutes without washout of the ZD7155. Each treatment was for about 20 minutes, or when either the maximum contraction or maximum inhibition was achieved or when the tension returned almost to baseline levels in case of treatment with either H 2 O 2 or Ang II. Responses to 30μM of the muscarinic receptor agonist bethanechol (catalog # 1071009, Sigma-Aldrich, Saint Louis, MO) were then determined in the continued presence of the previously added drugs (treatments #1─3), to test the viability of strips at the end of each experiment (S3 Fig). Responses to bethanechol were not different between strips that were subjected to different drug treatments in either humans or dogs. All responses were measured as tension and expressed in milli Newtons (mN).
As was previously reported, no sex differences were found in strips from male versus female humans [30]; therefore, data from these muscle strips were grouped together. Sex differences could not be examined in dog tissues because samples of convenience were used (i.e., only 1 female dog versus 3 males).
Measurement of superoxide
Superoxide levels were measured in homogenized dog bladder muscle using lucigeninenhanced chemiluminescence which shows the ability of the smooth muscle tissue to generate superoxide. For this, we used adjacent mucosa-denuded muscle segments of dog bladder tissue (obtained as described earlier). Briefly, mucosa-denuded muscle segments were cut into small pieces, transferred into cryovials, immediately flash-frozen using liquid nitrogen, and stored at -80˚C until use. Each piece was ground into powder using a mortar and pestle. The sample was put into the mortar with a small amount of liquid nitrogen, after which once evaporated, the muscle piece was ground into powder. The powder was collected and transferred to a prelabelled eppendorf tube, and vortexed 3 times, for 1 min each, using 1X Hank's Balanced Salt Solution (HBSS, catalog # 14175-095, Gibco, Thermo Fisher Scientific, Green Island, NY). This was performed without centrifugation since total muscle homogenates were required.
The protein concentration of each sample was measured on each assay day for higher accuracy using a Pierce™ BCA Protein Assay Kit (catalog # 23227, Pierce, Rockford, IL). For the lucigenin assay, 10 mM of dark-adapted lucigenin (10,10'-dimethyl-9,9'-biacridinium, dinitrate, catalog # 14872, Cayman Chemicals, Ann Arbor, MI) was diluted in a sample buffer solution (0.8 mM MgCl 2 and 1.8 mM CaCl 2 in HBSS) to a final concentration of 25μM in the buffer solution. Primarily, lucigenin is a chemiluminogenic substrate that upon oxidation gives high yield of light-emitting products (photons) and chemiluminescence [33,34]. In the wells of a white opaque 96 well microplate (Nunclon™ Delta Surface, Flat-Bottom Microplate, catalog # 136101, Thermo Scientific, DK-4000 Roskilde, Denmark), 25μl of each sample's total homogenate were added into triplicate wells. Then, 115 μl of sample buffer without lucigenin and 40 μl of sample buffer with lucigenin were added to each well with a final lucigenin concentration after NADPH addition (see later text) at 5μM. The plate was then placed into a luminometer plate reader (GloMax1 Discover Dual Injectors with Pumps, catalog # GM3030, Promega, Madison, WI) that had been warmed to 37˚C. Basal levels were determined by measuring the light emitted from each well over a period of 15 min. Then, 20μl of 1mM NADPH (made fresh on each assay day; sodium salt, catalog # 9000743, Cayman Chemicals, Ann Arbor, MI) was injected into each well using the dual injectors to final concentration of 100μM (which triggers a high increase in ROS instantly after addition [35][36][37]. The plate was then read again over a 15-minute period. Next, 4μl of 1M Tiron solution (4,5-dihydroxy-1,3-benzene-disulfonic acid, a superoxide scavenger, catalog # ab146234, abcam, Waltham, MA) was added to each well using the dual injectors to a final concentration of 20 mM in each well, and the plate was reread over a 15-minute period. The plate was run at 37˚C during all plate reads. Reads are reported as relative light units (RLU) emitted over time (i.e., the photoemission was assayed). The amount of superoxide produced over time was calculated as follows: the values for light units obtained over the measured period for each run (basal, NADPH, and Tiron) were averaged so that for each well, there are 3 values per run. We divided these by 25 (since there was 25μl of sample per well) to get the mean light units (MLU) per μl, per well. We then divided by the protein concentration (microgram per μl) to calculate MLU per microgram of tissue. To then calculate the "actual increase in ROS", we subtracted the basal value from the NADPH and Tiron values.
Statistical analyses
Statistical analyses were performed using Prism version 9.4.1 (GraphPad Software, La Jolla, CA). Data are presented as mean ± 95% confidence intervals (CI). P-values were adjusted for multiple comparisons whenever applicable and values of 0.05 or less were considered statistically significant for all analyses. Numbers of human or animal specimens per group and per treatment (indicated as "N"), and the numbers of muscle strips per treatment (indicated as "n"), are listed in all figures. A repeated measures mixed-effects REML (Restricted Maximum Likelihood) model was used to compare treatment results, using the factors drug treatment (pre-versus post-treatment), and species (human versus dog). This was followed by Sidak's multiple comparison post hoc tests to determine differences between groups. Adjusted p values are reported.
Exogenous ROS, hydrogen peroxide (H 2 O 2 ), enhanced EFS-induced bladder strip contractions in both species
The effects of application of the ROS, hydrogen peroxide (H 2 O 2 ), at the physiological concentration of 100μM, enhanced EFS-induced smooth muscle strip contractions similarly in both species in the mixed-effects statistical model (treatment effect, p = 0.004; species effect, p = 0.8). Post hoc analyses showed in human bladders that 100μM H 2 O 2 enhanced EFSinduced smooth muscle strip contractions, compared to EFS-induced contraction before application of the H 2 O 2 (5.8 ± 6.0 pre-H 2 O 2 versus 8.3 ± 8.4 post-H 2 O 2 ; p = 0.04, Fig 1). Similarly, in dog bladders, 100μM H 2 O 2 increased the EFS-induced contractions (5.4 ± 2.2 pre-H 2 O 2 versus 8.2 ± 1.8 post-treatment; p = 0.04, Fig 1). It has been known that different levels of H 2 O 2 induce specific intracellular responses [38][39][40]. In our studies, H 2 O 2 at a concentration of 100μM was added to the muscle bath as exogenous agent and maybe it isn't produced physiologically in our in-vitro system. Although, we did not test smaller concentrations of H 2 O 2 , since the concentration that we tested occurs within the low range of H 2 O 2 that has been indicated as a physiological concentration or as an optimal extracellular, sub-toxic, or non-lethal concentration [41][42][43][44][45][46]. In human airway epithelium, it has been shown that H 2 O 2 at a concentration of 100μM enhances intracellular ROS production without affecting their viability, proliferation or morphology, while at the higher concentrations of 300 and 500μM, H 2 O 2 significantly induces cell death hours after treatment [47]. For future studies, it may be of interest to test different concentrations of H 2 O 2 .
Also, the addition of H 2 O 2 into the muscle bath caused direct strip contractions in bladder strips that were similar in both species (Fig 2A-2C; mixed-effects model: treatment effect, Fig 2B and 2C).
The NADPH oxidase (Nox) inhibitor and ROS scavenger, apocynin, attenuated EFS-induced bladder strip contractions in both species
We next examined the effects of inhibiting ROS generating enzymes using apocynin at the concentration of 100μM on EFS-induced muscle contractions. The mixed-effects statistical model showed a treatment effect (p = 0.04), yet no differences between humans and dogs (species effect, p = 0.5). Post hoc analyses showed that apocynin (100μM) attenuated intrinsic muscle strip activity in bladders from humans, compared to the strips' pre-application results (8.1 ± 6.5 for pre-apocynin versus 4.2 ± 4.6 for post-apocynin; p = 0.008, Fig 3A). As described in the methods, this was followed by additional sequential treatment with H 2 O 2 (a representative trace showing the treatment sequence is shown in Fig 3B). In the human bladder strips, treatment with H 2 O 2 (treatment #2) following apocynin treatment (treatment #1) slightly enhanced EFS-induced muscle contraction (4.2 ± 4.6 for post-apocynin versus 5.4 ± 3.6 after treatment with H 2 O 2 ; Fig 3A). This H 2 O 2 treatment result did not differ significantly from either post-or pre-apocynin results (p = 0.6 and p = 0.3, respectively).
Key Nox activator, angiotensin II (Ang II), increased EFS-induced contractions in dog bladder strips only
Additionally, we examined the effects of administration of a key Nox activator and a pro-inflammatory peptide Ang II (1μM), since it had been demonstrated that in vascular smooth muscle cells, Ang II increases H 2 O 2 levels. The mixed-effects statistical model showed a treatment effect (p = 0.01), yet no differences between humans and dogs (species effect, p = 0.8). However, in human bladders, the post hoc analyses showed that application of Ang II (1μM) did not significantly enhance EFS-induced contractions (7.6 ± 9.1 for pre-Ang II versus 10.3 ± 7.5 for post-Ang II, p = 0.1, Fig 4). In contrast, in dog bladders, Ang II enhanced the EFS-induced contractions (5.0 ± 3.9 for pre-Ang II versus 10.1 ± 5.3 for post-Ang II, p = 0.03, Fig 4).
We sought to find a difference in the human versus dog results. Knowing that there were age differences between the human donors (which ranged between 22 and 57 years) and dog donors (which were of similar age), we correlated the Ang II EFS-induced contraction results with donor age. We found that the post Ang II treatment results from the human bladder muscle strips correlated strongly and negatively with the age of the donor (r = -0.82, p = 0.01). There was no correlation in the dogs.
Ang II increased direct muscle strip contractions significantly in human bladder strips
In contrast to the above results, Ang II greatly induced direct muscle strip contractions in bladder strips. The mixed-effects statistical model showed a treatment effect (p = 0.0006), a species effect (p = 0.02), as well as a treatment x species effect (p = 0.04). The post hoc analyses showed in the human bladder muscle strips that Ang II greatly augmented direct muscle strips' contractions (4.2 ± 0.8 for pre-Ang II versus 33.5 ± 16.2 for post-Ang II, p < 0.0001, Fig 5A and 5C). Ang II also caused contractions of dog bladder muscle strips (2.8 ± 1.3 for pre-Ang II versus 12.0 ± 8.7 for post-Ang II, p = 0.04, Fig 5B and 5C). The post-treatment effects of Ang II were statistically significantly different between human and dog bladder muscle strips (p = 0.004).
Ang II treatment after apocynin treatment enhanced EFS-induced contractions in both species
We further explored the effects of treatment with Ang II added sequentially after apocynin treatment, or apocynin and then H 2 O 2 treatment, before the Ang II treatment (Fig 6A-6C, with representative tracing of the treatment sequence shown in Fig 6B and 6C). The mixedeffects statistical model showed a treatment effect (p = 0.04), yet no species effect (p = 0.2) or treatment x species effect (p = 0.2). The post hoc analyses showed a pre-versus post-apocynin treatment effect of depression of the EFS-induced strip contractions (Fig 6), similar to that seen in Fig 3's experiment. Yet, the secondary treatment with Ang II showed an enhanced EFS-induced strip contractions that was back to the control levels, compared to post-apocynin treatment in both human (12.6 ± 11.8 versus 4.4 ± 4.6, p = 0.4) and dog (6.5 ± 3.4 versus 3.2 ± 1.1, p = 0.3) bladders (Fig 6A). A similar recovery from the apocynin reduced contractions was observed after the Ang II treatment, regardless of whether H 2 O 2 was included prior to the final Ang II treatment or not (see representative traces in Fig 6B and 6C).
The AT1 receptor-specific inhibitor, ZD7155, attenuated EFS-induced contractions
We next examined the effects of administration of AT1 receptor specific antagonist, ZD7155 (10μM) on EFS-induced activity. The mixed-effects statistical model showed a treatment effect (p = 0.0007), but no species difference (p = 0.8) or species x treatment effect (p = 0.8). The post
ROS levels in dog bladder muscle tissue
Adjacent dog bladder muscle samples were prepared as total homogenates for lucigeninenhanced chemiluminescence assays. We found that the addition of NADPH (100μM) to lucigenin-containing buffer enhanced the lucigenin signal in the samples over background by stimulating ROS production (p = 0.001, Fig 8A and 8B). The ROS levels in response to the superoxide scavenger, Tiron (20 mM), were lower than that elicited by NADPH (p = 0.04, Fig 8A), suggesting that NOX is a significant source of superoxide in dog bladder muscle in response to NADPH exposure.
Summary of objective and results
Utilizing in vitro studies, we aimed to explore the physiological role of ROS/ Nox in regulating muscle function in bladders collected from humans and dogs with no known bladder pathologies. The exogenous ROS, H 2 O 2 , enhanced EFS-evoked contractions (and also directly enhanced muscle strip contractions), while the Nox inhibitor and ROS scavenger, apocynin, attenuated the EFS-induced contractions. Treatment with H 2 O 2 following apocynin treatment improved the EFS-induced contractions. The enhancement of EFS-evoked contractions by H 2 O 2 and the inhibition of these contractions by apocynin demonstrates the functional relevance of ROS in regulating human and dog bladder smooth muscle activity and suggests that endogenous Nox-derived ROS regulates smooth muscle function. The Nox activator and inflammatory mediator, Ang II, known to act via the AT1 receptor [48,49], enhanced the EFSinduced contractions in dog, but not human bladder muscle strips (Fig 4), yet induced direct strip contractions in both species (Fig 5). Also, treatment of apocynin-treated strips with Ang II restored EFS-induced contractions in both species. Blockade of the AT1 receptor using a specific inhibitor, ZD7155, reduced the EFS-induced contractions in both species. The augmentation of contractions by Ang II suggests that activation of Nox via a receptor's ligand can also enhance smooth muscle activity, while the inhibitory effect of the selective antagonist ZD7155 indicates that the effect of Ang II is mediated by the AT1 receptor.
Effects of the exogenous ROS, H 2 O 2 on EFS-induced contractions
We observed enhanced EFS-induced contractions by the exogenous ROS, H 2 O 2 in strips from both human and dog bladders (Fig 1). Similarly, H 2 O 2 treatment enhances EFS-induced contraction of cat tracheal strips [18], and isolated rat bronchi [22], and application of a superoxide generating compound, pyrogallol, enhances EF-induced contraction of rat mesenteric arteries [50]. H 2 O 2 treatment potentially leads to increased sensitivity to trains of EFS through the stimulation of intramural nerves and membrane-bound receptors. For example, EFS can excite detrusor smooth muscle strips directly by the release of neurotransmitters acetylcholine and/or ATP, dependent on the species [51, 52]. Nerve-evoked contractions at frequencies of 8-32 Hz appear to occur predominately by the response to released acetylcholine in urinary bladders [53]. The amount of the acetylcholine released endogenously from postganglionic nerves during EFS is frequency-dependent and correlates with the observed contractile force of detrusor muscle [54].
Even though H 2 O 2 is a naturally occurring oxidative compound [55], it can cause damage to cellular and intracellular components [1,20]. High concentrations of H 2 O 2 (exceeding 300μM) may damage smooth muscle contractile protein function and cause a net reduction of contraction despite raising intracellular calcium concentrations [56]. In muscle strips from rabbit bladders, Francis and colleagues showed that contractile responses of rabbit bladder strips to field stimulation, at the frequencies of 2, 8, and 32 Hz, were decreased by increasing H 2 O 2 concentrations [57]. An inhibitory effect on contractility was observed at H 2 O 2 concentrations exceeding 10 mM, presumedly due to a loss of sensitivity of the contractile proteins to calcium that would reduce net muscle contractility. Additionally, a study in pigs revealed that bladder smooth muscle tissues become susceptible to oxidative stress induced by ROS (Cumene hydroperoxide, 0.1-0.8 mM, lipophilic hydroperoxide). This effect was proposed to involve muscarinic receptor destruction and consequently, a reduction in strip contraction, yet no apparent effect on the cholinergic nerves as they still responded to the electrical stimulation at the frequencies used (8 and 32 Hz) [53]. Our lower H 2 O 2 concentration of 100μM [16,60]. It was reported in cat trachea that the increased intracellular calcium produced by H 2 O 2 was associated with a slow increase in baseline muscle tension and augmentation of EFS-evoked contractions [18]. We cannot rule out such effects of H 2 O 2 during EFS that involve nerve activity and receptor activation, since these were not examined in our experiments.
Other possible reasons for our observed enhancement of EFS-induced and direct contractions of the muscle strips by H 2 O 2 , rather than the inhibition seen in the Francis et al. study in rabbit bladder [57] could be the different stimulation parameters used. The setup of the EFS train in the Francis et al. study was different from what we used in our study (50 V versus 12 V stimulus intensity, 2 ms versus 1 ms pulse duration, 10 versus 90 sec for the inter-train interval, frequencies of 2, 4, 8, 16, and 32 Hz versus only one frequency of 8 Hz). The 12 V used in our study was enough to produce maximal tension, yet apparently low enough to avoid any tissue fatigue during the 20 min of testing. Also, we used mucosa-denuded human and canine detrusor strips rather than full thickness rabbit bladder strips used in the Francis et al. study. We also only examined the effects of a single dose of H 2 O 2 (100 μM) on EFS-induced contractions.
Direct effects of H 2 O 2 on muscle strip contractility
The direct effect of H 2 O 2 (Fig 2A-2C) is likely due to intracellular oxidative effects [55] and suggests that this reactive oxygen radical is playing a role in the contractility of human and dog bladder smooth muscle strips, as has been shown in rats [61]. Also, H 2 O 2 has been shown to contract dog lung strips, bovine trachealis muscle strips [21], and rat airways [22]. These results indicate that H 2 O 2 did not affect the ability of strips to produce maximal force, as previously reported [56]. A study in pigs revealed that bladder smooth muscle tissues can become susceptible to oxidative stress induced by ROS, and that the effect was attributed to muscarinic receptor destruction and consequently reduction in strip contraction [53]. This stress may explain the gradual decline in contractility of the muscle strips with continued exposure to H 2 O 2 in both human and dog bladders.
Effects of apocynin on EFS-induced muscle contractility
In both human and dog bladders, EFS-induced muscle contractility was attenuated by treatment of muscle strips with apocynin (Fig 3). Apocynin is an inhibitor of Noxs and a scavenger of non-radical oxidant species, such as H 2 O 2 [62][63][64][65]. Apocynin inhibits the assembly of Nox responsible for ROS production [66], an inhibition that reduces intracellular ROS generation by Nox [67]. Furthermore, it has been reported that the Nox enzyme responsible for ROS production is fundamentally inactive in resting conditions, and that EFS elicits increases in ROS that can be blocked by apocynin [68,69]. Our data and these past findings combined suggest that under physiological conditions, some ROS are released endogenously, in part from Nox enzyme activity, that then contribute to the normal EFS amplitude so that when apocynin is added, this endogenously and spontaneously released H 2 O 2 or other superoxide is suppressed (different studies reported the additional off-target effects of apocynin as a scavenger of nonradical oxidant species) [62,63]. Apparently, apocynin can modulate strip function and ROS activity via multiple mechanisms speculated as its actions [65]. With its lack of specificity, it is reasonable to anticipate that the inhibition of EFS by apocynin does not specify it as the only evidence for the role of Nox in regulating bladder muscle function. Thus, endogenous ROS generated from Nox enzymes can regulate smooth muscle function and modulate key bladder functions without exogenous stimuli, further evidence for the functional significance of ROS in bladder function.
We next examined if H 2 O 2 treatment following apocynin treatment could counteract the apocynin attenuation of EFS-induced contractions (Fig 3). We found that treatment with H 2 O 2 following apocynin treatment restored the apocynin suppressed EFS-induced muscle strip contraction, consistent with reduced ROS production due to the inhibitory effect of apocynin on Nox enzymes which can be restored by supplementing exogenous ROS, which could suggest that H 2 O 2 -induced contraction did not involve Nox activation. The exogeneous ROS (H 2 O 2 ) bypasses the Nox activation and directly acts on the contractile machinery. On the other hand, the inability of apocynin to counteract the effect of H 2 O 2 shows that apocynin does not act as a non-specific antioxidant against H 2 O 2, rather it acts as an inhibitor to directly suppress Nox activity. However, Figs 1 and 2 show that H 2 O 2 enhances muscle contraction (both EFS and direct), as has another study using rat bladder [61]. Next, due to the specificity of apocynin as an inhibitor of Nox [64], and based on findings by this study and others that EFS elicits increases in ROS that are blocked by apocynin [68,69], we suggest that the initial effects of apocynin treatment shown in Fig 3A is the suppression of endogenous ROS from the Nox enzymes, and that the response to an exogenous application of H 2 O 2 then bypasses this inhibition and mirrors that seen without apocynin pre exposure (Fig 1). We suggest that these results confirm the involvement of extracellular ROS in activating the contractile machinery in strips of human and dog bladders, and that the muscle strips are subjected to ROS and Nox regulation.
Effects of Ang II on EFS-induced muscle contractility
The observed enhancement of contractile responses to EFS by Ang II (1μM) in dog bladder strips (Fig 4) is in agreement with findings of previous in vitro experiments on urinary bladder smooth muscle from several species that revealed that Ang II is a potent contractile agent in this tissue [70][71][72][73][74]. In addition, the enhancement of contractile responses to EFS by Ang II has been well documented in vascular tissues of several species [50, [75][76][77][78][79]. Ang II is known as a potent stimulator of vascular ROS generation and that the potentiation of EFS-induced contractions by Ang II is mediated by superoxide production [80][81][82][83]. Additionally, it had been demonstrated that in vascular smooth muscle cells, Ang II increases H 2 O 2 levels indirectly through the activation of Nox enzymes and consequently superoxide anion production [23,64,[84][85][86][87], and that the effect of superoxide is mediated, at least in part, through its conversion to H 2 O 2 [23,37,88,89]. Surprisingly and different from dogs, Ang II did not enhance EFSinduced contractions in human bladders (Fig 4). It is worth noting that besides the natural variations between human subjects (we examined 7), we had a much wider variation in ages in the human donors (22 to 57 years) than in the dogs which were of similar age (6-8 months). The post Ang II treatment results correlated strongly and negatively with the age of the human donor (r = -0.82). Thus, many variables, including age, might contributed to the difference in the effect of Ang II on EFS in these two groups.
Direct effects of Ang II on muscle strip contractility
Bath-applied Ang II (1μM) caused an increase in strips' basal tension that reached a peak before gradually declining in continued exposure to Ang II in both human and dog bladders (Fig 5A and 5B). In isolated human detrusor muscle, it has been reported that in the continued presence of Ang II, desensitization of the functional response occurs, and that repeated administration of Ang II after a previous administration fails to initiate contractions (i.e., tachyphylaxis) [70,90]. Therefore, to avoid such condition and to minimize errors during data interpretation, we only used a single dose of Ang II (1μM) per single muscle strip. The mean contractions of humans and dog strips following Ang II application (Fig 5C) matches results of several in vitro studies showing that Ang II causes contraction of human, canine, and rabbit bladder muscle [70,72,91,92]. It is well documented that exposure to Ang II mediates its muscle contractile effect, at least in part by stimulating the activation of smooth muscle Nox enzymes implicated in mediating Ang II effect by the generation of ROS, and consequently superoxide production [23,37,82,93]. Changes in intracellular calcium concentration have also been proposed as the main mechanism involved in the regulation of Ang II-induced smooth muscle contraction [94,95].
We show here for the first time that the suppressive effects of apocynin on EFS-induced muscle contractions can be restored by Ang II treatment (Fig 6). We also observed the enhancement of direct contractions after the addition of Ang II following apocynin treatment. Studies have shown the excitatory effect of Ang II on EFS-induced muscle contractions is eliminated by apocynin [50,96]. Also, contractility studies of human blood vessels showed that enzymes other than Nox play a role in Ang II-induced superoxide production, since inhibitors of these enzymes blunted Ang II mediated EFS-induced contractions [93,97]. In addition to the possible role of superoxide, several other possible mechanisms may mediate the effects of Ang II on EFS-induced contractions, including neurotransmitter biosynthesis [98] and release [99], and/ or neurotransmitter reuptake blockade [100]. Collectively, these results suggest that Ang II can act as both activator and enhancer of bladder smooth muscle contractile activity. This effect is partly mediated via Nox-derived ROS production, another novel finding of this study.
Effects of AT1 receptor specific antagonist, ZD7155, on EFS-induced muscle contractility
The observed inhibition of EFS-induced contractions in both human and dog bladders by the AT1 receptor specific antagonist, ZD7155 (10μM; Fig 7) is in line with what has been reported in adult mammalian cells-that the physiological effects of Ang II are achieved mainly by binding directly to, and activating the receptor subtype AT1 [49, 50, 101]. These responses are supported by findings showing the presence of AT1 receptor in the detrusor smooth muscle layers of different species [48,70,90,91,102,103]. The inhibited EFSinduced activity upon AT1 receptor antagonism suggests the possibility of local Ang II formation within detrusor muscle cells, as previously reported [104]. Since Ang II is a potent ligand for AT1 receptor, one might expect it to reverse the ZD715 blockade, however, the concentration that we used for Ang II was only 1μM, while that for ZD7155 was 10μM, given the affinities of both ligands to the receptor and assuming competitive antagonism, the antagonistic effect is prevailing. The non-responsiveness to Ang II application after ZD7155 treatment in strips isolated from human and dog bladders (Fig 7) further confirms that Ang II-induced contractions in human and dog detrusor strips is mediated through the AT1 receptor and supports specific action on AT1 receptor.
Enhanced ROS levels in dog bladder muscle
Superoxide was measured in dog bladder muscle tissue using lucigenin-enhanced chemiluminescence, as this method was shown to be a sensitive for superoxide detection [34, 105,106]. The observed enhancement in ROS production (Fig 8) is in agreement with previously reported work [107]. Thus, this data suggests that the enhanced chemiluminescence in response to NADPH is mediated by superoxide because the scavenger of superoxide Tiron was able to inhibit this response [107].
There are limitations to our study. We did not measure ROS production in human or dog bladder strips, in the absence or presence of apocynin and/or Ang II to support the functional data obtained in muscle baths because that was beyond our scope of study and not feasible using our muscle bath methodology (shown in S1 Fig). Also, commercially available and most widely used methods to measure peroxide or assess ROS are shown to be open to artefacts, as recently reported [108]. Thus, results obtained from using such methods should be interpreted with caution. Additionally, while ROS detectors, such as the free radical analyzer or microelectrode sensor, might be good tools to measure ROS, it is not feasible to use those methods in our system due to the larger volume of buffer that we add into a muscle bath (10 ml) relative to the low levels of ROS that are produced (picomolar to low micromolar). Please see S1 Fig. that shows the muscle strip mounted in a muscle bath. Also, most ROS are short-lived (lifespans of milliseconds or less), so it is hard to be certain of the amounts measured [108].
The other limitation is that we did not measure the superoxide levels in human bladder muscle tissue. However, we did not want to rely our conclusions exclusively on lucigenin data because the muscle strip contractility results related more to bladder muscle function and support the conclusions that the induced muscle contraction are mediated by ROS production, specifically by enhanced superoxide levels.
Conclusion
Collectively, these data provide evidence for the functional significance of Nox-derived ROS in human bladder and that ROS can modulate bladder function without exogenous stimuli. The excitatory effects of angiotensin II on bladder smooth muscle function may have significant pathological implications since inflammation is an important mechanism associated with oxidative damage. were already subjected to the indicated drug treatment(s). All drug treatments that were added before bethanechol treatment are indicated on the X-axis in either human or dog strips. The maximal responses to 30μM bethanechol are expressed in milli Newtons (mN | 8,506.4 | 2023-06-23T00:00:00.000 | [
"Medicine",
"Environmental Science",
"Biology"
] |
Recent Advances in Heat Transfer Enhancements : A Review Report
Different heat transfer enhancers are reviewed. They are (a) fins and microfins, (b) porous media, (c) large particles suspensions, (d) nanofluids, (e) phase-change devices, (f) flexible seals, (g) flexible complex seals, (h) vortex generators, (i) protrusions, and (j) ultra high thermal conductivity composite materials. Most of heat transfer augmentation methods presented in the literature that assists fins and microfins in enhancing heat transfer are reviewed. Among these are using joint-fins, fin roots, fin networks, biconvections, permeable fins, porous fins, capsulated liquid metal fins, and helical microfins. It is found that not much agreement exists between works of the different authors regarding single phase heat transfer augmented with microfins. However, too many works having sufficient agreements have been done in the case of two phase heat transfer augmented with microfins. With respect to nanofluids, there are still many conflicts among the published works about both heat transfer enhancement levels and the corresponding mechanisms of augmentations. The reasons beyond these conflicts are reviewed. In addition, this paper describes flow and heat transfer in porous media as a well-modeled passive enhancement method. It is found that there are very few works which dealt with heat transfer enhancements using systems supported with flexible/flexible-complex seals. Eventually, many recent works related to passive augmentations of heat transfer using vortex generators, protrusions, and ultra high thermal conductivity composite material are reviewed. Finally, theoretical enhancement factors along with many heat transfer correlations are presented in this paper for each enhancer.
Introduction
The way to improve heat transfer performance is referred to as heat transfer enhancement (or augmentation or intensification).Nowadays, a significant number of thermal engineering researchers are seeking for new enhancing heat transfer methods between surfaces and the surrounding fluid.Due to this fact, Bergles [1,2] classified the mechanisms of enhancing heat transfer as active or passive methods.Those which require external power to maintain the enhancement mechanism are named active methods.Examples of active enhancement methods are well stirring the fluid or vibrating the surface [3].Hagge and Junkhan [4] described various active mechanical enhancing methods that can be used to enhance heat transfer.On the other hand, the passive enhancement methods are those which do not require external power to sustain the enhancements' characteristics.Examples of passive enhancing methods are: (a) treated surfaces, (b) rough surfaces, (c) extended surfaces, (d) displaced enhancement devices, (e) swirl flow devices, (f) coiled tubes, (g) surface tension devices, (h) additives for fluids, and many others.
Mechanisms of Augmentation of Heat Transfer
To the best knowledge of the authors, the mechanisms of heat transfer enhancement can be at least one of the following.
(1) Use of a secondary heat transfer surface.
(3) Disruption of the laminar sublayer in the turbulent boundary layer.
International Journal of Chemical Engineering (4) Introducing secondary flows.
(8) Enhancing effective thermal conductivity of the fluid under dynamic conditions.
(13) Modification of radiative property of the convective medium.(14) Increasing the difference between the surface and fluid temperatures.(15) Increasing fluid flow rate passively.(16) Increasing the thermal conductivity of the solid phase using special nanotechnology fabrications.
Methods using mechanisms no.(1) and no.(2) include increasing the surface area in contact with the fluid to be heated or cooled by using fins, intentionally promoting turbulence in the wall zone employing surface roughness and tall/short fins, and inducing secondary flows by creating swirl flow through the use of helical/spiral fin geometry and twisted tapes.This tends to increase the effective flow length of the fluid through the tube, which increases heat transfer but also the pressure drop.For internal helical fins however, the effect of swirl tends to decrease or vanish all together at higher helix angles since the fluid flow then simply passes axially over the fins [5].On the other hand, for twisted tape inserts, the main contribution to the heat transfer augmentation is due to the effect of the induced swirl.Due to the form drag and increased turbulence caused by the disruption, the pressure drop with flow inside an enhanced tube always exceeds that obtained with a plain tube for the same length, flow rate, and diameter.
Turbulent flow in a tube exhibits a low-velocity flow region immediately adjacent to the wall, known as the laminar sublayer, with velocity approaching zero at the wall.Most of the thermal resistance occurs in this lowvelocity region.Any roughness or enhancement technique that disturbs the laminar sublayer will enhance the heat transfer [6].For example, in a smooth tube of 25.4 mm inside diameter, at Re = 30, 000, the laminar sublayer thickness is only 0.0762 mm under fully developed flow conditions.The internal roughness of the tube surface is well-known to increase the turbulent heat transfer coefficient.Therefore, for the example at hand, an enhancement technique employing a roughness or fin element of height ∼ 0.07 mm will disrupt the laminar sublayer and will thus enhance the heat transfer.Accordingly, mechanism no.(3) is a particularly important heat transfer mechanism for augmenting heat transfer.
Li et al. [5] described the flow structure in helically finned tubes using flow visualization by means of high-speed photography employing the hydrogen bubble technique.They used four tubes with rounded ribs having helix angles between 38 • and 80 • and one or three fin starts, in their investigation.Photographs taken by them showed that in laminar flow, bubbles follow parabolic patterns whereas in the turbulent flow, these patterns break down because of random separation vortices.Also, for tubes with helical ridges, transition to turbulent flow was observed at lower Reynolds numbers compared to smooth tube values.Although swirl flow was observed for all tubes in the turbulent flow regime, the effect of the swirl was observed to decrease at higher helix angles.Li et al. [5] concluded that spiral flow and boundary-layer separation flow both occurred in helicalridging tubes, but with different intensities in tubes having different configurations.As such, mechanisms no.(4) and no.(5) are also important heat transfer mechanisms for augmenting heat transfer.
Arman and Rabas [7] discussed the turbulent flow structure as the flow passes over a two-dimensional transverse rib.They identified the various flow separation and reattachment/redevelopment regions as: (a) a small recirculation region in front of the rib, (b) a recirculation region after the rib, (c) a boundary layer reattachment/redevelopment region on the downstream surface, and finally (d) flow up and over the subsequent rib.The authors noting that recirculation eddies are formed above these flow regions, identified two peaks that occur in the local heat transfer-one at the top of the rib and the other in the downstream recirculation zone just before the reattachment point.They also stated that heat transfer enhancement increases substantially with increasing Prandtl number.Therefore, the mechanism no.(6) plays an important role in heat transfer enhancements.
Heat transfer enhancements associated with fully/partially filing the fluidic volume by the porous medium take place by the following mechanisms [8,9].(7) Enhancing effective thermal conductivity of the fluid under static conditions.(8) Enhancing effective thermal conductivity of the fluid under dynamic conditions.(9) Delaying the boundary layer development.(10) Thermal dispersion.(11) Increasing the order of the fluid molecules.(12) Redistribution of the flow.(13) Modification of radiative property of the convective medium.
Ding et al. [10] showed that fluids containing 0.5 wt.% of carbon nanotubes (CNT) can produce heat transfer enhancements over 250% at Re = 800, and the maximum enhancement occurs at an axial distance of approximately 110 times the tube diameter.These types of mixtures are named in the literature as "nanofluids" and it will discussed later on in this report.The increases in heat transfer due to presence of nanofluids are thought to be associated with the following mechanisms.(7) Enhancing effective thermal conductivity of the fluid under static conditions.
(8) Enhancing effective thermal conductivity of the fluid under dynamic conditions.
Flexible fluidic thin films were introduced in the work of Khaled and Vafai [11,12] and Khaled [13].In their works, they describe a new passive method for enhancing the cooling capability of fluidic thin films.In summary, flexible thin films utilize soft seals to separate between their plates instead of having rigid thin film construction.Khaled and Vafai [11] have demonstrated that more cooling is achievable when flexible fluidic thin films are utilized.The expansion of the flexible thin film including flexible microchannel heat sink is directly related to the average internal pressure inside the microchannel.Additional increase in the pressure drop across the flexible microchannl not only increases the average velocity but also, it expands the microchannel causing an apparent increase in the coolant flow rate which, in turn, increases the cooling capacity of the thin film.Khaled and Vafai [12] and Khaled [13] have demonstrated that the cooling effect of flexible thin films can be further enhanced if the supporting soft seals contain closed cavities filled with a gas which is in contact with the heated plate boundary of the thin film.They referred to this kind of sealing assembly as "flexible complex seals".The resulting fluidic thin film device is expandable according to an increase in the working internal pressure or an increase in the heated plate temperature.Therefore, mechanism no.(15) can have an impact in enhancing heat transfer inside thermal systems.Finally, the mechanism no.(14) finds its applications when rooted fins are utilized as an enhancer of heat transfer [14].Finally, the last mechanism will be discussed when the topic of ultra high thermal conductivity composite materials is discussed.
Heat Transfer Enhancers
From the concise summary about mechanisms of enhancing heat transfer described in last section, it can be concluded that these mechanisms can not be achieved without the presence of the enhancing elements.These elements will be called as "heat transfer enhancers".In this report, the following heat transfer enhancers will be explained: 3.1.Extended surfaces (Fins); 3.2.Porous media; 3.3.Large particles suspensions; 3.4.Nanofluids; 3.5.Phase change devices; 3.6.Flexible seals; 3.7.Flexible complex seals; 3.8.Vortex generators; 3.9.Protrusions; and 3.10.Ultra high thermal conductivity composite materials.
Heat Transfer Enhancement Using Extended
Surfaces (Fins) 3.1.1.Introduction.Fins are quite often found in industry, especially in heat exchanger industry as in finned tubes of double-pipe, shell-and-tube and compact heat exchangers [15][16][17][18][19][20].As an example, fins are used in air cooled finned tube heat exchangers like car radiators and heat rejection devices.Also, they are used in refrigeration systems and in condensing central heating exchangers.Moreover, fins are also utilized in cooling of large heat flux electronic devices as well as in cooling of gas turbine blades [21].Fins are also used in thermal storage heat exchanger systems including phase change materials [22][23][24][25].To the best knowledge of the authors, fins as passive elements for enhancing heat transfer rates are classified according to the following criteria.
(1) Geometrical design of the fin.
(3) Number of fluidic reservoirs interacting with the fin.
(4) Location of the fin base with respect to the solid boundary.
Laminar Single-Phase Heat Transfer in Finned Tubes.
Laminar flow generally results in low heat-transfer coefficients and the fluid velocity and temperature vary across the entire flow channel width so that the thermal resistance is not just in the region near the wall as in turbulent flow.Hence, small-scale surface roughness is not effective in enhancing heat transfer in laminar flow; the enhancement techniques employ some method of swirling the flow or creating turbulence [6].Laminar flow heat transfer and pressure drop in "microfin" tubes (discussed later) was experimentally measured by [35].Their data showed that the heat transfer and pressure drop in microfin tubes were just slightly higher than in plain tubes and they recommended that microfin tubes not be used for laminar flow conditions.This outcome has also been confirmed in the investigations of Shome and Jensen [36] who concluded that "microfinned tube and tubes with fewer numbers of tall fins are ineffective in laminar flows with moderate free convection, variable viscosity, and entrance effects as they result in little or no heat transfer enhancement at the expense of fairly large pressure drop penalty".
Turbulent Single-Phase Heat Transfer in Finned Tubes.
Turbulent flow and heat transfer in finned tubes has been widely studied in the past and the literature available on the experimental investigations of turbulent flow and heat transfer in finned tubes is quite extensive.One of the earliest experimental work on the heat transfer and pressure drop characteristics of single-phase flows in internally finned tubes dates back to 1964 when Hilding and Coogan [37] presented their data for ten different internal fin geometries for a 0.55 in.(14 mm) inner diameter copper tube with 0.01 in.(0.254 mm) straight brass fins using air as the test fluid.Hilding and Coogan [37] observed that the heat transfer is enhanced by around 100-200% over that of the smooth tube and the enhancement is accompanied by a similar increase in the pressure drop.The Reynolds number in this study ranged from 15, 00 to 50, 000.
Kalinin and Yarkho [38] used different fluids in the range 1500 ≤ Re ≤ 400, 000 and 7 ≤ Pr ≤ 50 to investigate the effect of Reynolds and Prandtl numbers on the effectiveness of heat transfer enhancement in smoothly outlined internally grooved tubes.The ranges of the transverse groove heights tested were 0.983 ≤ d/D ≤ 0.875, (where d is the fin tip diameter and D is the inner diameter) with maximum groove spacing equal to the pipe nominal diameter.They reported that the critical Reynolds number at which transition to turbulent flow occurs decreased from 2400 for a smooth tube to 1580 at e/D = 0.875 and fin spacing equal to half the pipe diameter for a grooved tube, with a maximum increase in the heat transfer coefficient of up to 2.2 times the minimum measured value.The authors also observed that the behavior of the Nusselt number for the tested range of Prandtl numbers is independent of the Prandtl number.
In their two papers, Vasilchenko and Barbaritskaya [39,40] published their results for the heat transfer and pressure drop of turbulent oil flow in straight finned tubes with 4 ≤ N ≤ 8 and 0.13 ≤ e/D ≤ 0.3, for an operating condition range of 10 3 ≤ Re ≤ 10 4 and 70 ≤ Pr ≤ 140.Their results showed that the heat transfer is enhanced by 30% to 70% over that of smooth tubes for the finned tubes tested.Correlations for predicting the friction factors and the Nusselt numbers were also presented.
In the work of Bergles et al. [41], heat transfer and pressure drop data for straight and spiral finned tubes of fin heights from 0.77 to 3.3 mm with water as the working fluid was investigated.The Reynolds number based on the hydraulic diameter ranged from around 1, 500 to 50, 000.They found an earlier transition from laminar to turbulent flow and their friction factor data indicated that the smooth tube friction factor correlations could also be used for the tested finned tubes in the turbulent region.The heat transfer coefficients were found to be up to twice that of comparable smooth tubes.From their heat transfer data, they concluded that the hydraulic diameter approach is effective for correlation only in the case of straight fins of moderate heights.
Watkinson et al. [42,43] in their two separate works, performed experiments for water and air flows, respectively, in a tube-in-tube heat exchanger under isothermal heating conditions to study the turbulent heat transfer and pressure drop characteristics of straight and helically finned tubes.A total of eighteen tubes, 5 with straight fins and 13 with spiral fins, having internal fin geometries with fin starts from 6 to 50, 0.026 ≤ e/D ≤ 0.158, helix angles from 0 • to 15 • , and inside diameters of 0.420 to 1.196 inch were examined for 7 × 10 3 ≤ Re ≤ 3 × 10 5 and 0.7 ≤ Pr ≤ 3.4.A commercial smooth copper tube was also tested for comparison.The air results presented show that for most tubes, from Re = 50, 000 up to Re = 300, 000 the heat transfer is enhanced up to 95% over that of a smooth tube.On the other hand, in water flow tests, at Re = 50, 000 heat transfer is enhanced up to 87% over that of a smooth tube, but at higher Reynolds number, the finned tubes approached smooth tube heat transfer performance.A maximum increase of 100% in the pressure drop over smooth tube was observed for tubes with tall helical fins.Separate empirical nondimensional heat transfer correlations were presented for water and air, for both straight and spiral fin tubes, having a form similar to the smooth tube turbulent Sieder-Tate correlation with additional parameters consisting of the ratios of the inter fin spacing to the tube diameter and the fin pitch to the tube diameter.For straight fin tubes, the inter fin spacingto-diameter ratio and for spiral finned tubes the pitch-todiameter ratio were incorporated to form a modified Blasiustype correlation for predicting the friction factor.These correlations predicted their data to within a maximum error of 13%.
Carnavos [44] tested eight finned tubes (both straight and helically finned) to obtain heat transfer and pressure drop data for cooling of air in turbulent flow employing a double tube heat exchanger.The tubes tested had fin starts from 6 to 16 and helix angles from 2.5 • to 20 • .The results were presented on the hydraulic diameter basis for the range 10 4 ≤ Re h ≤ 10 5 and correlations were proposed to predict the heat transfer and the pressure drop.The reported heat transfer correlation was in the form of a modified Dittus-Boelter single-phase correlation having an additional correction factor "F", consisting of the ratios of the nominal heat transfer area to the actual heat transfer area and the actual flow area to the core flow area, respectively.The secant of the helix angle raised to the third power was also included in "F".The friction factor was also in the form of a modified Blasius type equation with a correction factor "F * " comprising of the ratio of the actual flow area-to-the nominal flow area and the secant of the helix angle.In a later investigation, Carnavos [45] used the same apparatus and three more tubes with number of fin starts and helix angles up to 38 • and 30 • , respectively, to extend his air results by including experimental data conducted for heating of water and a 50% w/w ethylene glycol-water solution.The data for air obtained earlier [44] was reexamined and a set of correlations that predicted their entire data obtained with air, water, and ethylene glycol-water solution to within ±10%, were proposed in the ranges 10 4 ≤ Re h ≤ 10 5 , 0.7 ≤ Pr ≤ 30, and 0 ≤ α ≤ 30 • .The Nusselt number correlation was allowed to retain its original form whereas the helix angle dependency in the friction factor correlation was slightly changed.
Armstrong and Bergles [46] conducted experiments for electric heating of air in the range 9, 000 ≤ Re ≤ 120, 000 and Pr = 0.71, using seven different silicon carbide finned tubes all having straight fins.The tubes tested had fin starts from 8 to 24 and e/D from 0.06 to 0.15.The results indicated that the heat transfer is enhanced by around 30-100% over that of a smooth silicon carbide tube.Their heat transfer data was predicted to within ±20% by the Carnavos [45] heat transfer correlation, but a large disagreement was observed between the measured friction factors and those predicted by the Carnavos [45] friction factor correlation.
Since their introduction more than 20 years ago, "microfin" tubes have received a lot of attention playing a very significant role in modern, high-efficiency heat transfer systems.Microfin enhancements are of special interest because the amount of extra material required for microfin tubing is much less than that required for other types of internally finned tubes [47].Of the many enhancement techniques which have been proposed, passively enhanced tubes are relatively easy to manufacture, cost-effective for many applications, and can be used for retrofitting existing units, whereas active methods, such as vibrating tubes, are costly and complex [48].Moreover, these tubes ensure a large heat transfer enhancement with a relatively small increase in the pressure drop penalty.Microfin tubes are typically made of copper and have an outside diameter between 4 and 15 mm.The principal geometric parameters that characterize these tubes [49] are: the external diameter, fin height (from 0.075 mm to 0.4 mm), helix angle (from 10 • to 35 • ), and the number of fin starts (from 50 • to 60 • ).These dimensions are in contrast to other types of internal finning that seldom exceed 30 fins per inch and fin heights that range from several factors higher than the microfin tube height.Currently, tubes with axial and helical fins, in rectangular, triangular, trapezoidal, crosshatched, and herringbone patterns are available.Important dimensionless geometric variables of an internally microfinned tubes include the dimensionless fin height (ε/D, fin height/internal diameter) and the dimensionless fin pitch (p/ε, fin spacing/fin height).A microfin tube typically has 0.02 ≤ ε/D ≤ 0.04 and 1.5 ≤ p/D ≤ 2.5, [50].As microfinnned tubes are typically used in evaporators and condensers, thus most of the extensive existing research literature on microfinned tube performance characteristics is devoted to two-phase refrigerant flows.Schlager et al. [51] and Khanpara et al. [52] are typical examples of such investigations, showing a 50% to 100% increase in boiling and condensation heat transfer coefficients with only a 20% to 50% increase in pressure drop.However, single-phase performance of microfinned tubes is also an important consideration in the design of refrigeration condensers as a substantial proportion of the heat transfer area of these condensers is taken up in the desuperheating and later subcooling of the refrigerant.Consequently, accurate correlations for predicting the single-phase heat transfer and pressure drop inside microfinned tubes are necessary in order to predict the performance of these condensers and to optimize the design of the system.Khanpara et al. [52] investigated the heat transfer characteristics of R-113, testing eight microfinned tubes in the range 60 ≤ N ≤ 70, 0.005 ≤ e/D ≤ 0.02, and helix angles from 8 • to 25 • , for 5 × 10 3 ≤ Re ≤ 11 × 10 3 .The results presented for the single phase heating of R-113 indicated that the heat transfer is enhanced by around 30%-80%.The authors concluded that a major part of the enhancement is due to the increase in the area available for heat transfer and a part of the enhancement is due to flow separation and flow swirling effects induced by the helical fins.This is because the corresponding increase in the heat transfer area over that in a smooth tube is around 10%-50% for the tubes tested.In a subsequent paper, Khanpara et al. [53] also reported the local heat transfer coefficients for single-phase liquid R-22 and R-113 flowing through a smooth and an internally finned tube of 9.52 mm outer diameter, in their paper on in-tube evaporation and condensation characteristics of microfinned tubes.The single-phase experiments were performed by direct electrical heating of the tube walls in the Reynolds number range of 5, 000 to 11, 000 for R-113 and 21, 000 to 41, 000 for R-22.The microfinned tube had of 60 fin starts, a fin height of 0.22 mm, and a helix angle of approximately 17 • .The heat transfer coefficients for the internally finned tube were found to be 50% to a 150% higher than the smooth tube values.
AI-Fahed et al. [54] experimentally tested a single microfinned tube with a 15.9 mm outside diameter having 70 helical fins with fin height of 0.3 mm and a helix angle of 18 • using water as the test fluid in a tube-in-tube heat exchanger.Results were presented for isothermal heating conditions in the range 10, 000 < Re < 30, 000.Under the same conditions, comparative experiments with an internally smooth tube were also conducted.They noted that the heat transfer is enhanced by 20%-80% and the pressure drop is increased by around 30%-80% as compared to the smooth tube values.The experimentally obtained friction and heat transfer data were correlated as a Blasius and a Sieder-Tate type correlation, respectively.The heat transfer correlation predicted their data to within ±25%, showing a large error band while no error band was reported for the friction factor results.The authors reasoned that at Re > 25, 000 the heat transfer enhancement ratio is moderate plausibly because at higher Re numbers the turbulence effect in microfinned tubes becomes similar to that in a plain tube.
Chiou et al. [55] conducted an experimental study with water using two internally finned tubes having the same outer diameter equal to 0.375 in.(9.52 mm).The two tubes had 60 and 65 fins, fin heights of 0.008 in (0.20 mm) and 0.01 in.(0.25 mm), and helix angles of 18 and 25 • , respectively.The Reynolds number in this study ranged from about 4, 000 to 30, 000.Modified Dittus-Boelter type correlations were formulated to predict the value of the heat transfer coefficient for flow Reynolds number greater than about 15000 and 13000, respectively, for each tube.An available heat-momentum analogy based correlation for rough tubes along with a set of constitutive equations for calculating related roughness parameters was utilized to propose correlations for predicting the friction factor and the Nussult number for the entire range of the Reynolds number tested.
Brognaux et al. [50] obtained experimental single-phase heat transfer coefficients and friction factors for three singlegrooved and three cross-grooved microfin tubes all having an equivalent diameter of 14.57 mm, a fin height of 0.35 mm, International Journal of Chemical Engineering and 78 helical fins; only the fin helix angle was allowed to vary between 17.5 • and 27 • Using liquid water and air as the test fluid the experiments were carried out in a double-pipe heat exchanger.Results were presented for cooling conditions in the range 2500 < Re < 50, 000 and 0.7 < Pr < 7.85.Validation experiments with an internally smooth tube were also conducted using water and air.Compared to a smooth tube, the maximum heat transfer enhancement reported was 95% with a pressure drop increase of 80% for water at Pr = 6.8.They also found that the friction factors in microfin tubes do not reach a constant value at high Reynolds numbers as is usually observed in rough pipes.The authors also used their data in the range 0.7 < Pr < 7.85 (using only 2 of the tested tubes) to analyze the dependence of the heat transfer on the Prandtl number exponent.Using the heatmomentum transfer analogy as applied for rough surfaces, they presented their experimental heat transfer and friction factor data as functions of the "roughness Reynolds number" and from cross-plots deduced the Prandtl number exponent to be between 0.56-0.57.The Prandtl number exponent between 0.55-0.57was also determined for the power law formulation Nu = CRe m Pr n .The authors also defined an "efficiency index" (which gives the ratio of the increase in heat transfer to the increase in friction factor for a finned and plain tube, resp.) and presented its value for the different tubes tested.The higher the efficiency index, the better the enhancement geometry.
Huq et al. [56] presented experimental heat transfer and friction data for turbulent air flow in a tube having internal fins in the entrance region as well as in the fully-developed region.The tube/fin assembly was cast from aluminum to avoid any thermal contact resistance.The uniformly heated test section was 15.2 m in length and the inner diameter of the tube was 70 mm which contained six equally spaced fins of height 15mm.The Reynolds number based on hydraulic diameter ranged from 2.6 × 10 4 to 7.9 × 10 4 .The results presented by the authors exhibited high pressure gradients and high heat transfer coefficients in the entrance region, approaching the fully developed asymptotic values away from the entrance section.The enhancement of heat transfer rate due to integral fins was reported to be very significant over the entire range of flow rates studied in this experiment.Heat transfer coefficient, based on inside diameter and nominal area of finned tube exceeded unfinned tube values by 97% to 112% for the tested Reynolds number range.When compared at constant pumping power, an improvement as high as 52% was also observed for the overall heat transfer rate.
With the expressed objective of developing physically based generally applicable correlations for Nusselt number and friction factor for the finned tube geometry, Jensen and Vlakancic [57] carried out a detailed experimental investigation of turbulent fluid flow in internally finned tubes covering a wide range of fin geometric and operating conditions.Two geometrically identical double pipe heat exchangers were used.The test fluid (water and ethylene glycol were used) flowed through the tube side of each of the heat exchangers in counter-flow with hot water in one test section and cold water in the other.Friction factor tests were also conducted under isothermal conditions.A total of sixteen pairs of tubes (15 finned and one smooth tube) with a wide range of geometric variations (inside diameter 24.64-21.18mm, helix angles 0 • -45 • , fin height 0.18mm-2.06mm, and number of fins 8-54) were tested.In the reported results, the authors first described the parametric effects of different fin geometries on turbulent friction factors and Nusselt numbers in internally finned tubes and then go on to prescribe a criterion for labeling a tube as a "high" fin tube (2e/D > 0.06) or a "micro" fin tube.They stated that a microfin tube is characterized by its peculiar pressure drop behavior with long lasting transitional flow up to Re = 20, 000.Trends in the reported data are different depending on whether the tube is a high-fin or a microfin tube.High fin tubes show friction factors curves similar to those of a smooth tube, only displaced higher with the friction factor increasing as the number of fins increases.For microfin tubes, in general, the friction factor is insensitive to the fin height and the Reynolds number up to Re = 20, 000, but beyond this value the friction factor showed a decreasing trend with increasing Re as in smooth tubes and the effect of number of fins, fin height, and the helix angle also comes into play, whenever anyone of these parameter is increased the friction factor increased (exceptions may occur due to difference in fin profile).Overall the reported increase in friction factor for the high finned tubes ranged from 40%-170% and in microfin tubes from 40%-140%, over smooth tubes.For both types of tubes the reported trends of the slope of the Nu curves generally followed that of the smooth tube; however, the trends revealed a different slope at lower Re for the two categories of tubes.This characteristic was attributed by the authors to the greater capacity of swirling flow for higher finned tubes.However, the trends with geometry were similar to those noted for the friction factors.Overall, the reported increase in Nu for the high finned tubes ranged from 50%-150% and in microfin tubes from 20%-220% over smooth tubes.They reported that the correlations from the literature poorly predict their data and based on the findings from the trends observed went on to develop new correlations for friction factors and Nusselt numbers separately for the two categories of tube categories (high and microfin) identified by them.These correlations are applicable to a wide range of geometric and flow conditions for both categories of tubes and estimated well their data as well as the data from the literature.
Webb et al. [58] investigated the heat transfer and fluid flow characteristics of internally helical ribbed tubes.Using liquid water as the test fluid the experiments were carried out in a double-pipe heat exchanger.Results were presented for cooling conditions in the range 20, 000 < Re < 80, 000 and 5.08 < Pr < 6.29.A total of eight tubes (7 ribbed and one smooth tube) all having an inside diameter of 15.54 mm but a wide range of geometric variations (helix angles 25 • -45 • , rib height 0.327 • mm-0.554mm, and number of fin starts 10-45) were tested.The authors presented power law based empirical correlations using their experimental data for the Colburn j-factor and the fanning friction factor, which predicted their data reasonably well.The finned tube performance efficiency index as defined by Brognaux et al. [50] was also determined for the tubes tested, from which the authors concluded that the two key factors that affect the increase of the heat transfer coefficient in helically-ribbed tubes are the area increase and fluid mixing in the inter fin region caused by flow separation and reattachment, and the combination of the two determines the level of the heat transfer enhancement.
Copetti et al. [49] tested a single internally microfinned tube of 9.52 mm diameter using water as the test fluid.Microfin height was 0.20 mm, fin helix angle was 18 • , and number of fin starts was 60. Results were presented for uniform heating conditions in the range 2, 000 < Re < 20, 000.Under the same conditions, comparative experiments with an internally smooth tube were also conducted.They noted that the microfin tube provides higher heat transfer performance than the smooth tube although the pressure drop increase is also substantial (in turbulent flow h microfin /h smooth = 2.9 and Δp microfin /Δp smooth = 1.7 at the maximum Reynolds number tested).The finned tube performance efficiency index as defined by Brognaux et al. [50] were also determined which showed that the heat transfer increase was always superior to the pressure drop penalty.The experimentally obtained Nusselt numbers were empirically correlated separately as a Dittus-Boelter, a Sieder-Tate, and a Gnielinski type correlation.These correlations predicted their data reasonably well.
Wang and Rose [59] compiled an experimental database of twenty-one microfin tubes, covering a wide range of tube and fin geometric dimensions, Reynolds number and including data for water, R11, and ethylene glycol for friction factor for single-phase flow in spirally grooved, horizontal microfin tubes.The tubes had inside diameter at the fin root between 6.46mm and 24.13 mm, fin height between 0.13mm and 0.47 mm, fin pitch between 0.32mm and 1.15 mm, and helix angle between 17 • and 45 • .The Reynolds number ranged from 2.0 × 10 3 to 1.63 × 10 5 .Six earlier friction factor correlations, each based on restricted data sets, were compared with the database as a whole.They reported that none was found to be in good agreement with all of the data and indicated that the Jensen and Vlakancic [57] correlation was found to be the best and represented their database within ±21%.
Han and Lee [60] obtained experimental single-phase heat transfer coefficients and friction factors for four micro finned tubes all with 60 helical fins using liquid water as the test fluid in a double-pipe heat exchanger.The tubes tested had fin helix angles between 9.2 and 25.2 • and fin height between 0.12 mm and 0.15 mm.Results were presented for cooling conditions in the range 3000 < Re < 40, 000 and 4 < Pr < 7. Validation experiments with an internally smooth tube were also conducted.Using the heatmomentum transfer analogy, as used by Brognaux et al. [50], they presented their experimentally determined heat transfer and friction factor correlations as functions of the roughness Reynolds number, Re ε , with a mean deviation and root mean square deviation of less than 6.4%.They noted that the microfin tubes show an earlier achievement of the fully rough region which starts at Re ε = 70 for rough pipes and also validated the finding of Brognaux et al. [50] that the friction factors in microfin tubes do not reach a constant value at high Reynolds numbers as is usually observed in rough pipes surface.No attempt was made to present a direct comparison of heat transfer enhancement between a smooth and a microfinned tube, but an efficiency index was defined.Smaller value of efficiency index means increased friction penalty to establish a given enhancement level.Using this index, the authors noted that the tubes with higher relative roughness and smaller spiral angle show a better heat transfer performance than tubes with larger spiral angle and smaller relative roughness.The authors concluded that the heat transfer area augmentation by higher relative roughness is the main contributor to the efficiency index.
Li et al. [61] experimentally investigated the single-phase pressure drop and heat transfer in a microfin tube with a 19 mm outside diameter having 82 helical fins with a fin height of 0.3 mm and a helix angle of 25.5 • using oil and water as the test fluid in a tube-in-tube heat exchanger.Results were presented for cooling conditions in the range 2500 < Re < 90, 000 and 3.2 < Pr < 220.The pressure drop data were collected under adiabatic conditions.Under the same conditions, comparative experiments with an internally smooth tube were also conducted.Their results showed that there is a critical Reynolds number, Re cr , for heat transfer enhancement.For Re < Re cr , the heat transfer in the microfin tube is the same as that in a smooth tube, but for Reynolds numbers higher than Re cr , the heat transfer in the microfin tube is gradually enhanced compared with a smooth tube.It reaches more than twice that in a smooth tube for Reynolds numbers greater than 30, 000 with water as the working fluid.They attributed this behavior to the decrease in the thickness of the viscous sublayer with increasing Reynolds numbers.When the microfins are inside the viscous sublayer, the heat transfer is not enhanced, while when the microfins are higher than the viscous sublayer, heat transfer is enhanced.They also investigated the Prandtl number dependency of the Nusselt number in the form of NuαPr n and found that the Nusselt number is proportional to Pr 0.56 in the enhanced region and is proportional to Pr 0.3 in the nonenhanced region.For the high Prandtl number working fluid (oil, 80 < Pr < 220), the critical Reynolds number for heat transfer enhancement is about 6000, while for the low Prandtl number working fluid (water, 3.2 < Pr < 5.8), the critical Reynolds number for heat transfer enhancement is about 10, 000.The reported friction factors in the microfin tube are almost the same as for a smooth tube for Reynolds numbers below 10, 000.They become higher for Re > 10, 000 and reach values 40%-50% greater than that in a smooth tubes for Re > 30, 000.They also concluded that the friction factors in the microfin tube do not behave as in a fully rough tube even at a Reynolds number of 90, 000.
An artificial neural network (ANN) approach was applied by Zdaniuk et al. [62] to correlate experimentally determined Colburn j-factors and Fanning friction factors for flow of liquid water in straight tubes with internal helical fins.Experimental data came from eight enhanced tubes reported later in Zdaniuk et al. [63].The performance of the neural networks was found to be superior compared to International Journal of Chemical Engineering the corresponding power-law regressions.The ANNs were subsequently used to predict data of other researchers but the results were less accurate.The ANN training database was expanded to include experimental data from two independent investigations.The ANNs trained with the combined database showed satisfactory results, and were superior to algebraic power-law correlations developed with the combined database.
Siddique and Alhazmy [64] also tested a single internally microfinned tube with a nominal inside diameter of 7.38 mm.Microfin height was 0.20 mm, helix angle was 18 • , and number of fin starts was 50.Experiments were conducted in a double pipe heat exchanger with water as the cooling as well as the heating fluid for six sets of runs.The pressure drop data were collected under isothermal conditions.Data were taken for turbulent flow with 3300 ≤ Re ≤ 22, 500 and 2.9 ≤ Pr ≤ 4.7.The heat transfer data were correlated by a Dittus-Boelter type correlation, while the pressure drop data were correlated by a Blasius type correlation.These correlations predicted their data to within 9% and 1%, respectively.The correlation predicted values for both the Nusselt number and the friction factors were compared with other studies.They found that the Nusselt numbers obtained from their correlation fall in the middle region between the Copetti et al. [49] and the Gnielinski [65] smooth tube correlation predicted Nusselt number values.For pressure drop results, they reported the existence of a transition zone for Re < 11, 500 in which the friction factor data exhibited a local maxima.The presented correlation predicted friction factors values were nearly double that of the Blasius smooth tube correlation predicted friction factors.The authors concluded that the rough tube Gnielinski [65] and Haaland [66] correlations can be used as a good approximation to predict the finned tube Nusselt number and friction factor, respectively, in the tested Reynolds number range.
Zdaniuk et al. [63] experimentally determined the heat transfer coefficients and friction factors for eight helically finned tubes and one smooth tube using liquid water at Reynolds numbers ranging between 12, 000 and 60, 000.The helically-finned tested tubes had helix angles between 25 • and 48 • , number of fin starts between 10 and 45, and fin height-to-diameter ratios between 0.0199 and 0.0327.Power-law correlations for Fanning friction and Colburn jfactors were developed using a least-squares regression using five simple groups of parameters identified by Webb et al. [58].The performance of the correlations was evaluated with independent data of Jensen and Vlakancic [57] and Webb et al. [58] with average prediction errors in the 30% to 40% range.The authors also gave recommendations about the use of some specific tubes used in their experimentations and concluded that disagreements in the experimental results of Webb et al. [58], Jensen and Vlakancic [57], and their own study imply that a broader database of heat transfer and friction characteristics of flow in helically ribbed tubes is desirable.The authors further recommended that more research should be performed on the influence of geometric parameters on flow patterns, especially in the inter fin region using modern flow visualization techniques or proven computational fluid dynamics (CFD) tools.
In a subsequent analysis, Zdaniuk et al. [67] using genetic programming extended their earlier work [63] presenting a linear regression approach to correlate experimentallydetermined Colburn j-factors and Fanning friction factors for flow of liquid water in helically finned tubes.Experimental data came from the eight enhanced tubes used in their previous study [63] discussed above.This new study revealed that, in helically finned tubes, logarithms of both friction and Colburn j-factors can be correlated with linear combinations of the same five simple groups of parameters identified in their earlier work [63] and a constant.The proposed functional relationship was tested with independent experimental data yielding excellent results.The authors concluded that the performance of their proposed correlations is much better than that of the power law correlations and only slightly worse than that of the artificial neural networks.
More recently, Webb [68] investigated the heat transfer and friction characteristics of three tubes (19.05 mm O.D., 17.32 mm I.D.) including one developed by the author (designated as tube T3) having a conical, three-dimensional roughness on the inner tube surface with water flow in the tube.Experiments were conducted in a double tube heat exchanger with water as the cooling as well as the heating fluid.The pressure drop data were collected under adiabatic conditions.The data were taken at a tube side Reynolds number range of 4000-24, 000 and the Prandtl number varied from 6.6 to 5.9.The heat transfer data were correlated by a Dittus-Boelter type correlation, while the pressure drop data were correlated by a Blasius type correlation.The measured maximum uncertainty in the friction factor was reported to be 5.96%, while for most data points, the uncertainty in the measured value of the inside heat transfer coefficient was stated to be 8%.The experimentally obtained values for the Nusselt number were compared with an independent study and were found to be 9%-12% higher.The author reported that the TC3 truncated cone tube provides a Nusselt number 3.74 times that of a plain tube, but it has it has nearly 60% higher pressure drop and concluded that the three-dimensional roughness offers potential for considerably higher heat transfer enhancement (e.g., 50% higher) than is given by helical ridged tubes.Accelerated particulate fouling data were also provided for TC3 tube, and for five different helical-ribbed tubes for 1300 ppm foulant concentration at 1.07 m/s water velocity (Re = 16, 000).The fouling rate was compared with helical-rib geometries reported earlier by Li and Webb [69].The author noted that the TC3 tube shows a very high accelerated particulate fouling rate, which is higher than that of the helical-ribbed tubes tested by Li and Webb [69] and recommended that the 3-D roughness tubes should experience minimal and acceptable low fouling, if used with relatively clean or treated water.
Most recently, Bharadwaj et al. [70] experimentally determined pressure drop and heat transfer characteristics of liquid water flowing in a single 75 fin start spirally grooved tube (inside diameter = 14.808 mm, f in helix angle = 23 • , and f in height = 0.3048 mm) with and without a twisted tape insert.Results were presented for uniform heating conditions at Pr = 5.4 in the range 300 < Re < 35, 000.
The grooves are clockwise with respect to the direction of flow.The authors noted that for the microfin tube experiments transition-like characteristics begin at Re ∼ 3000 and continued up to Re = 7000.Beyond this value of Re, the friction factor remained nearly constant similar to flow in a rough tube.Power law-based correlation for the friction factors and Nusselt numbers were presented in the ranges of 300 < Re < 3000 (laminar), 3000 < Re < 7000 (transition), and Re > 7000 (turbulent).They noted that in the laminar and turbulent ranges of Re, Nu is almost doubled compared to its value in a smooth tube as predicted by the Dittus Boelter correlation.But, for the transition range 3000 < Re < 7000, the Nu-data almost coincide with the same smoothtube correlation predicted value, indicating no enhancement in heat transfer in this range.Constant pumping power comparison with smooth tube showed that the spirally grooved tube without twisted tape yields maximum heat transfer enhancement of 400% in the laminar range and 140% in the turbulent range.However, for 2500 < Re < 9000, reduction in heat transfer was noticed.For the experiments with the twisted tape inserts having twist ratios of 10.15, 7.95, and 3.4, compared to smooth tube, the heat transfer enhancement due to spiral grooves was found to be even further enhanced.They found that the direction of twist (clockwise and anticlockwise) influences the thermo-hydraulic characteristics.Constant pumping power comparisons with smooth tube characteristics show that in spirally grooved tube with twisted tape, heat transfer increases considerably in laminar and moderately in turbulent range of Reynolds numbers.However, for the spiral tube with anticlockwise twisted tape (Y = 10.15),reduction in heat transfer was noticed over the transition range of Reynolds numbers.
From the above comprehensive literature review, it is clear that many studies have been conducted to investigate the heat transfer and pressure drop characteristics of internally finned tubes for single phase situations.However, much of the reported data pertains to large fin systems.Most of the experimental correlations are applicable only for the particular system they were developed for.It is also apparent that quite a few empirical correlations based on experimental data do exist for predicting pressure drop and heat transfer in turbulent flow in finned tubes, but it also seems that there is a substantial disagreement between the results predicted by these different correlations; therefore, a need exists for further research in this area.
Mathematical Modeling of Fins
General One-Dimensional Model.Consider a fin having a length L and a cross-sectional area A C extending from a base surface.The thermal conductivity of the fin is k f .The convection coefficient for the fin facing the outside fluid stream is h f .It is assumed that h f does not vary with position and that the variation of the fin temperature in the transverse direction is negligible.The x-axis is directed along the fin center line starting from the base.The heat diffusion equation for the fin is where T f and T ∞ are the fin temperature and the external fluid far stream temperature, respectively.
Joint-Fins.Consider a thin wall separating two convective media having free stream temperatures T ∞1 and T ∞2 on the left and right sides, respectively.It is assumed that The convective media with temperature T ∞1 is refereed to as the "source" while the other one is refereed to as the "sink".Suppose that a very long fin having a uniform cross-sectional area A C penetrates through the wall linking thermally both the source and the sink.The terminology "joint fin" is used to refer to such kinds of fins.It is assumed that the conduction heat transfer at the fin tips on the source and the sink sides are negligible and that the convection coefficient for the fin in the source side is h f 1 while it is h f 2 on the sink side.Now consider that fin to have a finite length L and insulated from its both ends.The fin heat transfer can be calculated from either the receiver or the sender fin portions.It is equal to the following in a dimensionless form [31] where m 1 and m 2 (the receiver and sender fin portion indices) are equal to Hairy Fin System.Consider a rectangular fin (primary fin) having a length L and a constant perimeter P and a constant cross-sectional area A C .It is extending from a surface which is kept at a base temperature T b .A large number of pin rods (secondary fins) are attached on the outer surface of the primary fin.The resulting fin system is referred as to a "hairy fin system".The secondary fins are uniformly distributed over the primary fin surface.The x-axis is taken along the length of the primary fin starting from the base cross-section while the y-axis is taken in the transverse direction.The one-dimensional conduction heat diffusion equation can be found to be equal to the following [30]: where h d , k d , d, and T f (x) are the convection coefficient between the surface of the secondary fins and the surrounding fluid, thermal conductivity of the secondary fin, secondary fin diameter and the temperature of the main fin at the location of the secondary fin base, respectively.The ratio φ is equal to the total base area of the secondary fins to the total surface area of the primary fin.The index m is equal to Rooted Fins.Consider a fin having a length L and a uniform cross-sectional area A C extending from the interior surface of a wall having a thickness L 1 , (L 1 < L).The temperature of the interior surface is T i while it is equal to T o for the exterior surface.The thermal conductivity of the wall and the fin are k w and k f , respectively.The fin portion facing the outside fluid stream is subject to convection with free stream temperature equal to T ∞ and convection coefficient of h f .It is assumed that h f does not vary with position and that the variation of the fin temperature in the transverse direction is negligible.As such, the performance indicator γ can be equal to the following [14]: where η o is the efficiency of the fin portion facing the outside stream.The surface with x = 0 is the fin root base surface.The factor n is equal to where S is the shape factor.The unit of S is the reciprocal of the length unit.For example, if the wall temperature reaches its one-dimensional temperature field at an average distance t from the surface of the fin, then Biconvection Perimeter-Wise Fins.Consider a fin with uniform cross-section A C having a thermal conductivity k f .A uniform fin portion of perimeter P 1 is subject to convection with a free stream temperature of T ∞1 and a convection heat transfer coefficient of h f 1 while the remaining fin portion of perimeter P 2 is subject to another convection medium with T ∞2 and h f 2 as the convective parameters.Assuming the temperature variation along the cross-section is negligible, the energy equation can be found to be equal to the following [32]: where Biconvection Longitudinal-Wise Fins.Consider a fin with uniform cross-section A C and perimeter P having a thermal conductivity k f and a very long length.The fin portion starting from the base and ending a distance L 1 from the base is subjected to convection with a free stream temperature of T ∞1 and a convection heat transfer coefficient h f 1 .
The remaining portion is subjected to another convective medium with T ∞2 and h f 2 as the convective parameters.
Assuming the temperature variations along the cross-section is negligible, the fin heat transfer rate can be found to be equal to the following [32]: where T b is the base temperature, m 1 = h 1 P/(kA C ) and m 2 = h 2 P/(kA C ).
Biconvection Span-Wise Rectangular Fins.Consider a rectangular thin fin having a thermal conductivity k f , length L and a thickness t.The fin surface is subjected to two different convection conditions.The first region has a span height H 1 while the second region has a span height H 2 .
The convection medium facing the first region has T ∞1 and h f 1 as the free stream temperature and the heat transfer convection coefficient, respectively, while T ∞2 and h f 2 are those corresponding to the convection conditions for the second region.For very long fin, Khaled [32] showed that the fin total heat transfer is equal to the following: where Permeable Fins.Consider a permeable fin with uniform cross-section A C (A C = 2Ht) and a perimeter P (P = 2H) having a thermal conductivity k f and a very long length.The fin encounter uniform flow (with density ρ and specific heat c p ) across its thickness with an average suction speed at the upper surface of εV o where ε is the ratio of holes area to the total surface area.The upper surface of the fin is subjected to convection with a free stream temperature of T ∞ and a convection coefficient h f .The direction of V o is from the upper surface to the lower surface of the fin.Khaled [34] has shown that the fin heat transfer rate is obtained from where Pr is the external fluid Prandtl number.He also derived a correlation for θ (0) using similarity solution for the boundary layer problem above the upper surface of the fin.This correlation is equal to The velocities u ∞o and V o are equal to the following: where d and V are the far stream external normal velocity and d is the diameter of the holes on the fin, respectively.μ is the dynamic viscosity of the external fluid.
Porous Fins.Heat transfer in porous fins has also gained a recent attention of many researchers.Porous materials of high thermal conductivity have been used to enhance heat transfer as will be discussed in Section 3.2.Kiwan and Al-Nimr [33] were among the recent researchers who numerically investigated the effect of using porous fins on the heat transfer from a heated horizontal surface.The basic philosophy behind this kind of enhancers is to increase the effective area through which heat is convected to ambient fluid in addition to augment the convection heat transfer coefficient by increasing mixing or the thermal dispersions levels between the ambient fluid and the solid phase of the porous fin.They found that using a porous fin with certain porosity might give same performance as conventional fin and save 100 times porosity percent of the fin material.
In addition, Kiwan and Zeitoun [71] found that porous fins enhanced the heat transfer coefficient more than 70% compared to that of conventional solid fins under natural convection conditions.
Capsulated Liquid Metal Fins.A capsulated fin is a capsule full of a liquid metal made of a very thin metal shell of very high thermal conductivity.The capsulated fins are attached to hot surfaces in different manners and directions, which allow for the activation of natural convective currents in the liquid metal held inside the capsules.There must be some temperature increase in the direction of the gravitational force to activate the free convective currents within the liquid metal.Aldoss et al. [72] introduced this novel type of heat transfer enhancement technique for the first time, and numerically estimated and compared the thermal performance of a liquid metal capsulated fin with that of a conventional solid fin, investigating the effect of several design and operating parameters.Two equal-size geometries for the capsulated fins longitudinal sectional area were considered: the rectangular and the halfcircular fins.It was found that using capsulated fins might enhance the performance over an equal-size conventional solid fin by about 500% for the conventional steel fin, 270% for the conventional solid sodium fin, and 150% for the conventional aluminum fin for a fin length to width ratio of 5. Using capsulated fins is justified in applications that involve a high-base temperature, high height-to-width aspect ratio, and a high external convective heat transfer coefficient.
Models for Heat Transfer Correlations for Microfins.
In experimental work on turbulent single-phase heat transfer in internally finned tubes, most of the heat transfer correlations can be broadly classified as simple nondimensional empirical correlations, compounded/modified nondimensional empirical correlations containing additional variables adjudged to be of controlling importance by the researcher, and lastly correlations based on the heat-momentum transfer analogy model for rough surfaces as first proposed by Dipprey and Sabersky [73] are used for microfinned tubes.The simple nondimensional correlations use the governing differential conservation equations of the tube side fluid turbulent boundary layer as a basis for defining the appropriate nondimensional groups for correlating the heat transfer data.Thus, the Nusselt number is proposed as a function of the Reynolds and Prandtl numbers.The heat transfer data from these investigations are typically correlated by a relationship of the form the so-called Dittus Boelter type correlation.The variables h, d and k are the fully developed convection heat transfer International Journal of Chemical Engineering coefficient, pipe inner diameter, and the fluid thermal conductivity, respectively.The experimental data is then used to find coefficients C, m, and n using regression analysis or cross-plotting.
In experimental works where very limited testing has been done, for example, using only a single microfinned tube, very simple correlations of the Dittus Boelter type have been proposed to explain the data trend.A good example of this type of a correlation is found in the work of Chiou et al. [55], in which the average Nusselt numbers from the experimentally obtained data were correlated as Nu = 0.043Re 0.8 Pr 0.4 . ( In the same way Copetti et al. [49], have correlated their heat transfer data as Another variation of this approach is seen in the work of Al-Fahed et al. [54] who correlated the average Nusselt numbers from their experimentally obtained data in a Sider-Tate type correlation as As stated above, the compounded/modified nondimensional empirical correlations contain additional variables adjudged to be of controlling importance by the researcher.A good example of this approach is seen in the work of Carnavos [45], who conducted an extensive experimental investigation using eleven internally finned tubes with air, water, and ethylene glycol-water solutions.His heat transfer correlation is given as where We see that this heat transfer correlation is in the form of a modified Dittus-Boelter single-phase correlation having an additional correction factor "F", consisting of the ratios of the actual flow area (A fa ) to the core flow area (A fc ) and the nominal heat transfer area (A n ) to-the actual heat transfer area (A a ), respectively.The secant of the helix angle (α) raised to the third power is also included in "F".Another variation of nondimensional correlation type is to fit the heat transfer data in terms of the so called Colburn j-factor, modified by additional variables adjudged to be of controlling importance.An example of this can be seen in the work of Zdaniuk et al. [63] who experimentally determined the heat transfer coefficients and friction factors for eight helically-finned tubes and one smooth tube using liquid water.Their heat transfer correlation is given as where N s is the number of fin starts, e/D is the fin height to diameter ratio, and α is the helix angle.
Using the heat-momentum transfer analogy as applied for smooth surfaces, Dipprey and Sabersky [73] showed that a similar analogy model is applicable for sand-grain type wall roughness.For turbulent flow in a rough circular tube, this can be given as where the Nusselt number is a function of Reynolds and Prandtl numbers and the friction factor ( f ) which depends on the geometrical variables (e/D, N s and α) of the microfin tube.B(e + ) is the friction similarity function for the rough/microfinned tube where e + is the roughness Reynolds number, while the g(e + ) is the correlating function for the rough/microfinned tube determined separately.These functions will be different for different roughness types, that is, for geometrically different microfinned tubes.An example of this type of heat transfer correlation is found in the work of Copetti et al. [49], who have also correlated their microfinned tube heat transfer data as [74].Moreover, the convective heat transfer coefficient is larger for systems filled with porous material than the systems without porous material due to the large thermal conductivity of the porous matrix compared with the fluid thermal conductivity, especially for gas flows.However, porous media results in substantial pressure drop [9].To minimize pressure drop, partially fillings of porous media can be used.Partial filling of the porous medium has advantage of reducing the pressure drop compared with a system filled completely with porous medium [9].A partial filling of a channel with porous media redirects the flow to escape from the core porous region, depending on the permeability of the porous medium, to the outer region.This effect reduces the boundary layer thickness.As such, the rate of heat transfer is enhanced.Moreover, the porous medium increases the effective thermal conductivity and heat capacity of the fluid-porous material medium and the solid matrix enhances the rate of radiative heat transfer in a system especially if the gas is the working fluid.In summary, the heat transfer enhancements associated with partial filing of the porous medium take place by three mechanisms: (i) flow redistribution, (ii) thermal conductivity modification, and (iii) radiative property modification of the medium.
Many works have been conducted in the domain of partially fillings of porous media.For example, Jang and Chen [75] conducted a numerical study for a forced flow in a parallel channel partially filled with a porous medium by adopting the Darcy-Brinkman-Forchheimer model with a thermal dispersion term.Chikh et al. [76,77] presented an analytical solution for the fully developed flow in annulus configuration partially filled with porous medium.Al-Nimr and Alkam [78] extended the analysis to the transient solution for annulus flow with porous layer.They found that an increase of up to 12 times in the Nu number was reported when annuli partially filled with porous substrates located either on the inner or the outer cylinder in comparison with the clear annuli.Alkam and Al-Nimr [79] further investigated the thermal performance of a conventional concentric tube heat exchanger by placing porous substrates on both sides of the inner cylinder.Numerical results obtained showed that porous substrates of optimum thicknesses yield the maximum improvement in the heat exchanger performance with moderate increase in the pumping power.This kind of heat transfer enhancers are used in wide range of practical applications including: (a) forced channel flow applications [80][81][82][83][84] and (b) renewable energy applications [82,85].Recent reviews of the subject are available in [74,86].
Mathematical Modelings
Darcy Law.Darcy law is one of the earliest flow transport models in porous media.In his experiments on steadystate unidirectional flow in a uniform medium, Darcy [87] revealed proportionality between flow rate and the applied pressure difference.In modern notation, Darcy is expressed by where u, P, μ and K are the Darcy velocity, fluid pressure, dynamic viscosity of the fluid and the permeability of the porous medium, respectively.
Brinkman's Equation.As seen from ( 24), Darcy law ignores the boundary effects on the flow.This assumption may not be valid especially when the boundaries of the porous medium are close to each other.Therefore, the following is used as an extension for the Darcy equation for unidirectional flow Equation ( 25) has been referred in the literature [88,89] as "Brinkman equation".The first viscous term on the right is the Darcy term while the second term in the right is analogous to the momentum diffusion term in the Navier Stokes equation with μ as the effective dynamic viscosity of the medium.For isotropic porous medium, Bear and Bachmat [90] showed that the effective viscosity is related to the porosity through the following relation: where ε and T * are the porosity (void-volume-to-thetotal-medium-volume ratio) and tortuosity of the medium, respectively.
Generalized Flow Transport Model.In cases where fluid inertia is not negligible, another drag force starts to become significant which is the form drag excreted by the fluid on the solid.Vafai and Tien [91] suggested a generalized model for flow transport in porous media based on Brinkman and Forchheimer's equations.The latter equation takes into considerations the presence of form drags due fluid inertia.This generalized model is summarized in the following equation: where c F and ρ f are the dimensionless form-drag constant and the fluid density, respectively.For packed bed of solid spheres of diameter d p , K and c F are equal to the following: In addition, the previous equation did not neglect flow convective terms as does ( 24) and ( 25), the terms on the left side.Note that the last term on the right represents the form drag term.The previous equation is usually referred as Brinkman-Forcheiner's equation (Table 1).
Heat Transfer inside Porous Media.Following the works of Amiri and Vafai [8,92] and Alazmi and Vafai [93] and based on the principle of local thermal nonequilibrium conditions between the fluid and the solid, the energy equations for both the solid and fluid are equal to where T f , T s , u f , k a f , k a s , ε, and h f s are the local fluid averaged temperature, local solid averaged temperature, fluid velocity vector, fluid effective thermal conductivity tensor, solid effective thermal conductivity tensor, porosity of the tensor and interstitial convective heat transfer coefficient.As seen from ( 29), the energy equations for both phases are coupled by the interstitial convective heat transfer between the fluid and the solid.The concept of local thermal nonequilibrium and fluid thermal dispersion are well established in the theory of porous media.Examples on corresponding researches can be found in the works of Amiri and Vafai [8,92] and Alazmi and Vafai [93].In applications involving porous media of small sizes of both the pores and the solid particles and as illustrated in [8,92], Khanafer and Vafai [95] and Marafie and Vafai [96], local thermal equilibrium may serve as a good approximation for the temperature field.In these applications, the solid temperatures as well as fluid temperatures are the same and ( 29) reduces to the following equation: The second term on the left is responsible for heat transfer due to convection.The thermal conductivity tensors for isotropic materials are equal to the following:
Heat Transfer Enhancement Using Fluids with Large
Particles Suspensions.Huge number of investigations has been carried out in the past in order to seek for developing novel passive methods for enhancing the effective thermal conductivity of the fluid or increasing the convection heat transfer coefficient.One of the methods is introducing into the base liquid high thermally conductive particulate solids such as metals or metal oxides.Examples of these investigations according to Ding et al. [10] are seen in the works of Sohn and Chen [97], Ahuja [98,99] and Hetsroni and Rozenblit [100].These early investigations used suspensions of millimeter or micrometer sized particles.They showed some enhancement.However, they introduced problems to the thermal system such abrasion and channel clogging due to poor suspension stability especially in the case of mini-and/or microchannels.A new passive method developed by Choi [101] which is termed "nanofluids" has shown to resolve some disadvantages associated with the suspensions of large particles.
Heat Transfer Enhancement Using Nanofluids
3.4.1.Introduction.Nanofluids are fluids that contain suspensions of nanoparticles of high thermally conductive materials like carbon, metals, and metal oxides into heat transfer fluids to improve the overall thermal conductivity.These nanoparticles are usually of order 100 nm or less.Nanoparticles could be either spherical or cylindrical like carbon multiwalled nanotubes [102].The advantages of properly engineered nanofluids according to Ding et al. [10] include the following: (a) higher thermal conductivities than that predicted by currently available macroscopic models, (b) excellent stability, (c) little penalty due to an increase in pressure drop, and (d) little penalty due to an increase in pipe wall abrasion experienced by suspensions of millimeter or micrometer particles.
The enhancements in thermal conductivity of nanofluids are due to the fact that particles surface area to volume ratio increases as the diameter decreases.This effect tends to increase the overall exposed heat transfer surface area for a given concentration of particles as their diameters decreases.Further, the presence of nanoparticles suspensions in fluids tend to increase the mixing effects within the fluid which produce additional increase in the fluid's thermal conductivity due to thermal dispersion effects as discussed by Xuan and Li [122].
Nanofluids possess a large effective thermal conductivity for very low nanoparticles concentrations.For instance, the effective thermal conductivity of ethylene glycol is increased by up to 40% percent higher than that of the base fluid when a 0.3 volumetric percent of copper nanoparticles of mean diameter less than 10 nm are suspended in it [125].This enhancement is expected to be more as the flow speed increases resulting in an increase in the thermal dispersion effect [120]. Lee et al. [105] measured the effective thermal conductivity of Al 2 O 3 and CuO suspended nanoparticles in water and ethylene glycol.They found out that the effective thermal conductivity was enhanced by more than 20% when a 4% volume of CuO/ethylene glycol mixture was used.
Ding et al. [10] indicated that Xuan and Li [122] showed that the convection heat transfer coefficient was increased by ∼60% for an aqueous-based nanofluid of 2% Cu nanoparticles by volume, but the nanofluid only had an effective thermal conductivity approximately 12.5% higher than that of the base liquid.Also, they indicated that Wen and Ding [124] observed a ∼47% increase in the convective heat transfer coefficient of aqueous c-alumina nanofluids at x/D ∼ 60 for 1.6 vol.% nanoparticle loading and Re = 1600, which was much greater than that due to the enhancement of thermal conduction (<∼10%).Amazingly, Ding et al. [10] showed that nanofluids containing 0.5 wt.% of carbon nanotubes (CNT) produced enhancement in convection heat transfer which may be over 350% of that of the base liquid at Re = 800, and the maximum enhancement occurs at an axial distance of approximately 110 times the tube diameter.This increase is much greater than that due to the enhancement of thermal conduction (<∼40%).The observed large enhancement of the convective heat transfer coefficient is associated with the following reasons: (e) high aspect ratio of carbon nanotubes.
The thermal capacity of nanofluids (ρC p ) nf is equal to where the subscript nf, bf and p denote the nanofluid or the dispersive region, base fluid and the particles, respectively.The parameter φ is the nanoparticles volume fraction which represents the ratio of the nanoparticles volume to the total volume of the nanofluid.A nanofluid composed of pure water and copper nanoparticles suspensions with 2% volume fraction has a value of (ρC p ) nf equal to 99% that for the pure water which is almost the same as the thermal capacity of the pure fluid.
Effective Thermal Conductivity of Nanofluids.
One of the elementary models of the effective thermal conductivity of nanofluids is that of Xuan and Roetzel [120].They suggested the following mathematical model for the effective thermal conductivity of the nanofluid, k nf where C * is a constant depending on the diameter of the nanoparticle and its surface geometry.The constant (k nf ) o represents the effective thermal conductivity of the nanofluid under stagnant conditions where the bulk velocity u is equal to zero.This constant is proposed by Xuan and Roetzel [120] to be equal to that predicted from the Maxwell model for effective thermal conductivity of solid-liquid mixtures for micro or millimeter sized particles suspended in base fluids [126].It has the following form: where k p and k bf are the thermal conductivity of the nanoparticles and the base fluid, respectively.According to formula (34), 2.0% volume fraction of copper particles produces 8.0% increase in (k nf ) o when compared to the thermal conductivity of the pure fluid.However, Das et al. [114] demonstrated that the 1% particle volumetric concentration of CuO/water nanofluids, the thermal conductivity ratio increased from 6.5% to 29% over a temperature range of 21-51 • C. Liu et al. [127] synthesized a copper nanofluid which showed thermal conductivity enhancement of 23.8% for 0.1% volumetric concentration of copper particles in water.Therefore, new studies have been implemented for seeking new models for the effective thermal conductivity of nanofluids.
Of these studies is the study of Yu and Choi [128].They proposed a renovated Maxwell model which includes the effect of the effect of the nanolayer surrounding nanoparticles.They found that this nanolayer has a major role on the effective thermal conductivity of nanofluids when the particle diameter of less than 10 nm.The effects of the surface adsorption of nanoparticles on the thermal conductivity of the nanofluid were modelled by Wang et al. [110].They showed a good agreement between the model and their experiment for 50 nm CuO/deionized water of dilute concentration (<0.5%).Koo and Kleinstreuer [129,130] presented a thermal conductivity model for nanofluids comprising a static part and a dynamic part due to the Brownian motion of nanoparticles.The thermal conductivity model of nanofluids developed by Hamilton and Crosser [131] and the Bruggemen [132] model was found to be different by the thermal conductivity measurement data of Murshed et al. [133] by about 17% for a 5% particle volumetric concentration.
Researches have also took in their account the role of the effective thermal conductivity of the interfacial shell between the nanoparticle and the base fluid as can be seen in the work of Xue and Xu [134].The effective thermal conductivity model of nanofluids proposed by Chon et al. [135] was developed as a function of Prandtl number, particle Reynolds number based on the Brownian velocity, thermal conductivity of the particle and base fluid, volume fraction and particle size.Moreover, Prasher et al. [136] presented a Brownian motion based convective-conductive model for the effective thermal conductivity of nanofluids.The nanofluid thermal conductivity developed model of Jang and Choi [137] took into account the collision between base fluid molecules, thermal diffusion of nanoparticles in fluids, collision between nanoparticles and nanoconvection due to Brownian motion.A comprehensive review of experimental and theoretical investigations on the thermal conductivity of nanofluids by various researchers was compiled by Wang and Mujumdar [138] and Vajjha and Das [139].The previously developed models are presented in the works of Vajjha and Das [139].
It should be mentioned that Buongiorno [140] considered seven slip mechanisms that can produce a relative velocity between the nanoparticles and the base fluid: inertia, Brownian diffusion, thermophoresis, diffusiophoresis, Magnus effect, fluid drainage, and gravity.Of all of these International Journal of Chemical Engineering mechanisms, only Brownian diffusion and thermophoresis were found to be important.Buongiorno's [140] analysis consisted of a two-component equilibrium model for mass, momentum, and heat transport in nanofluids.He found that a nondimensional analysis of the equations implied that energy transfer by nanoparticle dispersion is negligible and cannot explain the abnormal heat transfer coefficient increases.That is the second term of (33) on the right side is negligible.Buongiorno suggests that the boundary layer has different properties because of the effect of temperature and thermophoresis.The viscosity may be decreasing in the boundary layer, which would lead to heat transfer enhancement according to the analysis of Buongiorno [140].
Although Buongiorno [140] found out that thermal dispersion mechanism for enhancing heat transfer in convective conditions is negligible, many researches still adopt the fact that it is a major mechanism.Example of these works is the recent work of Mokmeli and Saffar-Avval [141].
Recently, Vajjha and Das [139] developed a model for thermal conductivity of three nanofluids containing aluminum oxide, copper oxide, and zinc oxide nanoparticles dispersed in a base fluid of 60 : 40 (by mass) ethylene glycol and water mixture.The developed is a refinement of an existing model, which incorporates the classical Maxwell model and the Brownian motion effect to account for the thermal conductivity of nanofluids as a function of temperature, particle volumetric concentration, the properties of nanoparticles, and the base fluid.The developed model agrees well with the experimental data.The several existing models for thermal conductivity were compared with the experimental data obtained from these nanofluids, and they do not exhibit good agreement except for the model developed by Koo and Kleinstreuer [129].[139] Effective Thermal Conductivity Model for Nanofluids.The model of Vajjha and Das for the thermal conductivity model of nanofluids is presented in the following equation:
Vajjha and Das
where κ is the Boltzmann constant (κ = 1.381 × 10 −23 J/K), T o is a reference temperature (T o = 273 K), (C p ) bf is the specific heat of the base fluid, ρ bf is the base fluid density, ρ p is the density of the nanoparticle and d p is the nanoparticle diameter.Note that the base fluid is 60 : 40-by massethylene glycol and water mixture (Table 2).
Summary of Literature on Nanofluids.
All of the research on heat transfer in nanofluids reported increases in heat transfer due to the addition of nanoparticles in the base fluid.To what degree and by what mechanism is still debatable.However, the following trends were in general agreement with all researchers [142].
(i) There is an enhancement in the heat transfer coefficient with increasing Reynolds number.
(ii) The heat transfer coefficient enhancement increases with decreasing nanoparticle size.
(iii) The heat transfer coefficient enhancement increases with increasing fluid temperature (more than just the base fluid alone).
(iv) The heat transfer coefficient enhancement increases with increasing nanoparticle volume fraction.
Some nanofluid researches conflict.Below are some explanations as to why there might be such a discrepancy between results [142].
A. Aggregation.It has been shown that nanoparticles tend to aggregate quite quickly in nanofluids, which can impact the thermal conductivity and the viscosity of the nanofluid.Not all researchers account for this whether it is through experimental or numerical research.
B. Unknown Nanoparticle Size Distribution.
Researchers rarely report the size distribution of nanoparticles or aggregates-they only list one nanoparticle size-which could affect results.Many researchers do not measure the nanoparticles themselves, and rely on the manufacturer to report this information.
C. Differences in Theory.Researchers have not agreed upon which heat transfer mechanisms are important, dominate, and how they should be accounted for in calculations.The discrepancy leads to different analyses and different results.
D. Different Nanofluid Preparation Techniques.
Depending on how the nanofluids are made, for instance whether it is by a one-step of two-step method, the dispersion of the nanofluids could be effected.Some researchers coat the nanoparticles to inhibit agglomeration, while others do not.
Heat Transfer Enhancement Using Phase-Change Devices.
A heat pipe is an efficient compact device with a simple structure and no moving parts that allows the transfer of a large amount of heat from various engineering systems through a small surface area.It basically consists of a duct closed at both ends whose inside wall is covered with a layer of a porous wicking material saturated with the liquid phase of the working fluid while the vapor phase fills the central core of the duct.Heat is transferred from one end (the evaporator) of the pipe to the other (the condenser) by evaporation from the wick at the evaporator, flow of vapor through the core to the condenser, condensation on the wick in the condenser, and return flow of liquid by capillary action in the wick back to the evaporator.If the condenser is above the evaporator, then the liquid returns under gravity to the evaporator and the need for a wick can be avoided.A heat source and a heat sink usually differing by a small temperature difference are present at the ends of the heat pipe.The heat pipe performance depends on the flow rates of the vapor and liquid, generally requiring the pressure gradient in the vapor to be negative, and positive in the liquid as provided by the self-pumping ability of the wick material.
The heat transfer capability of a heat pipe is mainly related to the transport properties of selected working fluid, system operating pressure, and wick porosity characteristics.Since the earliest theoretical analysis of heat pipes was presented by Cotter [143], a lot of work of research has been conducted on heat pipes.Vasiliev [144] has reviewed and listed the heat pipe R&D work on conventional heat pipes, heat pipe panels, loop heat pipes, vapor-dynamic thermosyphons, micro/miniature heat pipes, and sorption heat pipes in different industrial applications.A novel approach is to utilize nanofluids to enhance the capabilities of heat pipes.Shafahi et al. [145] analyzed and modeled the influence of a nanofluid on the thermal performance of a cylindrical heat pipe.The authors reported that the nanoparticles within the liquid enhance the thermal performance of the heat pipe by reducing the thermal resistance while enhancing the maximum heat load it can carry.The existence of an optimum mass concentration for nanoparticles in maximizing the heat transfer limit and that the smaller particles have a more pronounced effect on the temperature gradient along the heat pipe was also established.Recently, Yau and Ahmadzadehtalatapeh [146] conducted a literature review on the application of horizontal heat pipe heat exchangers for air conditioning in tropical climates.Their work focused on the energy saving and dehumidification enhancement aspects of horizontal heat pipe heat exchangers.The related papers were grouped into three main categories and a summary of experimental and theoretical studies was presented.A variation of the heat pipe called the "microheat pipe" (MHP) mostly used in electronic cooling applications was first proposed by Cotter [147].Unlike conventional heat pipes, MHPs do not contain any wick structure, but instead consist of small noncircular (usually triangular) channels instead; the sharp-angled corner regions in these noncircular tubes serve as liquid return arteries.Microfluid flow channels in MHP have hydraulic diameters on the order of 10-500 μm.Smaller flow channels in MHP are desirable in order to achieve higher heat transfer coefficients and higher heat transfer surface area per unit flow volume.MHPs are also capable of removing large amounts of heat with the possibility of achieving extremely high heat fluxes near 1000 W/cm 2 .Much research has been carried out in recent years to predict the performance of MHPs as evident in the excellent review papers of Vasiliev [148] and Sobhan et al. [149] on MHP research and development work.
Phase-change is also used in many other applications to enhance heat transfer as illustrated in the work of Khadrawi and Al-Nimr [150], who proposed and then analytically investigated a novel technique for the cooling of intermittently working internal combustion engines.This technique utilizes a phase change material, which absorbs heat to melt itself, and thus cools the engine as it runs, while the same phase change material releases heat upon restarting of the engine.The main findings of this work show that as the melting temperature and the enthalpy of melting increases the operational time of the system increases, especially for large values of melting temperature.The surface temperature when using a phase change material is much lower than that of conventional air cooling case.
3.6.Heat Transfer Enhancement Using Flexible Seals.Single layered (SL) and double layered (DL) microchannels supported by flexible seals are analyzed in the woks of Vafai and Khaled [151].In their work, they related the deformation of the supporting seals to the average internal pressure by the theory of elasticity.This relation is coupled with the momentum equation which is solved numerically using an iterative implicit finite difference method.After solving these equations, they solved the energy equation.For the same flexible seals, they showed numerically that a flow that cause an expansion in the microchannel height by a factor of 1.5 causes a drop in the average surface temperature for SL by 53% times its value for rigid SL at the same pressure drop under the same constant heat flux.They showed that cooling effect due to hydrodynamic expansion increases as Prandtl number decreases.Further, their results show that SL flexible microchannel heat sinks mostly provide better cooling attributes compared to DL flexible microchannel heat sinks delivering the same coolant flow rate and having the same flexible seals.However, they showed that rigid DL microchannel heat sinks provides better cooling than rigid SL microchannel heat sinks when operated at the same pressure drop.Finally, they concluded that SL flexible microchannel heat sinks are preferred to be used for large pressure drop applications while DL flexible microchannel heat sinks are preferred to be utilized for applications involving low pressure drops along with stiff seals.Later on, Khaled [13] found out that the average temperature of the heated plate decreases as seals number (F n ) increases until F n reaches an optimum after which this temperature starts to increase with an increase in F n .
International Journal of Chemical Engineering 3.6.1.Models for Heat Transfer Correlations.Khaled [152] later analyzed analytically two dimensional flexible thin film channel that has a small and variable height h compared to its length B. The x-axis is taken along the coolant flow direction while y-axis is taken along its height.The width of the thin film channel, D, is assumed to be large enough such that twodimensional flow between the plates can be assumed.The height of the flexible thin film channel is considered to have the following generic form: where n, h e and h i are the power-law index, exit, and inlet heights, respectively.When n = 1.0, the inclination angle of the upper plate is uniform.As such, formula (36) with n = 1.0 models the height profile for flexible thin film channels having inflexible plates.However, each differential element of the upper plate will have a different slope when n / = 1.0.As such, formula (36) models height distributions of flexible thin film channels having flexible upper plates and fixed lower plates when n / = 1.0.The case when n = 0.25 mathematically represents the height profile when the upper plate stiffness is negligible compared to the seals stiffness.Recall that the seals stiffness is the applied tension force on the seal that is required to produce 1 m elongation in its thickness.
Khaled [152] showed that the maximum reduction in dimensionless heated plate temperature (θ W ) AVG associated with case n = 0.25 is 55% less than that for case with n = 1.0 (when H i = h i /h e = 3.0).In addition, the maximum increase in wall shear stress for case with n = 0.25 is 44% greater than that for case with n = 1.0.The former percentage is greater than the latter one which again demonstrates the superiority of flexible thin film channels with flexible plates over those with inflexible plates.Finally, Khaled [13] developed a correlation for design purposes that relates (θ W ) AVG to H i , n, and Peε for the following range: 1.0 < H i < 3.0, 1.0 < Peε < 50 and 0.1 < n < 2.0.
The maximum percentage error between the results of the correlation and the numerical results is about 10%.Where Note that u o is the average flow speed at exit.Note that Khaled [13] considered flexible thin films with heated lower plate with constant heat flux (q ) and insulated upper plate.
Heat Transfer Enhancement Using Flexible Complex
Seals.Khaled and Vafai [12] have also demonstrated that significant cooling inside flexible thin films including flexible microchannel heat sinks can be achieved if the supporting seals contain closed cavities which are in contact with the heated surface.They referred to this kind of sealing assembly as "flexible complex seals".For example, their results showed that expansion of microchannels of magnitude of 1.26 the initial height can cause a drop in the temperature of 16% the initial average heated plate temperature.Later on, Khaled [13] showed that the average temperature of the heated plate always decreases as thermal expansion parameter (F T ) increases if thermal entry region is to be considered.While, it decreases as F T increases until F T reaches a critical value after which that temperature starts to increase for developed flows.Further, he showed that the decrease in the heated plate temperature is significant at lower values of Re, Pr and microchannel aspect ratio., He considered a fixed lower plate of the thin film while the upper plate is flexible and separated from the lower plate by soft complex seals that allow a local expansion in the thin film heights due to both changes in internal pressure and the lower (heated) plate temperature.Similar effects are expected when the upper plate is a bimaterial plate separated from the lower plate via soft seals.He assumed that the thin film height varies linearly with local pressure and local lower plate temperature according to the following relationship: where F T1 is the thermal expansion parameter which is equal to The coefficient β is thermal expansion coefficient of the flexible complex seals.The parameter F T1 increases as the heating load q, the thermal expansion coefficient β and the reference thin film height increase while it decreases as the fluid thermal conductivity k decreases.The stiffness parameter S 1 is related to the elastic properties of the flexible complex seals.(Nu) AVG is defined as 3.8.Heat Transfer Enhancement Using Vortex Generators.Acharya et al. [155] conducted experiments using internally ribbed channel with cylindrical vortex generators placed above the ribs.They studied the effect of the spacing between the vortex generators and the ribs.They found that the heat and mass transfer depend on both the generator-rib spacing to rib height (s/e) ratio and the Reynolds number.They showed that at low Reynolds number (Re = 5000), the heat transfer enhancement was observed for all s/e ratios.However, at high Reynolds number (Re = 30, 000), the enhancement was observed only for the largest s/e ratio (s/e = 1.5).For this ratio, the generator wakes and rib shear layer interact with each other and promote mixing and thus, enhance heat transfer.For the smallest s/e ratio (s/e = 0.55), due to the smaller gap between the generator-ribs, at high Reynolds numbers the ribs act as a single element and prevent the redevelopment of the shear layer causing reduced heat transfer Lin and Jang [156] numerically studied the performance of a wave-type vortex generator installed in a fin-tube heat exchanger.They found that an increase in the length or height of the vortex generator increases the heat transfer, as well as the friction losses.They reported up to 120% increase in the heat transfer coefficient at a maximum area reduction of 20%, accompanied by a 48% increase in the friction factor.Tiwari et al. [157] numerically simulated the effect of the delta winglet type vortex generator on the flow and heat transfer in a rectangular duct with a built-in circular tube.They observed that the vortices induced by the vortex generator resulted in an increase in the span-averaged Nusselt number at the trailing edge of the vortex generator by a factor of 2.5 and the heat transfer enhancement of 230% in the near wake region.
Dupont et al. [158] investigated the flow in an industrial plate-fin heat exchanger with periodically arranged vortex generators for a range of Reynolds number varying from 1000 to 5000.They found that the vortex intensity increases with the Reynolds number.
O'Brien et al. [159] conducted experimental study in a narrow rectangular duct fitted with an elliptical tube inside a fin tube heat exchanger, for a range of Reynolds number varying from 500 to 6300.A pair of delta winglets was used as the vortex generator.They estimated the local surface heat transfer coefficient and pressure drop.They found that the addition of a single winglet pair could increase the heat transfer by 38%.They also found that the increase in the friction factor due to the addition of a winglet pair was less than 10% over the range of Reynolds numbers studied.
Tsay et al. [160] numerically investigated the heat transfer enhancement due to a vertical baffle in a backward-facing step flow channel.The effect of the baffle height, thickness, and the distance between the baffle and the backward facing step on the flow structure was studied in detail for a range of Reynolds number varying from 100 to 500.They found that an introduction of a baffle into the flow could increase the average Nusselt number by 190%.They also observed that the flow conditions and heat transfer characteristics are strong function of the baffle position Joardar and Jacobi [161] carried out experimental investigations to evaluate the effectiveness of delta-wing type vortex generators by full-scale wind-tunnel testing of a compact heat exchanger typical to those used in automotive systems.The mechanisms important to vortex enhancement methods are discussed, and a basis for selecting a deltawing design as a vortex generator is established.The heat transfer and pressure drop performance are assessed at full scale under both dry-and wet-surface conditions for a louvered-fin baseline and for a vortex-enhanced louveredfin heat exchanger.An average heat transfer increase over the baseline case of 21% for dry conditions and 23.4% for wet conditions was achieved with a pressure drop penalty smaller than 7%.Vortex generation is proven to provide an improved thermal-hydraulic performance in compact heat exchangers for automotive systems.
Heat transfer enhancement in a heat exchanger tube by installing a baffle is reported by Nasiruddin and Siddiqui [162].They conducted a detailed numerical investigation of the vortex generator design and its impact on the heat transfer enhancement in a heat exchanger tube.The effect of baffle size and orientation on the heat transfer enhancement was studied in detail.Three different baffle arrangements were considered.The results show that for the vertical baffle, an increase in the baffle height causes a substantial increase in the Nusselt number but the pressure loss is also very significant.For the inclined baffles, the results show that the Nusselt number enhancement is almost independent of the baffle inclination angle, with the maximum and average Nusselt number 120% and 70% higher than that for the case of no baffle, respectively.For a given baffle geometry, the Nusselt number enhancement is increased by more than a factor of two as the Reynolds number decreased from 20, 000 to 5000.Simulations were conducted by introducing another baffle to enhance heat transfer.The results show that the average Nusselt number for the two baffles case is 20% higher than the one baffle case and 82% higher than the no baffle case.The above results suggest that a significant heat transfer enhancement in a heat exchanger tube can be achieved by introducing a baffle inclined towards the downstream side, with the minimum pressure loss.Delta winglets are known to induce the formation of stream wise vortices and increase heat transfer between a working fluid and the surface on which the winglets are placed.Lawson and Thole [163] employed delta winglets to augment heat transfer on the tube surface of louvered fin heat exchangers.It is shown that delta winglets placed on louvered fins produce augmentations in heat transfer along the tube wall as high as 47% with a corresponding increase of 19% in pressure losses.Manufacturing constraints are considered in this study, whereby piercings in the louvered fins resulting from stamping the winglets into the louvered fins are simulated.Comparisons of measured heat transfer coefficients with and without piercings indicate that piercings reduce average heat transfer augmentations, but significant increases still occur with respect to no winglets present.
Air-side heat transfer and friction characteristics of five kinds of fin-and-tube heat exchangers, with the number of tube rows (N = 12) and the diameter of tubes (D o = 18 mm), have been experimentally investigated by Tang et al. [164].The test samples consist of five types of fin configurations: crimped spiral fin, plain fin, slit fin, fin with delta-wing longitudinal vortex generators (VGs), and mixed fin with front 6-row vortex-generator fin and rear 6-row slit fin.The heat transfer and friction factor correlations for different types of heat exchangers were obtained with the Reynolds numbers ranging from 4000 to 10000.It was found that crimped spiral fin provides higher heat transfer and pressure drop than the other four fins.The air-side performance of heat exchangers with the above five fins has been evaluated under three sets of criteria and it was shown that the heat exchanger with mixed fin (front vortex-generator fin and rear slit fin) has better performance than that with fin with delta-wing vortex generators, and the slit fin offers best heat transfer performance at high Reynolds numbers.Based on the correlations of numerical data, Genetic Algorithm optimization was carried out, and the optimization results indicated that the increase of VG attack angle or length, or decrease of VG height may enhance the performance of vortex-generator fin.The heat transfer performances for optimized vortex-generator fin and slit fin at hand have been compared with numerical method.
A systematic numerical study of the effects of heat transfer and pressure drop produced by vortex promoters of various shapes in a 2D, laminar flow in a microchannel have been presented by Meis et al. [168].The liquid is assumed to be water, with temperature dependent viscosity and thermal conductivity.It is intended to obtain useful design criteria of microcooling systems, taking into account that practical solutions should be both thermally efficient and not expensive in terms of the pumping power.Three reference cross sections, namely circular/elliptical, rectangular, and triangular, at various aspect ratios are considered.The effect of the blockage ratio, the Reynolds number, and the relative position and orientation of the obstacle are also studied.Some design guidelines based on two figures of merit (related to thermal efficiency and pressure drop, respectively), which could be used in an engineering environment are provided Sheik Ismail et al. [169] have presented a review of research and developments of compact offset and wavy platefin heat exchangers.The review has been summarized under three major sections.They are offset fin characteristics, wavy fin characteristics and nonuniformity of the inlet fluid flow.The various research aspects relating to internal single phase flow studied in offset and wavy fins by the researchers are compared and summarized.Further, the works done on the nonuniformity of this fluid flow at the inlet of the compact heat exchangers are addressed and the methods available to minimize these effects are compared.
Models for Heat Transfer and Friction Factor Correlations.
Recently, Eiamsa-ard et al. [170] have experimentally investigated the heat transfer, flow friction and thermal performance factor characteristics in a tube fitted with deltawinglet twisted tape, using water as working fluid.Influences of the oblique delta-winglet twisted tape (O-DWT) and straight delta-winglet twisted tape (S-DWT) arrangements are also described.The experiments are conducted using the tapes with three twist ratios (y/w = 3, 4 and 5) and three depth of wing cut ratios (DR = d/w = 0.11, 0.21 and 0.32) over a Reynolds number (Re) range of 3000-27, 000 in a uniform wall heat flux tube.Note that d, y, and w are the depth of wing cut, the twisted tape pitch and the tape width, respectively.The obtained results show that mean Nusselt number and mean friction factor in the tube with the deltawinglet twisted tape increase with decreasing twisted ratio (y/w) and increasing depth of wing cut ratio (DR).It is also observed that the O-DWT is more effective turbulator giving higher heat transfer coefficient than the S-DWT.Over the range considered, Nusselt number, friction factor and thermal performance factor in a tube with the O-DWT are, respectively, 1.04 to 1.64, 1.09 to 1.95, and 1.05 to 1.13 times of those in the tube with typical twisted tape (TT).
Empirical correlations for Nusselt number (Nu), friction factor (f ), thermal performance factor (η) are developed for the tube with delta-winglet twisted tape inserts in the range of Re between 3000 and 27,000, Pr = 4.91-5.57,twist ratio (y/w = 3, 4 and 5), and depth of wing cut ratios (DR = d/w = 0.11, 0.21 and 0.32) as follows.The predicted data are within ±10% for Nusselt number and ±10% for friction factor.They are the following for oblique deltawinglet twisted tapes: 3.9.Heat Transfer Enhancement Using Protrusions.The effect of repeated horizontal protrusions on the free-convection heat transfer in a vertical, asymmetrically heated, channel has been experimentally investigated by Tanda [171].The protrusions have a square section and are made of a lowthermal-conductivity material.Experiments were conducted by varying the number of the protrusions over the heated surface (whose height was held fixed) and the aspect ratio of the channel.The convective fluid was air and the wallto-ambient air temperature difference was set equal to 45 K.The local heat transfer coefficient was obtained by means of the schlieren optical technique.The protrusions were found to significantly alter the heat transfer distribution along the heated surface of the channel, especially in the vicinity of each obstacle.For the ranges of parameters studied, the addition of low-conductivity protrusions leads to a decrease in the average heat transfer coefficient, as compared to that for the smooth surface, in the 0-7% range for the largest channel aspect ratio and in the 18-43% for the smallest channel aspect ratio.Saidi and Sundén [172] have conducted a numerical analysis of the instantaneous flow and heat transfer has been carried out for offset strip fin geometries in selfsustained oscillatory flow.The analysis is based on the twodimensional solution of the governing equations of the fluid flow and heat transfer with the aid of appropriate computational fluid dynamics methods.Unsteady calculations have been carried out.The obtained time-dependent results are compared with previous numerical and experimental results in terms of mean values, as well as oscillation characteristics.The mechanisms of heat transfer enhancement are discussed and it has been shown that the fluctuating temperature and velocity second moments exhibit non-zero values over the fins.The creation processes of the temperature and velocity fluctuations have been studied and the dissimilarity between these has been proved.
Jubran et al. [173] investigated experimentally the effects of rectangular and noncubical obstacles of various lengths, widths and heights on pressure drop and heat transfer enhancements.They found that changes in obstacle size or shape can lead to Nusselt number increases as height as 40%.Sparrow et al. [165,166] found out that mass transfer enhancements of up to 100% can be obtained using perturbations of uniform arrays of square obstacles.An extensive investigation of the fluid and heat transfer in a parallel plate channel with a solid conducting obstacle is conducted by Young and Vafai [154].The rectangular obstacle was found to change the parabolic velocity field significantly resulting in recirculation zones both up-and downstream and a thermal boundary layer along the top face.Their results show that the shape and material of the obstacle have significant effects on the fluid flow and heat transfer.
3.9.1.Models for Heat Transfer Correlations.Young and Vafai [154] proposed Correlation for the obstacle mean Nusselt numbers were found to describe the numerical results with mean errors less than 6%.The correlations have the following functional form: where h m , H, k s , k f , and w are the average convection heat transfer coefficient over the whole obstacle surface, channel height, obstacle thermal conductivity, fluid thermal conductivity, and the obstacle length along the flow direction, respectively.The constants a, b, c and d are found from Table 3 along with the range of the parameters where the inlet length before the obstacle is long enough such that the flow before the obstacle is fully developed.The length of the channel after the obstacle is considered long enough such that the recirculation zones down stream after the obstacle reattached well ahead before the channel outlet.Note that u m is the mean flow velocity inside the channel.The correlation is based on insulated lower and upper channel boundaries and that the heat flux at the lower obstacle surface is constant (q ) where it is equal to q w = h m (2h + w)(T w − T e ).In another work, Young and Vafai [174] performed a comprehensive numerical investigation of fluid and thermal transport within two-dimensional channel containing large arrays of heated obstacles.They found that widely spaced obstacles can effectively transfer thermal energy into the fluid.They studied the effect of the periodicity of the obstacles on heat transfer by doubling the number of obstacles and evaluating the mean Nusslet numbers.The mean Nusselt number was found to reach 5% and 10% difference levels, referenced to ninth obstacle, at the eighth and the seventh obstacles, respectively.The case with porous inserts have been discussed in the works of Alkam et al. [175].
Heat Transfer Enhancement Using Ultra High Thermal
Conductivity Composite Materials.Composite materials have Fins inside tubes 2.0, [37] Microfins inside tubes 4.0 for laminar, [70] 1.4 for turbulent, [70] Porous media ≈ 12.0(k eff /k f ), [78] Nanofluids 3.5, [10] Flexible seals 2.0, [151] Flexible complex seals 3.0, [153] Vortex generators 2.5, [157] Protrusions 2.0, [165,166] Ultra high thermal conductivity composite materials 6, [167] been used primarily for structural applications.However, they have been found to be useful for heat dissipation especially in electronic devices.An example of such these material is the metal matrix composite (MMC).Typical MMCs that includes aluminum and copper matrix composites do not show substantial improvements in thermal conductivity except when reinforcing agent of vapor grown carbon fiber (VGCF) is used as shown in the work of Ting and Lake [176].For example, VGCF-reinforced aluminum matrix composite exhibits a thermal conductivity that can be 642 W/mK with a density of 2440 kg/m 3 using 36.5% of VGCF.However, all MMCs are electrically conductive.Chen and Teng [167] have shown that VGCF mat reinforced epoxy composites can have thermal conductivities larger 695 W/mK with density of 1480 kg/m 3 in addition of having an electrically insulating surface.This is with a reinforcement of 56% by volume of heat treated VGCF.Recently, Naito et al. [177] have shown that grafting of high thermal conductivity carbon nanotubes (CNTs) is very effective in improving the thermal conductivity of certain types of carbon fibers, which can reach to 47% improvement.
Conclusions
In this paper, the following heat transfer enhancers are described and reviewed: (a) extended surfaces including fins and microfins, (b) porous media, (c) large particles suspensions, (d) nanofluids, (e) phase-change devices, (f) flexible seals, (g) flexible complex seals, (h) vortex generators, (i) protrusions, and (j) ultra high thermal conductivity composite materials.Different research works about each one have been reviewed and many methods that assist their enhancement effects have been extracted from the literature.Among of these methods presented in the literature are using joint-fins, fin roots, fin net works, biconvections, permeable fins, porous fins, helical microfins, and using complicated designs of twisted tapes.It was concluded that more attention should be made towards single phase heat transfer augmented with microfins in order to alleviate the disagreements between the works of the different authors.Also, it was found that additional attention should be made towards uncovering the main mechanisms of heat transfer enhancements due to the presence of nanofluids.Moreover, we concluded that perhaps the successful modeling of flow and heat transfer inside porous media, which is a wellrecognized passive enhancement method, could help in well discovering the mechanism of heat transfer enhancements due to nanofluids.This is due to some similarities between both media.In addition, it is concluded that noticeable attentions from researchers are required towards further modeling flow and heat transfer inside convective media supported by flexible/flexible-complex seals in order too compute their levels of heat transfer enhancements.Eventually, many recent works related to passive augmentations of heat transfer using vortex generators, protrusions, and ultra high thermal conductivity composite material have been reviewed.Finally, the estimated maximum levels of the heat transfer enhancement due to each enhancer described in this report were presented in Table 4.
) b 1 = 1 .
0605 b 2 = 1.1424 b 3 = −1.6070b 4 = 4.7261 b 5 = 2.3293 b 6 = 0.3782 b 7 = 0.6104 b 8 = 0.4466 (a) enhancement of the thermal conductivity under the static conditions, (b) further enhancement on the thermal conduction under the dynamic conditions (shear induced), (c) reduction of the boundary layer thickness and delay in the boundary layer development, (d) particle re-arrangement due to non-uniform shearrate across the pipe cross-section, and
3. 7 . 1 .
Models for Heat Transfer Correlations.Khaled and Vafai [153] generated the following correlation are for average Nusselt number (Nu) AVG and the dimensionless average mean bulk temperature (θ m ) AVG for thin films supported by flexible complex seals with flexible upper plates for the specified range of parameters, 1.0 < S 1 < 10, 1.0 < Peε < 50 and 0 < F T1 < 1.
the following for straight delta-winglet twisted tapes: Nu = 0.184Re 0.675 Pr 0.4 y w
Table 4 :
Estimated the highest recorded heat transfer enhancement level due to each enhancer.Heat Transfer Enhancer TypeHeat transfer due presence of the enhancer Heat transfer in absence of the enhancer | 25,327.4 | 2010-09-20T00:00:00.000 | [
"Physics",
"Engineering"
] |
Sexually dimorphic neuronal inputs to the neuroendocrine dopaminergic system governing prolactin release
Abstract Prolactin (PRL) is a pleiotropic hormone that was identified in the context of maternal care and its release from the anterior pituitary is primarily controlled by neuroendocrine dopaminergic (NEDA) neurones of the arcuate nucleus of the hypothalamus. The sexually dimorphic nature of PRL physiology and associated behaviours is evident in mammals, even though the number and density of NEDA neurones is reported as not being sexually dimorphic in rats. However, the underlying circuits controlling NEDA neuronal activity and subsequent PRL release are largely uncharacterised. Thus, we mapped whole‐brain monosynaptic NEDA inputs in male and female mice. Accordingly, we employed a rabies virus based monosynaptic tracing system capable of retrogradely mapping inputs into genetically defined neuronal populations. To gain genetic access to NEDA neurones, we used the dopamine transporter promoter. Here, we unravel 59 brain regions that synapse onto NEDA neurones and reveal that male and female mice, despite monomorphic distribution of NEDA neurones in the arcuate nucleus of the hypothalamus, receive sexually dimorphic amount of inputs from the anterior hypothalamic nucleus, anteroventral periventricular nucleus, medial preoptic nucleus, paraventricular hypothalamic nucleus, posterior periventricular nucleus, supraoptic nucleus, suprachiasmatic nucleus, lateral supramammillary nucleus, tuberal nucleus and periaqueductal grey. Beyond highlighting the importance of considering sex as a biological variable when evaluating connectivity in the brain, these results illustrate a case where a neuronal population with similar anatomical distribution has a subjacent sexually dimorphic connectivity pattern, potentially capable of contributing to the sexually dimorphic nature of PRL release and function.
dimorphic neural circuits as a result of the differential organisational action of gonadal sex hormones. [1][2][3] Nonetheless, sex as a biological variable remains underexplored in neural circuit mapping studies. 4 Prolactin (PRL) is a non-gonadal pleiotropic peptide hormone primarily released by the anterior pituitary gland. 5 Originally named after its role in lactation, 6 PRL is released in response to innumerous external factors and physiological states. 7 Although some, such as stress, 8,9 are shared by both sexes, others are sexually dimorphic.
The latter ones include the release of PRL in response to nipple stimulation in lactating females, 5 the PRL circadian surges in naturally cycling females, 10,11 and PRL release during copulation in males. 12 In females, PRL is fundamental in organising a series of physiological and behavioural programmes that prepare individuals for motherhood: it decreases female receptivity after fertilisation, 13 and also promotes food intake 14 and maternal behaviour. 15,16 All of these programmes are fundamental to ensure the survival of the progeny.
Bycontrast,theroleofPRLinmale-specificbehavioursislesswell understood, although it has been proposed that PRL release during copulation regulates libido. 17 PRL is primarily produced and released into the bloodstream by specialised cells of the anterior pituitary, the lactotrophs. 18 Its receptor, the prolactin receptor (PRLr), can signal through a multitude of second messenger cascades when activated. 19 The PRLr has widespread expression in both the male and female mouse brain. 20,21 Although PRLr expression is concentrated in the rostral and mediobasal hypothalamus, extra-hypothalamic responses to PRL can also be detected in the medial amygdala, bed nucleus of the stria terminalis, lateral septum and others. 19,21 Some of these extra-hypothalamic regions are known to be important for the modulation of sex-specific behaviours. 22 Several inhibitory and stimulatory factors control the release of PRL, 22 although the most important site of regulation resides in the neuroendocrinedopaminergicneurones(NEDA)ofthemedialbasal hypothalamus, the majority of which are located in the arcuate nucleusofthehypothalamus(ARH). 18 NEDAneuronesinhibittheproduction and release of PRL by lactotrophs via dopamine transmission into the blood in response to PRL itself. Suppression of dopamine discharge by NEDA neurones leads to disinhibition of lactotrophs, which quickly release PRL into circulation, thus establishing a self-inhibitory feedback loop. 7,18,23 SeveralhypothesesexistregardingtheregulationofNEDAneural activity based on the behavioural and physiological conditions thatleadtoPRLreleaseandontheneurochemicalsthatNEDAneurones respond to in ex vivo brain slices. 7,24 Nevertheless, the sources ofnon-localbrain-derivedsignalsareyettobeuncovered.Anotable exception is the suprachiasmatic nucleus, whose neurones synapse ontoNEDA 25 neurones and influence their circadian activity in female rats. 26 or the neurohypophysis (tuberhypophyseal dopaminergic neurones). 28 Besides dopamine, NEDA neurones produce and release several other neurotransmitters and modulators, suggesting an even broader biological role for these hypothalamic neurones. 29,30 Given the role of NEDA in prolactin-related sex-specific physiology and behaviours, we set out to characterise: (i) the number anddistributionofNEDAneuronesinthedorso-medialarcuatenucleus of female and male mice and (ii) the brain regions contributing monosynapticinputstoNEDAneuronesinmaleandfemalemice.To genetically access NEDA neurones, we took advantage of the DAT promoter to first label and quantify NEDA neurones in both sexes.
Secondly, using the rabies virus (RV) monosynaptic tracing system, 31,32 we performed a whole-brain survey of regions that harbor neuronesdirectlysynapsingontoNEDAneurones.TheRVmonosynaptic tracing system consists of a recombinant RV and two different adeno-associatedviruses(AAV)thataredeliveredintoatransgenic mouse. The RV is modified in three important ways: (i) to confine the RV spread to a single synaptic jump, the RV envelope glycoprotein (Gprotein) necessary for the virus' transsynaptic spread is deleted and complemented in transbywayofafirsthelperAAVthatexpresses the G-protein in a Cre-dependent manner via the FLEx switch (ie, the G-protein gene is flanked by inverted loxP sites). The Cre recombinase is expressed from the genome of an appropriate transgenic mouse line. This modification renders the G-deleted RV capable of jumping transsynaptically exclusively from cells that were co-infected withtheG-proteincodingAAV.TheRVdoesnotspreadfurtherthan the first-order monosynaptic partners because these cells do not contain the G-protein necessary to assemble functional RV particles; (ii) to make the RV transfection cell type-specific, an envelope protein
| Animals
Animalswerekeptunderareversed12:12hdark/lightcycle(lights on 20.00 h) with access to food and water available ad lib. in temperature-controlled rooms (22-24°C). The animals were group housed until the first surgical procedure and isolated thereafter.
All mice used were sexually naive adults aged between 3-4 months. All procedures were reviewed and performed in accordance with the Champalimaud Welfare Body and the Champalimaud Foundation Ethics Committee guidelines, and were also approved by thePortugueseNationalAuthorityforAnimalHealth.
| Imaging and data analysis
Brain sections were imaged using an automated slide scanner (AxioScanZ1;CarlZeiss,Oberkochen,Germany).Thelocationsand numbers of labelled neurones were manually determined using the Allen Brain Atlas as a reference (http://atlas.brain-map.org/). All procedures were performed using zen, version 2.0 (Carl Zeiss) and fiji (https ://fiji.sc). To generate figures with representative images, brightness and contrast were adjusted using photoshop (Adobe Systems,SanJose,CA,USA).OnlyanimalstargetedtotheARHand displaying a clear GFP signal were considered (see Results). Manual cell counts were stored as csv files and the data analysis was performed using custom Pyhton2.7 scripts (available upon request). Our analysis is focused on brain regions that showed GFP-positive cells in at least three animals. The cell counts for each animal were normalised by dividing the number of neurones found in each region by the number of cells in the arcuate nucleus of the hypothalamus (ARH) (siteofinjection;seeResults).TheARHwassubsequentlyremoved fromanalysis.Givenoursamplesize(n females = 5, n males = 5), a single region sexual dimorphism is considered statistically significant at alpha = 0.05 when a Mann-Whitney U test resulted in a critical U value equal to or smaller than 2 (two-tailed test), and statistically significantatalpha=0.01whenaMann-WhitneyU test resulted in a critical U value equal to 0 (two-tailed test).
| Mouse arcuate nucleus NEDA populations are sexually monomorphic
The sexually dimorphic functions of PRL are exemplified by lactation as- Table S1). These 10 animals were selected not only because the RV injection site was located near the ARH, but also because thesebrainsdisplayedrobustARHsomaandMEaxonterminalssignal as well as consistent GFP signal in several other brain regions. Therefore, in our analysis, we only included brain regions where we observed GFP-labelled neurones in at least three animals ( Figure 3C).
Weidentified59regionsreliablycontributinginputstotheARH.
The cell counts are reported as average percentage input contribution
| Monosynaptic input patterns to NEDA are sexually dimorphic
To investigate the possibility that the distribution of monosynaptic inputs observed for male and female dat-cre mice is sexually dimorphic, we performed a Mann-Whitney U test on each region using
| D ISCUSS I ON
The present study reports for the first time whole-brain monosyn-apticinputstotheNEDAoftheARHthatcontrolthereleaseofthe pituitary hormone PRL.
| Distribution of NEDA neurones in mice is monomorphic
Recently, much effort has been dedicated to unravelling the neuronal circuitry underlying sexual dimorphic behaviours, as a result of the importance of these for the reproductive success of individuals and survival of the progeny. Dimorphisms in behaviour have been mostly associated with quantitative differences in cell numbers. 39 Bycontrast,here,wehaveshownthatthenumberanddistribution ofDat-positiveNEDAneuronesinmiceissimilarbetweenmalesand females, in accordance with previous studies, 37 implying that the dimorphism in PRL physiology, such as higher PRL concentration in the blood of females, 5 and associated behaviours must arise from other sources.Still,itislikelythattheoutputofmouseNEDAneuronesis sexuallydimorphic:intherat,forexample,femaleNEDAneurones produce higher levels of dopamine than their male counterparts. 34
| Identification of long-range input regions to NEDA of the ARH of male and female mice
TheARHisanextensivelystudiedareaofthebrainindiversecontexts and has a highly diverse cell population. 40 In our study, the ARH had consistently high numbers of GFP-positive only (nonstarter) cells, thus attesting to the high degree of intrinsic connectiv-itywithintheARH. 29 Wefocusedouranalysisoninputsoriginating outsidetheARHbecausetheRVsystememployedhereisnotadequate to perform local circuitry input mapping. To perfoem a local circuitry study, a different version of the RV system that employs a mutantTVAwithreducedtransfectionefficiencyisrecommended 41 to allow labelling of more sparse local inputs.
In female mice, the area contributing the most inputs to the ARHistheDMH,whereas,inmales,itisthePVH.Inbothsexes, both of these areas contribute a substantial amount of inputs to NEDA neurones. Neuronal activity in the DMH and the PVH has been implicated in lactation. 42 Weproposethatconcomitant activation of these brain regions might result in direct signalling ontoNEDAneuronesandthusaffectPRLrelease;thenatureof the neurotransmitter released by the DMH and PVH remains to be clarified. Regarding males, the role of these brain areas in the context of the regulation of NEDA neurones is currently a mystery.
In rodents, dimorphisms have been identified in the number of neurones within brain regions or their projections that are relevant for dimorphic social behaviours. 39 Indeed, in our study, we detected significant sexual dimorphisms in four of the six areas classically defined as the mammalian social brain network 22 in the present study are also PRL-responsive. 21 This suggests the existence of a feedback mechanism in the brain influencing the control of PRLreleasebyPRL-responsiveregionsthatcontactNEDAneurones,in additiontodirectPRLactionontoNEDAneurones.
| The role of PRL in male sexual behaviour
Even though PRL is paramount for the regulation of sexual behaviour and sex-specific behaviours, little is known about the role of this hormone in male physiology. Curiously, we detected inputs that consistently appeared in males but not in females. One of these input areas, the suprafascicular nucleus (SPF), has been reported as having direct projections to dynorphin-positive neurones of the ARH in non-lactating females 48 but not dopaminergic neurones.
Because the SPF has been implicated in the control of ejaculation, 44 it is tempting to speculate that the projection from the SPF onto NEDA neurones controls the release of PRL during copulation. PRL release during sexual behaviour could be involved in the priming of the male brain for paternity, as appears to be the case in females. 15 This view is in accordance with the observation that sexually experienced males have decreased rates of infanticide. 49
| Caveats of the monosynaptic rabies tracing technique and further work
Despite the specificity and potential of the rabies monosynaptic tracing system, 31 there are some limitations to this methodology, in particular, the completeness of input coverage to a given population of starter cells, as well as in the types of synapses the RV can traverse. 32 Therefore, we do not claim to have uncovered all the brain regionssynapsingontoDat-positiveNEDAneurones,butratherthat the regions we have uncovered are reliable monosynaptic inputs outsidetheARH.
The main challenge of the present study is the heterogeneity ob- Average input distribution of green fluorescent protein (GFP)-positive cells by sex. Each brain region was averaged across sex and converted to a percentage value of total inputs by sex. To detect significant differences between males and females in each brain region, we performed a two-tailedMann-WhitneyU test and accepted significance when U≤2(*α = 0.05) and U=0(**α = 0.01). Ten regions were found to significantly contribute more inputs in males than in females: the anterior hypothalamic nucleus, the anteroventral periventricular nucleus, the medial preoptic nucleus, the paraventricular hypothalamic nucleus, the posterior periventricular nucleus, the supraoptic nucleus, the suprachiasmatic nucleus, the lateral supramammillary nucleus, the tuberal nucleus and the periaqueductal grey To mitigate this issue, we only considered brain regions where we observed GFP-positive neurones in at least three of the animals.
In addition, the choice of Dat as marker might result in incomplete coverage of all dopaminergic neurones participating in PRL regulation. For example, a brain area reported in female rats that synapsesontotyrosinehydroxylase-positiveneuronesoftheARHis the intergeniculate leaflet of the lateral geniculate, 38 an area related to circadian regulation. However, in our study, we did not reliably find GFP-positive neurones in this area in females, only in two male animals (see Supporting information, Table S1). It is possible that this particulardiscrepancyisasaresultofthefactthatnotalloftheARH Th-positive neurones are Dat-positive. 37 Several experiments can be performed as a follow-up to ensure that the regions reported as monosynaptic inputs are indeed synaps-ingontoNEDAneurones,startingwiththeinjectionofanterograde traveling viral particles in the brain regions identified in this study.
| CON CLUS IONS
The present study has laid the necessary grounds for the future detailed investigation of PRL function in a broad variety of contexts, ranging from sexual behaviour to maternal care and drug development efforts adequate to both male and female physiology. Importantly, as far as we know, this is the first instance where the rabies monosynaptic tracing system was used to unveil the sexual dimorphism of synaptic inputs into a neuronal population, emphasising the importance of considering sex as a variable in studies of neuronal connectivity. Finally, despite the clear existence of dimorphisms in behaviour, there are very few cases where the underlying circuitry has been identified and examples of dimorphisms in connectivity onto a monomorphic population are rare. Therefore, besides the contribution to our knowledge regarding the regulation of PRL physiology and associated in behaviours, the present study also puts forward an interesting candidate circuit for interrogating basic questions related to sex-specific development and wiring of neuronal circuits.
CO N FLI C T O F I NTE R E S T S
The authors declare that they have no conflicts of interest. | 3,241 | 2019-08-16T00:00:00.000 | [
"Biology"
] |
Dynamical Stability of Cantilevered Pipe Conveying Fluid with Inerter-Based Dynamic Vibration Absorber
: Cantilevered pipe conveying fluid may become unstable and flutter instability would occur when the velocity of the fluid flow in the pipe exceeds a critical value. In the present study, the theoretical model of a cantilevered fluid-conveying pipe attached by an inerter-based dynamic vibration absorber (IDVA) is proposed and the stability of this dynamical system is explored. Based on linear governing equations of the pipe and the IDVA, the effects of damping coefficient, weight, inerter, location and spring stiffness of the IDVA on the critical flow velocities of the pipe system is examined. It is shown that the stability of the pipe may be significantly affected by the IDVA. In many cases, the stability of the cantilevered pipe can be enhanced by designing the parameter values of the IDVA. By solving nonlinear governing equations of the dynamical system, the nonlinear oscillations of the pipe with IDVA for sufficiently high flow velocity beyond the critical value are determined, showing that the oscillation amplitudes of the pipe can also be suppressed to some extent with a suitable design of the IDVA.
fluid [7][8][9][10][11][12][13][14][15][16][17], by using either passive or active methods. For instance, Tani et al. [12,13] applied a torsional moment to a certain position of the cantilevered fluid-conveying pipe, and then experimentally and numerically studied the suppression effect of the torque on the oscillation responses of the pipe. Yau et al. [2] added a piezoelectric layer to a certain position of the cantilevered pipe, and explored the effect of the mounting position and the length of the piezoelectric layer on the vibration responses of the pipe. Lin et al. [9,14,15] further demonstrated that the cantilevered pipe could obtain remarkable suppression effect when the voltage imposed on the piezoelectric layer was within a suitable range. Khajehpour et al. [16] utilized piezoelectric layers to control the vibrations of a rotating cantilever conveying fluid. Hussein et al. [17] analyzed the effect of hydraulic damper position, base width of hydraulic damper, damping and flow pressure on the dynamic responses of slender pipes by using state space technique.
Since active control methods for cantilevered pipes conveying fluid are always restricted by the reliability and service life of the controller, passive control methods [18][19][20][21][22] have attracted more attentions. For instance, Wang et al. [23] derived the governing equations of a pipe conveying fluid on elastic foundation and showed that an elastic foundation can increase the critical flow velocity for statical and dynamical instabilities of the pipe. DoarÉ et al. [24] compared the stability characteristics of finite-and infinite-length pipes conveying fluid on elastic foundations. Hiramoto et al. [20] improved the stability of cantilevered pipes conveying fluid by optimizing the outer diameter distribution of the pipe with a closed-loop device. Pisarski et al. [25] applied electromagnetic devices of a motional type to cantilevered pipes conveying fluid to improve the dynamical stability of the pipe system. It was demonstrated that the electromagnetic devices of the motional type can remarkably increase the critical flow velocity by fifty percent comparing to the same pipe but without the electromagnetic actuator.
At the year of 2013, Yang et al. [26] initiated to numerically investigate the nonlinear responses of simply supported pipes conveying fluid with an attached nonlinear energy sink (NES). A cubic spring linked with a mass was used to model the effect of NES on the pipe system. It was indicated that the vibrational energy of the simply supported pipe conveying fluid could be robustly absorbed by the NES. Based on the work of Yang et al. [26], Mamaghani et al. [27] used an attached NES to suppress the oscillation responses of clamped-clamped pipe conveying fluid subjected to an external harmonic force. It was demonstrated that the pipe could achieve excellent suppression effect by attaching the NES at the middle point of the pipe. Song et al. [28] explored the vibration control performance of a Pounding Tuned Mass Damper (PTMD) for pipe structures by installing a PTMD on an M-shaped pipeline, using both experimental and numerical methods. Rechenberger et al. [29] used Microsoft Excel spreadsheet calculations to establish a mathematical model of Tuned Mass Damper (TMD). Their studies were of a practical guidance on the design of TMD for suppressing the oscillations of pipeline structures. Zhou et al. [30] installed an NES attachment somewhere along the length of a cantilevered pipe conveying fluid to enhance the stability of the pipe. The effects of mass ratio, spring stiffness, damping and location of the NES on the stability and nonlinear responses of the pipe were explored. Very recently, Liu et al. [31] analyzed the dynamical stability of a cantilevered pipe with an additional linear dynamic vibration absorber (DVA) attachment. It was shown that the damping coefficient, spring stiffness, location and weight of the DVA can remarkably affect the dynamical behaviors of the pipe.
In the current work, we introduce an inerter-based dynamic vibration absorber (IDVA) [32][33][34][35] to adjust the dynamical stability of cantilevered pipes conveying fluid. Based on the proposed governing equations, the effect of several key parameters of the additional IDVA on the dynamics of the pipe are investigated. It will be shown that the IDVA can enhance the stability and suppress the oscillations of the pipe in many cases. Fig. 1 shows the schematic diagram of a cantilevered pipe conveying fluid with an additional IDVA. The spring-mass attachment is attached at x = x b ≤ L, where L is the pipe length. The pipe is horizontal and its motion is limited in a horizontal plane by embedded a steel strip in the pipe. In Fig. 1, the lateral displacement of the pipe along the y axis is denoted by W (s, t), with s being the curvilinear coordinate along the pipe length and t being the time. Before giving the nonlinear governing equations of the pipe system, several basic assumptions were made [36,37]: (1) The internal axial fluid is incompressible; (2) The centreline of the pipe is inextensible; (3) The Euler-Bernoulli beam theory is acceptable for the pipe; (4) The pipe's axial strain is sufficiently small, although its lateral deflection may be relatively large. Following the derivation of Semler [38] and Liu et al. [31] and considering the effect of IDVA, the equation of motion of the pipe may be written as
Governing Equations
in which the overdots and primes denote the derivative with respect to t and s, respectively; ψ is the Kelvin-Voigt damping coefficient of the pipe, EI is the flexural rigidity of the pipe; m is the mass of the empty pipe per unit length, U is the steady flow velocity, M is the mass of the internal fluid per unit length; W b is the lateral deflection of the pipe at the location of the IDVA attachment; C is the damping coefficient of the damper, K is the stiffness of the spring, I 0 denotes the inerter of the IDVA, V is the displacement of the additional mass; δ(s − s b ) is the Dirac delta function with s b being the location of the IDVA.
The governing equation of the IDVA takes the form where m 1 is the mass of the attached rigid body.
Introducing the following dimensionless quantities Eqs. (1) and (2) may be written in dimensionless forms as and where the prime and overdot on each variable now denote the derivative with respect to ξ and τ , respectively. The nonlinear term N(w) in Eq. (3) is given by
Galerkin Discretization
The governing equations for the pipe and IDVA are in partial differential forms, which can be discretized by using several effective methods including Galerkin approach [39][40][41] and differential quadrature method [42][43][44]. In the following calculations, the Galerkin approach is used to discretize the partial differential equations. Based on this method, the displacements of the pipe can be given by where ϕ r (ξ ) is the base eigenfunctions of a plain cantilevered beam, and q r (τ ) is the corresponding generalized coordinates; N is the number of base functions used in the discretization. Substituting expression (6) into Eqs. (3) and (4), multiplying by ϕ i (ξ ) and integrating from 0 to 1, the following ordinary differential equations can be obtained where the overdots now denote the total derivative with respect to dimensionless time τ . In Eq. (7), [M], [C] and [K] represent the mass, damping and stiffness matrices for the linear parts and f nonl denotes the nonlinear term associated with various nonlinearities of the pipe system. In the following calculations, a four-mode Galerkin approximation will be utilized (N = 4) because the instability of the pipe system is usually associated with the lowest several modes.
By neglecting the nonlinear terms in Eq. (7), the eigenvalues of the pipe with IDVA can be obtained. According to the obtained eigenvalues for each mode, the stability of the pipe with the IDVA can be evaluated. When the dimensionless inerter in Eq. (7) is set as θ = 0 and the nonlinear term f nonl is absent, the eigenvalues of the pipe with DVA may be obtained, referring to [31]. The nonlinear oscillations of the pipe with IDVA can be predicted by numerically solving the nonlinear governing equations via a fourth-order Runge-Kutta iteration algorithm.
Results
In this section, the effect of IDVA on the dynamical stability and nonlinear responses of the pipe system is explored. For that purpose, the evolution of eigenvalues for the pipe and the attached mass as a function of the flow velocity will be shown first. Based on the stability analysis of the linear system, the nonlinear oscillations of the pipe and the attached mass will be further studied. Results for the dynamical behaviors of the cantilevered pipe and the attached IDVA for various system parameters will be presented mainly in the form of Argand diagrams, bifurcation diagrams and displacement-time diagrams.
Model Validation and Simple Comparison
The Argand diagram of a cantilevered fluid-conveying pipe without IDVA for φ = 0.001 and β = 0.213 is reproduced first to check the correctness of our approximately analytical solutions. The evolution of lowest four eigenvalues of the pipe without IDVA with increasing dimensionless flow velocity, u, is illustrated in Fig. 2. In this figure, it should be noted that Re(ω) is the dimensionless oscillation frequency, while Im(ω) is related to the dimensionless damping of the whole system. It is obvious that the flutter instability of the pipe occurs in the second mode, at u cr ≈ 5.8. It is also seen that the results plotted in Fig. 2 agree well with those obtained by Gregory et al. [45] and Paidoussis et al. [46], demonstrating that the approximately analytical solutions obtained in this work are reliable.
In order to explore the effect of IDVA on the basic dynamics of the fluid-conveying cantilever, typical results of Argand diagrams for a cantilevered pipe conveying fluid with DVA and IDVA are shown in Figs. 3 and 4. It is seen from Fig. 3 that flutter instability of the pipe with DVA occurs at u cr ≈ 6.4. This dimensionless critical flow velocity is much larger than that shown in Fig. 2, indicating that the DVA can improve the stability of the pipe system. It is also noted from Fig. 3 that the present result for the pipe with DVA agrees well with that reported in [31]. Furthermore, as shown in Fig. 4, once the inerter is added to the spring-mass attachment, the dimensionless critical flow velocity of the pipe system would increase further to u cr ≈ 6.9.
It is seen that the critical flow velocity for the whole system is about u cr = 6.4 In order to make the calculation results easier to understand, the dimensional values of critical flow velocity for the pipe with IDVA or without IDVA are briefly discussed. Taking a silicone tube [47] as an example, several key physical and geometrical parameters are chosen as: the flexural rigidity of the pipe EI = 0.0217 Nm 2 , the mass of the empty pipe per unit length m = 0.19 kg/m, the density of the internal fluid per unit length ρ = 1000 kg/m 3
Effect of IDVA on the Critical Flow Velocity
In this subsection, the stability of the cantilevered fluid-conveying pipe for various system parameters of the IDVA will be investigated. The dimensionless critical flow velocities u cr of the pipe system as a function of dimensionless inerter and mass ratio of the IDVA for φ = 0.001, k = 28, β = 0.213, ξ b = 0.5 are indicated in Fig. 5. The results shown in this figure demonstrate that the critical flow velocities of the dynamical system increase with the increase of the damping coefficient. As shown in Figs. 5a-5c, with the increase of dimensionless mass ratio, a smaller value of dimensionless inerter of the IDVA is required to achieve higher crtitical flow velocity. Fig. 6 shows that the critical flow velocities of the pipe system as a function of dimensionless inerter and mass ratio for three different values of IDVA location (ξ b ). It is found that the critical flow velocities of the system in the case of ξ b = 0.75 change slightly only. The peak value of the critical flow velocity of the dynamical system occurs at ξ b ≈ 0.5, as can be seen in Figs. 6a-6c. The critical flow velocities of the pipe system as a function of dimensionless stiffness and mass ratio of the IDVA are shown in Fig. 8, for three given values of inerter (θ). By inspecting Figs. 8a-8c, a remarkable feature can be found: with the increase of the inerter of IDVA, the critical flow velocities of the pipe with IDVA would reduce. That is to say, a larger value of inerter is detrimental to the system's stability. Figs. 9a-9c plot the results of critical flow velocities for two independent parameters, stiffness and inerter, with three different values of IDVA location (ξ b ). It is found that when the IDVA location is closer to the free end of the pipe, the stiffness of the IDVA needs to be decreased to obtain higher critical flow velocity. Among the three cases shown in Figs. 9a-9c, it is noted that the maximum critical flow velocity appears at ξ b = 0.5.
Figs. 10a-10c show the critical flow velocities of the pipe with the IDVA being attached at ξ b = 0.5, for three different values of mass ratio (α). It is observed that with the increase of mass ratio, the stiffness of the IDVA needs to be increased to achieve higher critical flow velocity. Upon comparing the three diagrams of Fig. 10, again, it is found that higher critical flow velocity can be realized in the case of α = 0.1. In Fig. 11, the critical flow velocities as a function of dimensionless stiffness and damping coefficient of the IDVA for three different values of inerter are shown. Once again, it is found that the stability of the pipe can be better enhanced by using the IDVA with smaller values of inerter.
The critical flow velocities of the pipe system as a function of dimensionless stiffness and damping of the IDVA for φ = 0.001, θ = 0.02, β = 0.213, ξ b = 0.5 and three different values of mass ratio of the IDVA are plotted in Fig. 12. Among the three cases shown in Fig. 12, it is noted that with the increase of mass ratio, the spring stiffness of the IDVA needs to be increased to achieve higher critical flow velocity, and the peak value of the critical flow velocity of the system appears at α = 0.1.
Effect of IDVA on Nonlinear Oscillations of the Pipe
In this subsection, the nonlinear oscillations of the cantilevered pipe conveying fluid with IDVA will be studied when the flow velocity is successively increased. Some fascinating dynamical behaviors will be shown by analyzing this modified system. In order to illustrate the effect of IDVA on the pipe, nonlinear responses of the cantilevered pipe conveying fluid with and without IDVA are examined.
Before embarking some numerical calculations, it is recalled that the argand diagrams for the cantilevered pipe conveying fluid attached with DVA and IDVA show some difference (see Figs. 3 and 4). Therefore, it is expected that the nonlinear responses of the pipe with DVA and with IDVA are also different. To illustrate this, two bifurcation diagrams for the pipe with DVA and with IDVA are plotted in Fig. 13. It is immediately seen that the flutter instability of the pipe with DVA occurs at a higher flow velocity if compared with that of the pipe without IDVA. Furthermore, the pipe with IDVA shows a much higher critical flow velocity. These critical flow velocities for flutter instability based on nonlinear theory are consistent with the linear results shown in Figs. 3 and 4. It is noted that the oscillation amplitudes of the pipe with DVA or with IDVA are generally smaller than that of the plain pipe without any attachments, for a wide range of flow velocity. Interestingly, the pipe with a DVA loses instability at about u cr = 6.4, then regains stability at about u = 7.2, and finally becomes unstable with further increasing flow velocity. When the flow velocities are higher than u = 10, the oscillation amplitudes of the pipe with and without mass attachment has no obvious difference. The dynamic responses of the DVA and IDVA show similar behaviors as the pipe, which can be observed in Fig. 13b. When the value of the mass ratio of the IDVA is increased to α = 0.15, it is indicated that the dynamical behaviors of the pipe shown in Fig. 14 exhibit some difference from those given in Fig. 13 for α = 0.1. It is noted that the pipe with either DVA or IDVA becomes unstable at a critical flow velocity higher than the flow velocity for flutter instability of the pipe without IDVA. One can see that the pipe with DVA for α = 0.15 no longer switches between stable and unstable sates when the flow velocity is successively increased. Similar phenomenon can be observed in Fig. 14b for the dynamic responses of the IDVA.
In the case of α = 0.1 and θ = 0.04, the bifurcation diagrams for the pipe system with flow velocity as the variable parameter are plotted in Fig. 15. It is seen from this figure that the oscillation amplitudes of the pipe with IDVA at the location of the attachment are larger than the counterpart of the pipe with DVA for most flow velocities, while the oscillation amplitudes of the IDVA are generally smaller than that of the DVA. For both cases, the attachment mass can absorb some energy of the whole system, resulting in the decrease of the oscillation amplitudes of the cantilevered pipe. Fig. 16. In this case, the pipe with DVA has smaller oscillation amplitudes than that of the pipe with IDVA. This implies that the inerter has a negative effect on the vibration suppression of the pipe in such a case. The bifurcation diagrams for the pipe system with flow velocity as the variable parameter for k = 28, α = 0.1 and two different values of inerter are shown in Figs. 17 and 18. It is seen that the critical flow velocity of the pipe with IDVA for θ = 0.04 is slightly larger than that of the pipe with IDVA for θ = 0.02, indicating that the inerter has a positive effect on the stability of the system in this case. It is also observed that, with increasing flow velocity, the oscillation amplitudes of the pipe with IDVA (or DVA) at ξ = 0.5 would increase gradually to relatively large values and thereafter decrease to relatively small values. When the flow velocity is sufficiently high (e.g., u = 10.5), the oscillation amplitude of the pipe at ξ = 0.5 would tend to a constant value. Fig. 19 are for u = 6.2. It is obvious that the pipe without IDVA undergoes a periodic oscillation while the displacements of the pipe with DVA or IDVA is towards to zero. In this case, therefore, the pipe with IDVA or DVA becomes more stable than the pipe without IDVA. It is clearly seen from Fig. 20 that the pipe undergoes a periodic motion, either with DVA or without IDVA, while the pipe with IDVA keeps still for u = 6.8. Moreover, the oscillation amplitudes of the pipe without IDVA are much larger than the counterpart of the same pipe with DVA. In the case of u = 7.3, the displacement-time curves are shown in Fig. 21. It is fascinating that the pipe without and with IDVA occurs flutter instability while the pipe with DVA keeps stable. That is to say, the pipe with DVA shows a better stability performance than the same pipe but with IDVA in case of u = 7.3. When the flow velocity is up to u = 8, the result shown in Fig. 22 indicates that the pipe undergoes a periodic motion, even if the pipe is installed with IDVA or DVA. It is also noted that the oscillation amplitude of the pipe without IDVA is larger than that of the pipe with IDVA. Unfortunately, the oscillation amplitude of the pipe with IDVA is larger than that of the pipe with DVA.
Conclusions
The dynamical stability and nonlinear responses of a cantilevered pipe conveying fluid with an IDVA added somewhere along the pipe length are explored in the present study. For a plain pipe without IDVA, the dynamical system loses stability via flutter when the flow velocity exceeds a certain critical value. For the same pipe but with an IDVA, the flutter instability would occur at a higher critical flow velocity. By constructing Argand diagrams for eigenvalues of the dynamical system, it is found that the damping coefficient, stiffness, location, weight and inerter of the additional IDVA do have effect on the stability of the pipe. Under certain conditions, the critical flow velocity of the pipe can be remarkably increased by adding the IDVA, and hence the stability of the pipe can be enhanced. The underlying reason of the enhanced stability of the fluidconveying pipe is associated with the transfer of energy from the pipe to the IDVA. To evaluate the effect of IDVA on the nonlinear behaviors of the pipe, the oscillations of the pipe and the IDVA are also calculated based on nonlinear theories. It is shown that the oscillation amplitudes of the pipe with IDVA are sometimes smaller than that of the pipe with DVA, and the oscillation amplitudes of the IDVA are always larger than that of the DVA. Therefore, the results obtained in this paper may be expected to be useful for the design of energy absorbers (or energy transfer devices) of fluid-conveying pipes by adding IDVAs somewhere along the pipe length. However, this does not mean that the IDVA is always better than a DVA for enhancing the stability and suppressing the oscillations of cantilevered pipes conveying fluid. Within some ranges of flow velocity, indeed, the DVA has a better performance, as shown in Figs. 13-18. Furthermore, for a pipe with both ends positively supported, buckling is the preferred form of instability since it is a conservative system in the absence of dissipation. When an inerter-based DVA is added to the supported pipe, the linear spring of the IDVA could change the effective bending stiffness of the whole pipe system. For this reason, it can be foreseen that the IDVA can affect the stability of pipes with both ends supported.
Funding Statement:
The authors gratefully acknowledge the support provided by the National Natural Science Foundation of China (Nos. 11622216, 11672115 and 11972167). | 5,558 | 2020-01-01T00:00:00.000 | [
"Engineering",
"Physics"
] |
Evidence of soft bound behaviour in analogue memristive devices for neuromorphic computing
The development of devices that can modulate their conductance under the application of electrical stimuli constitutes a fundamental step towards the realization of synaptic connectivity in neural networks. Optimization of synaptic functionality requires the understanding of the analogue conductance update under different programming conditions. Moreover, properties of physical devices such as bounded conductance values and state-dependent modulation should be considered as they affect storage capacity and performance of the network. This work provides a study of the conductance dynamics produced by identical pulses as a function of the programming parameters in an HfO2 memristive device. The application of a phenomenological model that considers a soft approach to the conductance boundaries allows the identification of different operation regimes and to quantify conductance modulation in the analogue region. Device non-linear switching kinetics is recognized as the physical origin of the transition between different dynamics and motivates the crucial trade-off between degree of analog modulation and memory window. Different kinetics for the processes of conductance increase and decrease account for device programming asymmetry. The identification of programming trade-off together with an evaluation of device variations provide a guideline for the optimization of the analogue programming in view of hardware implementation of neural networks.
conductance levels and the multi-level capability should be considered in a probabilistic fashion 19,32 . Moreover, for specific synaptic applications, some functional non-idealities exist in physical devices that could affect the network performance, such as non linear weight update and asymmetric behaviour between the processes of conductance increase (potentiation) and decrease (depression) 33,34 . Several works concentrate on optimizing the material stack to improve the linearity of the conductance change 35,36 . However, the linear behaviour is usually displayed only up to a certain number of pulses and in a restricted memory window. Another class of researches analyzes the impact of device dynamics and number of conductance levels on the performance of simulated neuromorphic networks 34,37,38 . Network simulations give some useful hints on how the details of synaptic plasticity influence the learning process 39 , though only theoretical conductance changes or representative device results are routinely considered. Indeed, operating parameters strongly determine the device dynamics and a complete device characterization in a diverse range of programming conditions is necessary to optimize the analogue behaviour and identify the robustness of the analogue operation.
A major source of deviation from the ideal linear weight update in physical devices is the presence of limits to the maximum conductance span achievable for a given programming condition due to a finite memory window. Conductance saturation close to the boundaries causes a deviation from linearity of the conductance dynamics and introduces a state-dependent conductance update. While Ziegler et al. 40 identified the basic aspects of Hebbian learning in bounded memristive devices, the presence of boundaries to the maximum synaptic strengths imposes a constraint to the storage capability 41 and impacts the learning process and the decay of the stored memory due to ongoing plasticity 29 . On the other hand, in the most general case of unbalanced probabilities of synaptic potentiation and depression, Fusi and Abbott 42 noticed the advantage in terms of memory capacity of softly bounded synaptic weights, in which the weight boundaries are approached asymptotically as the weight update tends to zero.
In this work, we study with a systematic approach the gradual evolution of conductance dynamics of filamentary HfO 2 -based resistive RAM upon train of identical programming pulses and as a function of a wide range of pulse voltage and time width parameters. We show that the soft bound phenomenological model identified by Fusi and Abbot 42 , however simple, captures the salient features of the experimental conductance change related to a state-dependent update and to a soft approach to the conductance boundary values. The model is then applied as metrics to identify the degree of analogue operation in the different programming regimes, namely no switching, gradual conductance change stimulated by train of identical pulses and conductance change achieved by a single programming pulse (binary regime). Further, we discuss the relationship between the experimental results, the proposed model and the device physics. Device non-linear switching kinetics is recognized as the origin of the transition between different dynamics and motivates the crucial trade-off between the degree of analogue modulation, programming symmetry of the potentiation/depression operations and memory window, as a general feature of filamentary-type resistance switching devices. Finally, evaluating the variability and reproducibility around the average resistance evolution allows to give a quantitative estimation of device performances in the different regimes of operation. While several publications highlight the relative immunity to device variations of spiking neural networks due to their intrinsic properties 25,37 , the reported analysis provides some insight on the actual variation and robustness of analogue operation.
Results
Representative features of conductance dynamics. The analysed devices present a TiN/HfO 2 /Ti/TiN structure and their operation is based on a filamentary switching mechanism, according to previous papers 16,43,44 , initiated by a forming process as described in the Methods section. Conductance increase (potentiation) and decrease (depression) dynamics is characterized by pulsed measurements in a wide space of the programming parameters varying the time width and voltage amplitude (Δt and ΔV) of the applied pulses while keeping Figure 1. (a) Schematic analogy between ionic motion in a biological synapse (e.g. Na + , Ca 2+ ions), in between two neurons, and in an RRAM device (e.g. metal or oxygen ions) connecting two electronic neuron units. (b) Examples of conductance series of potentiation (top) and depression (bottom) operations obtained for 300 identical pulses with Δt = 10 μs. Each colour corresponds to a different series of 300 pulses starting from a similar initial state and operated with increasing absolute potential between 0.55 V (left) and 0.9 V (right) with a step of 50 mV.
Scientific REPORTS | (2018) 8:7178 | DOI:10.1038/s41598-018-25376-x constant the initial state of the memory cell. The aim is to identify the operating regions for increasing and decreasing connectivity strength as a function of the programming conditions of the incoming pulses.
Trains of identical pulses were applied to the device and its conductance was continuously monitored by a reading operation after each pulse. This procedure was repeated after reinitializing the device to a similar initial state and varying the pulse voltage or time width for each series. Further details can be found in the Methods Section. An example of the conductance sequences resulting from 300 identical pulses repeated with increasing ΔV and fixed Δt = 10 μs can be found in Fig. 1b for both potentiation and depression operations. The two reported examples already anticipate some of the main experimental findings. For low enough ΔV, i.e. in the first few sequences up to the dashed line, no switching occurs and the conductance value does not change during the pulse sequence since the effect of each pulse is so low to result in no visible change of the cumulative conductance. For pulse amplitudes in between the dashed and the dotted lines, the conductance modification produced by each pulse becomes appreciable and sums up to produce a gradual analogue potentiating or depressing trend. Increasing ΔV further, the effect of the first pulse becomes gradually dominant and eventually it produces most of the overall conductance change (right side of the dotted line for potentiation), resulting in a digital switching behaviour. This is particularly evident in the last potentiation series of Fig. 1b, for which only two conductance levels can be distinguished. To observe a similar digital behaviour for the depression series, Δt should be further increased as reported in the Supplementary Fig. S1.
In summary, three main switching regimes can be identified as separated by two thresholds roughly identified by dashed and dotted lines in Fig. 1b. The first threshold (in the following referred to as switching threshold) marks the transitions from no switching to the activation of gradual switching, while the second threshold indicates the onset of digital switching. The different position of the switching thresholds for potentiation and depression, though roughly placed in this figure just as a guide for the eye, already remarks the asymmetry in terms of conductance dynamics of filamentary RRAMs. In the following, a complete experimental analysis will allow the quantitative positioning of the switching thresholds in relation with the kinetics of the switching process and the demonstration of the bounded nature of the conductance dynamics.
Bounded nature of the cumulative conductance change. Figure 2 shows representative conductance trends produced by sequences of 300 identical pulses. In each frame one programming condition, namely ΔV or Δt, has been fixed letting the other parameter vary. The conductance dynamics evolves among the three regions of no substantial modification, gradual switching and abrupt switching (from dark to clear colour tones) increasing (Fig. 2c,d). A complete set of plots containing all the acquired curves can be found in the Supplementary Fig. S2.
A significant feature common to all the conductance curves is the attainment of conductance saturation for a sufficiently high number of pulses. In particular, even when curves look almost linear for a limited pulse number, increasing the number of pulses will eventually drive the device in saturation and the overall trend will flatten and deviate from linearity. The non-linear behaviour is linked to the uneven conductance changes induced by each pulse. In particular, the effect produced by a single pulse gets smaller and smaller as the saturation level is asymptotically approached. This suggests that conductance modulation is a function of the current device state.
In Fig. 2 it can also be noticed that increasing the strength of the programming conditions (i.e. |ΔV| or Δt) the conductance window expands accordingly at the expense of the smoothness of the conductance change.
Features of conductance dynamics. Figure 3 emphasizes the experimental asymmetries between conductance evolution for potentiation and depression operations, which rise as a consequence of the inherent differences in the corresponding switching kinetics. It is demonstrated that providing the device with the same strength of the stimulus (same Δt and absolute ΔV), either qualitatively different smoothness of the conductance change ( Fig. 3a) or unmatched conductance windows (Fig. 3b) are obtained comparing potentiation and depression. Analogue modulation for both operations is achieved if an unbalanced ΔV is introduced, as in Fig. 3c.
The source of such asymmetries lies in the inherently different switching kinetics (Fig. 3d,e) of the processes responsible for conductance potentiation and depression, i.e. filament formation and dissolution. The memory window (defined as the conductance ratio G max /G min after 300 pulses) is portrayed as a function of the pulse programming parameters. The memory window is close to 1 for low ΔV and Δt, meaning that no switching occurs, while increasing both programming parameters a switching threshold is crossed (dashed line in Fig. 3d,e) which marks the onset of the conductance change. Above threshold, higher programming values correspond to higher memory windows. The switching threshold is arbitrary considered as the minimum voltage (or time width) that has to be applied to produce a 10% conductance change and we will demonstrate in the following that it corresponds to the threshold for gradual conductance change. The logarithm of the switching time follows a straight line implying an exponential relation between ΔV and Δt 32 . Such exponential dependence has been referred to as an extension of the voltage -time dilemma 45 and descends from the non-linear switching kinetics typical of valence change memory cells, as illustrated in ref. 46 . Different slopes of the lines identifying the switching kinetics can be seen in potentiation and depression, in general agreement with results in the literature [46][47][48][49] . In fact, the voltage -time slope of 50 mV/decade evaluated from Fig. 3d for the potentiation process corresponds to the one reported for devices with similar composition 46 . On the other hand, the steeper slope of 130 mV/decade estimated from Fig. 3e highlights the slower switching kinetics of the depression process 47,48 . This finding evidences that an inherent asymmetry of the switching mechanisms associated with potentiation and depression operations exists Figure 3. (a-c) Examples of potentiation (in blue) and depression (in red) operations to evidence the device asymmetry. Δt is fixed at 100 μs in all cases. The absolute ΔV is set to 0.8 V in (a); 0.7 V in (b); 0.7 V for potentiation and 0.9 V for depression in (c). (d,e) Memory windows, defined as the conductance ratio after 300 identical pulses, as a function of Δt and ΔV for the potentiation (d) and depression (e) processes. A dashed line is added to highlight the switching threshold, defined for a minimum variation of 10%. and is reflected into the conductance dynamics. For instance, it is immediately evident that the main effect of the slower kinetics of depression results in an overall lower memory window when compared to potentiation for the same applied absolute values of the voltage. A quantitative assessment of the characteristic features reported in the latter two paragraphs will be provided in the following.
Analysis of the conductance update.
In the previous section, several characteristics of the device conductance dynamics have been introduced. The main features to take into consideration when modeling the device are the presence of limits to the cumulative conductance modulation and a non-constant conductance change that progressively decreases as the conductance boundaries are approached. Moreover different device dynamics, which can be divided into three main switching regimes, can be observed. Therefore, a device model should correctly reproduce all these salient features. A general formulation of a synaptic device displaying maximum and minimum weight boundaries approached with smaller and smaller steps was proposed by Fusi and Abbott 42 as a multiplicative update rule with a weight-dependent weight update. When the synaptic weight is comprised for simplicity between 0 and 1, the following generalized soft bound equations (1) and (2) were proposed for the incremental (δw + (w)) and decremental (δw − (w)) weight change in potentiating and depressing events, respectively.
In this model α is a multiplicative parameter which determines the magnitude of modification induced on the synaptic strength by a plasticity event, thus 1/α is proportional to the number of discrete accessible synaptic states. To add more generality to the update rule, a second positive parameter γ is added at the exponent of the recursive sequence, adapting the dependency of the weight update on the synaptic state. It is worth mentioning that a constant weight change, resulting in a linear weight dynamics, is described by a value γ = 0. A first formulation of weight dependent update rule is the one neglecting the parameter γ (or equivalently γ = 1). Letting the parameter γ varying above unity constitutes a general formulation of a weight dependent update rule. The description of the influence of α and γ parameters on the weight dynamics is reported in the Supplementary Fig. S3.
In the following, we show that this simple synaptic model can be applied to a physical HfO 2 -based device and is able to capture the conductance dynamics in a complete set of regimes of operation, from no switching, to gradual analogue switching, to digital deterministic switching.
The experimental conductance curves previously discussed are fitted according to the generalized soft bound model (further details are reported in the experimental section) and the fitting lines are visible in Figs 1b and 2 (see also Supplementary Fig. S2) as black solid lines superposed to the measured data points. The resulting fitting parameters are mapped into the pulse time width and voltage space in Fig. 4a-d for potentiation and depression, respectively. For low voltage and short pulses (top left corner of the maps), the pulses produce no appreciable conductance variation and the flat response can be well reproduced by α ≈ 0, since only one stable effective state exists in the system. At the opposite, for high values of ΔV or Δt where the device exhibits a digital behaviour, the system has only two stable states since one single pulse is sufficient to produce the maximum conductance change and the conductance trend can be reproduced by α ≈ 1. In between these two extremes, the device shows a gradual variation of the conductance before reaching saturation, and α assumes values 0 < α < 1. Two dashed lines have been inserted in the 2D maps to highlight the separation between the three switching regions. The leftmost lines correspond to the switching thresholds extracted from Fig. 3d,e and separate the region of no significant conductance variation (no switching region) from the analogue switching region, while the rightmost lines are positioned at α values close to 1 and signal the onset of the digital switching region.
It should be stressed that a fit to a simple soft bound model (γ = 1) is roughly able to replicate the different regimes. This would be equivalent to fit the cumulative conductance with an exponential law. However, in particular for intermediate states slightly above the switching threshold where the device shows a richer multi-state behaviour, this can not be accurately reproduced by the simple one-parameter law. To this end, the introduction of the second parameter γ intervenes to round off and adjust the dynamics closer to the experimental trend. While the value of γ has no real influence when approaching the two extremes of no switching and digital switching and γ ≈ 1 can well fit the data, in the region with analogue modulation γ assumes values above 1. A summary of the values of γ extracted from fit is reported in Fig. 4b,d for potentiation and depression, respectively.
In potentiation, α increases rapidly above the switching threshold and eventually comes close to 1 in a large portion of the inspected (ΔV, Δt) space (Fig. 4a). The region with accessible analogue behaviour, where the fitting procedure locates the highest number of levels, can be best visualized by the region with γ above 1 in Fig. 4b. This region has a width of approximately 200 mV and follows the general slope of 50 mV/dec previously identified for the switching dynamics, so that a careful choice of ΔV should be made to achieve analogue programming. Unlike potentiation, during depression the device goes in an almost digital (2 states) behaviour, which is represented by α ≈ 1, only in the bottom right corner of Fig. 4c. This happens since in depression the threshold identifying the digital behaviour is shifted to higher pulse programming values than in potentiation.
As discussed above, in the time -voltage plot α and γ follow the same trend of the memory window. This becomes apparent by comparing Figs 3d,e and 4. Indicatively, the best analogue operation can be achieved slightly above the switching threshold. This leads to a fundamental consideration about the necessary trade-off between multi-level behaviour and window of operation. For the considered device, in the area of analogue switching roughly identified in Fig. 4, the memory window does not exceed a value of 4 in potentiation and 3 in depression, above which digital switching prevails. From general considerations based on mean-field simulations, Fusi and Abbott 42 demonstrate that in a learning network, in the general case of unbalanced rate of potentiation and depression events, hard weight boundaries (e.g. linear weight update and a hard cut-off at the boundaries) result in a sub-linear relation between storage capacity and number of weight levels (1/α). Conversely, softly bounded synapses with γ > 0 ensure the memory capacity to always scale linearly with the inverse step size 1/α, thus improving the storage efficiency of the network. Moreover, large γ values above 1 would guarantee a better memory capacity, even if it comes at the price of higher dispersion of the memory weights 42 . In summary, even when markedly non linear, the analysed device realizes a soft approach to the boundary, thus improving the memory performance in terms of capacity for adaptive learning under changing information input.
Variability. When the device is conveniently operated, the conductance of the device can be modulated through hundreds of values (∝1/α). However, a fine consideration on the actually available conductance levels should take into account the overlapping variability. In order to analyse variability, the conductance change induced by each single pulse (dG) is plotted as a function of the conductance value (G) in Fig. 5 for different pulse voltages, and a fit of the renormalized soft bound law is superposed to the experimental data. Regarding potentiation (Fig. 5a), the conductance change caused by one pulse is large at low conductance values and becomes negligible and immersed in the variability as the conductance value increases. The opposite trend, together with opposite sign, is visible for depression in Fig. 5b. This evidence demonstrates the gradual approach of potentiation and depression evolutions towards the conductance boundaries. As the pulse voltage increases, dG also increases at low initial conductance value for potentiation and high initial conductance value for depression. Correspondingly, high voltage pulses (|ΔV| = 0.95 V) define only two conductance values, which is a clear indication of digital operation. The lower the voltage, the lower dG for each initial conductance value and the narrower the covered conductance range.
Even though the pulse series are acquired above the switching threshold, only the dG corresponding to the first pulse stands far from the x axis, while the next points follow a small trend at intermediate conductance levels which is highlighted by the soft bound law, then they accumulate along the x axis. From the scattering of dG around the x axis close to accumulation, the maximum pulse-to-pulse variability can be estimated at about 50 μS. As a rule of thumb, two distinct device levels can be effectively distinguished only if their relative distance lies above the pulse variability. As an example, considering the maximum window of 1 mS obtained in the analogue region, only about 20 distinct levels can be effectively distinguished in a deterministic manner if the maximum pulse variability is considered in this device. However, it should be stressed that this is only a lower limit estimation.
Another source of variability comes from cycle repeatability. Once the device has been characterized for various programming conditions, the different switching regimes are known and any (ΔV, Δt) couple is associated with a specific memory window and pulse dynamics. However, repeating the same pulse series numerous times will result in slightly different dynamics.
In order to probe the cycling variability, special care should be paid in choosing similar memory windows for both operations according to Fig. 3 to avoid a drift of the overall conductance for repeated potentiation and depression sequences 50 . This happens since for long enough pulse series the cumulative conductance tends to saturate to a conductance level that depends on the specific programming conditions. Moreover, as previously discussed, the device dynamics as well as the switching threshold differ between potentiation and depression operations. For this reason, in general asymmetric ΔV and/or Δt should be applied to obtain analogue cumulative behaviours for both operations.
A subset of 10 depression/potentiation over a total of 200 cycles with programming parameters chosen within the analogue region can be found in Fig. 6a. In this region, the device exhibits multiple levels with a maximum memory window of about 3. Figure 6a shows the first ten cycles programmed with Δt = 10 μs; ΔV = +0.9 V for depression and ΔV = −0.7 V for potentiation. 500 consecutive identical pulses were performed alternatively for each operation to ensure reaching the conductance saturation. In black line, a fit to data of the generalized soft bound law is reported (see Supplementary Fig. S4 for details). In order to better evidence the impact of cycling variability, in Fig. 6b the evolution of the average conductance as a function of the number of pulses is plotted with the associated standard deviation, in grey. A logarithmic scale is chosen for the x-axis to highlight the effect of the first pulses, which produce the greatest variation in conductance. In this plot, the dispersion appears fairly constant with an average value of 65 μS, even though a decreasing trend towards the conductance boundaries can be detected (Supplementary Fig. S5).
The fit of the repeated cycles to the soft bound model allows to extract the variability of the parameters describing the conductance dynamics. The distribution of the α parameter, proportional to the inverse number of accessible conductance states, is reported in Fig. 6c. Quite large distribution widths of ~0.2 and ~0.3 for potentiation and depression respectively can be observed, which highlights the fairly high dispersion usually encountered in filamentary-type resistive switching devices. Nevertheless, the values of α clearly indicates stable and reproducible multi-state behaviour. Notably, a clear asymmetry in terms of analogue levels can be identified between depression and potentiation in Fig. 6c. This unbalance is commonly encountered for analogue switching in various resistive switching devices and has been reported previously for different systems and programming conditions 51,52 . The reason lies in the narrower range of ΔV at equal Δt available for analogue potentiation with respect to depression for a given memory window. Indeed, it is possible to achieve symmetric operations, as visualized in the second example reported in Fig. 6d and highlighted by the α distribution in Fig. 6f, but a compromise has to be paid in terms of memory window. The standard deviation around the average conductance trend exhibits a mean value of about 65 μS, comparable to the one detected in the previous case, as shown in Fig. 6e. However, the reduced memory window increases the incidence of conductance variation from 6% to 15% of the overall conductance span.
Discussion
In summary, we explored the switching dynamics of HfO 2 -based resistive memory cells by sequences of identical programming pulses with the aim of enhancing the comprehension of analogue operation for improving the storage capacity in hardware neural networks. Based on the salient features of the cumulative conductance series, a fitting model was applied to provide a quantification of the degree of analogue modulation as a function of the programming pulse parameters Δt and ΔV.
As a first observation, three main regions can be identified according to the memory window and the number of analogue states. One threshold can be defined that separates the region in which the programming pulses produce no appreciable conductance change (no switching region) from the region of analogue modulation. At higher Δt or ΔV, a second threshold separates the analogue region from the region of digital switching. This analysis allows to estimate the programming extent of the region of analogue modulation and its dependence on It is worth pointing out that also the retention of the conductance states of the devices is a critical point for the realization of networks able to learn and to record the training process and is a matter of investigation of the very recent literature 53,54 . Representative results attesting the retention at short time-scales of the present device are reported in the Supplementary Fig. S6. The long-term retention of the boundary conductance levels up to 10 years at elevated temperatures has been demonstrated in similar devices by the same authors in previous works 15,16 .
A conductance dynamics exists in the analogue region since multiple pulses are necessary to produce the maximum conductance span for the specific programming parameters, as opposed to the digital region. If the switching time is defined as the pulse Δt necessary to achieve the maximum conductance span for a given ΔV, applying shorter pulses allows to sample different device states producing a cumulative conductance behaviour. However, the highest number of multi-state conductance levels is observed for the shortest pulses close to the switching threshold, where the memory window is also the lowest. This leads to an unavoidable compromise between number of analogue levels and programming window. This constraint is not unique to the investigated device and can be generalized to many memristive systems. In fact, even if not explicitly quantified, the same inverse relation between degree of analogue modulation and memory window can be identified in a lower current regime 23 and in other resistive switching devices based on different materials 19,55 and in phase change materials due to unavoidable constraints produced by the non-linear switching kinetics typical of memristive systems 46,56 .
An indication of the non linear switching kinetics in the inspected device can be identified from the voltagetime dependence of the thresholds separating the switching regimes. Indeed, an exponential dependence is identified between the voltage amplitude and the time width of the applied pulses with a slope of about 50 mV/dec in potentiation and 130 mV/dec in depression that follows previous estimations on similar material systems 46,48 . The identification for both processes of a single time-voltage slope covering almost 4 orders of magnitude of Δt is the indication of a single process prevailing in the time range covered by the experiments. In other systems, e.g. in electrochemical metallization cells, two or even three distinct slopes were observed due to different limiting processes prevailing in different time-voltage regions 46 . The presence of only one time-voltage slope indicates that, throughout the entire investigated time range, the limiting process is always the same and it is usually identified with the ionic motion inside the dielectric materials which requires high local electric fields and temperatures 30,46,57 . Despite a large literature exists which deals with the switching kinetics of resistive memory devices 31,[45][46][47][57][58][59][60][61] , the relation between kinetics and analog dynamics is not analysed in details. Such relation is investigated in the present paper and it allows elevating all the discussion to a level of generality that goes beyond the secondary factors influencing the switching process, e.g. space charge layers, mobility or chemical inhomogeneities (due to local and nanoscale deviations from stoichiometry and uniform ion concentration) 46,62,63 .
The asymmetric kinetics observed for the potentiation and depression processes can be explained if thermal and electro-chemical effects are considered in the system 31,51 . It was previously reported that a positive feedback loop establishes during potentiation due to a significant joule heating which accelerates the switching behaviour, leading to the steep time-voltage slope and thus to the high voltage dependence identified for potentiation 64 . At the opposite, a negative feedback loop during depression gives rise to the more gradual switching behaviour 65 . The different switching kinetics reflects also on the higher programming parameters necessary for depression with respect to potentiation to achieve the same memory window. Based on the analysis summarized in Fig. 4, the programming space available for symmetric analogue operations can be obtained by intersecting the analogue regions of the two processes while also considering similar memory windows to avoid one operation to be overwhelming with respect to the other. While a device asymmetry may not be a big concern by itself since it can be translated into an unbalance of probabilities for synapses being potentiated or depressed, the different time-voltage dependence of the analogue region restricts the space of available symmetric programming and can influence the network performance 66 .
A second major observation in the inspected conductance dynamics is the occurrence of a bounded maximum conductance span dependent on the specific programming pulses. This behaviour occurs naturally in a physical device since the maximum memory window is also limited by the switching kinetics of the system as discussed above. The main consequence is that the spacing between levels tends to diminish as the conductance limits are approached. The presence of boundaries for the maximum synaptic strength was discussed theoretically in the literature in the framework of adaptive networks 41 and the introduction of a simple cut-off (hard bound) which sets a sudden flattening of the cumulative conductance was ruled out as optimal choice for synaptic behaviour 42 , while a gradual approach to the conductance boundaries as the one observed in the inspected device can maximize the memory capacity of the network.
Besides theoretical considerations, an approximation of evenly spaced levels, or equivalently a linear cumulative behaviour, can be obtained only in a restricted pulse interval far from conductance saturation. In a practical hardware implementation this would require additional circuitry to limit the conductance update, complicating the design of the network. On the other hand, recent works focus on material stack optimization to improve the linearity of conductance change as a benchmark for good synaptic characteristics 36,67 . This in practice corresponds to increase the number of analogue levels so that conductance saturation is pushed at higher pulse number and for practical applications the synaptic weight can be considered almost linear in the initial part of the cumulative curve. In this respect, based on the proposed analysis, the figure of merit is 1/α, proportional to the inverse number of levels. Considering as a benchmark the recognition of MNIST handwritten digits, Querlioz et al. 37 demonstrated by network simulations that the recognition rate of the digits remains almost unvaried up to α . 0 05, then it declines rapidly for higher values, meaning that at least dozens of separate levels are necessary for this specific task. Even if the exact number of levels also depends on the specific update rule, the magnitude of this parameter can serve as a benchmark to compare different devices and different programming conditions.
Finally, for a full account of the analogue programming space, it is necessary to include also the superposed device variability since two levels may not be effectively distinguishable if the variability is greater than the level separation. Several works in the literature demonstrate the high robustness of neural network implementations to variability 37,66 . However, a gap exists between network simulations and device characterization. In this work, two types of device variability which could influence the conductance dynamics are scrutinized: pulse-to-pulse and cycling variability. Both types of variability affect the effective number of conductance levels covered by the device depending on the memory window of operation and must be taken into account to evaluate the performances of the device. In summary, the device characterization and analysis proposed in this work allow to systematically quantify the degree of analogue operation in different regimes and serves as a starting point for a reasoned evaluation of the device performance and, secondly, for the identification of specific programming conditions for practical applications. Moreover, the present work may serve as a reference for the future optimization of synaptic devices, e.g. displaying at the same time low values of the α parameters for both potentiation and depression, good symmetry and large conductance windows. It must be cited that only very recent works deal with the optimization of some of these features, through the use of alternative materials or combination of materials and possibly moving towards an interface dominated switching process 35,52,53,68 .
Conclusions
In this work, we analyse the analogue conductance dynamics of TiN/HfO 2 /Ti/TiN memristive devices based on a filamentary switching mechanism. The main features of the experimental cumulative conductance change, namely the presence of conductance saturation and a slow approach to the conductance boundaries, can be reproduced by a soft bound model, which was recognized in literature as a way to improve memory capacity in the network in presence of bounded synapses. The application of this simple phenomenological model allows to clearly identify three regions of operation: no switching, gradual analog modulation and digital (almost two-level) switching as a function of the programming pulse parameters. It is found that both the demarcation lines separating the switching regimes and the memory window follow a trend in terms of pulse amplitude -timewidth that can be related to the physics of the switching kinetics in memristive devices. From this observation stems the necessary compromise between degree of analog modulation and memory window. Moreover, the different kinetics for the processes of conductance increase and decrease explain the asymmetry in terms of analogue programming space between the two operations. Finally, pulse variation and cycling repeatability are evaluated to quantify the variability superposed to the average conductance variation affecting the effective number of conductance levels. The results highlight the possibility to operate the device with robust analogue modulation and symmetric conductance processes. We develop an approach that takes into account the salient features of the device and, based on the device physics, can determine the device performance and the best operating regions and trade-offs. This analysis is fundamental for practical device implementation in hardware neural networks. Electrical characterization. Electrical characterization was performed using a device parameter analyzer (B1500A, Keysight Technologies) equipped with high resolution source measurement units (HR-SMU) and a semiconductor pulse generator unit (SPGU, B1525A from Keysight Technologies). During DC operation, the potential was applied to the top electrode through a SMU unit, while the bottom electrode was held grounded through a second SMU unit. For pulse measurements, the SPGU unit was connected to the top electrode, while the bottom electrode was grounded through a SMU unit which provided current monitoring. An external custom switch board was built to provide a convenient selection of one of the two measurement configurations (DC or pulsed characterization) 26 .
Methods
Device initialization. For bipolar switching operations, the devices require an initial electroforming operation. This step was carried out with a current-controlled sweep up to 300 μA in order to limit the maximum current passing through the device. After the initial forming step, a few DC cycles were performed by voltage sweeps between −1 V and +1 V with 1 mA maximum current limitation during potentiation and no maximum current limitation during depression operations (Supplementary Fig. S7).
DC operations were applied to provide two well defined initial states within the resistance distributions of the high resistance state (HRS) and low resistance state (LRS). In case that the cell state lies outside the HRS or LRS resistance levels, a few DC cycles are sufficient to bring the device resistance within one of these resistance levels. Moreover, DC characteristics provide an indication of the final conductance achieved for a given maximum applied voltage. I-V curves and resistance distributions can be found in the Supplementary Fig. S7.
Pulsed characterization. Starting from the initial state defined by DC operations within the resistance distribution of either the LRS or HRS, a pulsed characterization was carried out with a sequence of 300 identical pulses with a period of 80 ms and a reading operation at 100 mV after each pulse. Potentiation curves were obtained starting from the HRS, while depression curves were started from the LRS. In order to evaluate the effect of the programming conditions, i.e. Δt and ΔV, after each sequence the cell was re-initialized by DC operations and a new sequence was performed with slightly modified programming parameters. ΔV of the applied pulses was span between 0.3 (−0.3 V) and +1 V (−1 V) with 50 mV step for depression (potentiation) operations. With the intention to avoid any irreversible damage to the memory cell, the maximum applied voltage was limited to the maximum value applied during comparatively slow DC operations. Δt was varied between 100 ns and 300 μs with two series per decade for both operations with a fixed pulse rise and fall time of 40 ns. It is worth to emphasize that no external current limitation was applied during pulsed characterization.
Fitting procedure. Conductance series were fitted according to the generalized soft bound model. Even if equations (1) and (2) represent discrete recursive sequences, in first approximation the elemental step δw/δn can be integrated over δn to derive equations (3) and (4) for the cumulative evolution of the weight w, which is the quantity actually measured, as a function of the number of pulses n. 1 1 Equations (3) and (4) need then to be renormalized between G min and G max , which set the initial conductance value after DC initialization and the maximum (or minimum) saturation value as extracted from fitting. During fitting procedure, α was initialized close to lowest expected value (1 × 10 −3 ) and the fit was let increasing this value up to 1, while γ was initialized close to 1 and its value was free to increase up to 10. A robust linear least-squares fitting method with least absolute residual method was applied to avoid excessive dependence on outliers given the not negligible pulse-to-pulse variability. In order to rescale the fitting equation to the actual conductance interval, the initial conductance at n = 0 was fixed from the pulse series, while the maximum (or minimum) conductance at the end of the pulse series was free to adjust from fit up to double of the value measured at the end of the pulse sequence (n = 300). Data availability. The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request. | 9,482.2 | 2018-05-08T00:00:00.000 | [
"Computer Science",
"Engineering",
"Physics"
] |
Ion finite Larmor radius effects on the interchange instability in an open system
A particle simulation of an interchange instability was performed by taking into account the ion finite Larmor radius (FLR) effects. It is found that the interchange instability with large FLR grows in two phases, that is, linearly growing phase and the nonlinear phase subsequent to the linear phase, where the instability grows exponentially in both phases. The linear growth rates observed in the simulation agree well with the theoretical calculation. The effects of FLR are usually taken in the fluid simulation through the gyroviscosity, the effects of which are verified in the particle simulation with large FLR regime. The gyroviscous cancellation phenomenon observed in the particle simulation causes the drifts in the direction of ion diamagnetic drifts.
Ion finite Larmor radius effects on the interchange instability in an open system I. Katanuma, a) S. Sato, Y. Okuyama, S. Kato, and R. Kubota Plasma Research Center, University of Tsukuba, Tsukuba, Ibaraki 305-8577, Japan (Received 10 September 2013; accepted 24 October 2013; published online 14 November 2013) A particle simulation of an interchange instability was performed by taking into account the ion finite Larmor radius (FLR) effects.It is found that the interchange instability with large FLR grows in two phases, that is, linearly growing phase and the nonlinear phase subsequent to the linear phase, where the instability grows exponentially in both phases.The linear growth rates observed in the simulation agree well with the theoretical calculation.The effects of FLR are usually taken in the fluid simulation through the gyroviscosity, the effects of which are verified in the particle simulation with large FLR regime.The gyroviscous cancellation phenomenon observed in the particle simulation causes the drifts in the direction of ion diamagnetic drifts.V C 2013 Author(s).All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.[http://dx.doi.org/10.1063/1.4829682] The interchange instability is perhaps the most fundamental and basic magnetohydrodynamic (MHD) instability for magnetically confined plasma.There are some methods to stabilize the interchange instability.One is to make use of the line tying effects. 1,2The magnetic surfaces created in the torus, 3 for example, stabilize the interchange instability with the rotation of the magnetic field lines lying on the magnetic surface, which is the results of line tying effects.
The ion finite Larmor radius (FLR) effects are also expected to stabilize the interchange instability.The first work of the FLR effects on an interchange instability was seen in Ref. 4, where Rosenbluth, Krall, and Rostoker have found the stability effects of FLR with the help of Vlasov equation.Roberts and Taylor derived the same stability condition as Ref. 4 by using the extended MHD equations. 5ere, they used the generalized Ohm's law and they included the viscosity in the equation of motion.The extended MHD equations have been widely used to study interchange instability, because they are able to be applied to the complicated magnetic confined closed systems. 6,7[10][11][12] The generalized Ohm's law and gyroviscosity in the extended MHD equation were derived theoretically. 5,13ecently, it has been reported that the complete FLR stabilization of the interchange mode may not be attainable by the gyroviscosity or generalized Ohm's law alone in the frame work of extended MHD. 7The stability boundary of an interchange instability with FLR was determined by the kinetic analysis. 4,14Thus, it is worth verifying with the help of a particle simulation that the interchange instability can be really stabilized by FLR effects completely.
The traditional electrostatic particle-in-cell code (explicit PIC code) uses the equation of ion and electron motions with the Poisson equation. 15,16The mesh interval D and the time step Dt have to be taken smaller than the Debye length k De and the inverse of the plasma oscillation x À1 pe and electron cyclotron frequency X À1 ce , respectively, because all of the electrostatic oscillations are included in the explicit code.
The interchange instability by using the explicit PIC code was carried out by Goede, Humanic, and Dawson,17 where two-dimensional spatial 64 Â 64 grid was used.In order to follow such a slow time scale of interchange instability, they used the very small ion/electron mass ratio M i /M e ¼ 1.The linear growth rate of an interchange instability observed in the simulation did not compare well with the growth rate derived from the Vlasov equation in a local approximation, although it showed the stabilization due to FLR effects.Recently, the interchange instabilities mainly in the magnetotail are simulated by the electromagnetic explicit PIC code. 18,19However, the comparison of the interchange instabilities itself observed in the simulation with those derived theoretically has not been made.
The increases of the time step Dt and the grid interval D enable to simulate a low frequency phenomena in the plasma on a large scale.The implicit time integration scheme has a potential to increase Dt and D keeping the numerical stability of the simulation.1][22] Watanabe et al. have described the implicit algorithm and applied to the two-dimensional plasma with external magnetic field, where ion and electron cross-field motions are assumed only E Â B drifts. 23Barnes et al. have developed the implicit algorithm which can be able to applied to the two-dimensional plasma with the external magnetic field directly, and they demonstrated the simulation on an interchange instability (without FLR). 24his paper uses the uniform gravitational field g ¼ gê x shown in Fig. 1, where êx is the unit vector along x-axis.Here, the centrifugal force due to the non-zero magnetic field line curvature is replaced by the gravitational force.The uniform external magnetic field B ¼ Bê z is applied along z-axis.The FLR effects on the interchange instability are investigated in the geometry of Fig. 1.The PIC code used in this paper has adopted the implicit algorithm by Barnes et al. 24
II. LINEAR GROWTH RATES OF AN INTERCHANGE INSTABILITY WITH FLR
Rosenbluth and Simon derived the dispersion relation of an interchange instability with the FLR effects 14 where x is the wave frequency, k y is the wave number, E y is the perturbed electric field of the interchange instability, and c is the light speed.The steady state electric field E 0 , gravitational acceleration g and equilibrium density gradient dq=dx are assumed to be a function of x and their vectors are in the x direction.The uniform external magnetic field B is assumed to be applied in the z direction.The coefficients T and S in Eq. ( 1) are defined as where x pi ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ffi 4pn i e 2 =M i p is the ion plasma frequency, X ci eB=M i c is the ion cyclotron frequency, e is the unit charge, M i is the ion mass, and n i is the ion number density.The mass density q and pressure P of ions are given as Here, f i is the ion distribution function in the equilibrium state as The distribution in the equilibrium state should be a function of constant of motion, so that X is a canonical momentum in the y direction which is the same as the guiding center position in the case of uniform external magnetic field, i.e., X ¼ x þ v y =X ci , where x is the real position of ion in the x direction.Henceforth, the ion guiding center density profile n g (X) is assumed to be where h(x) is the Heaviside step function [h(x) ¼ 0 for x < 0 and h(x) ¼ 1 for x > 0].The guiding center density n g (X) of Eq. ( 5) lead to the real density n i (x) of where q i ffiffiffiffiffiffiffiffiffiffiffi ffi Figure 2 plots the linear growth rates of an interchange instability with FLR in the case of k y g=X 2 ci ¼ 10 À4 , which FIG. 1.Initial ion guiding center positions, gravitational acceleration vector (g), and external magnetic field.Vertical (horizontal) axis is the x axis (y axis) with the scale in mesh numbers.
FIG. 2. Dispersion relation of the interchange instabilities with FLR. (a)
is the linear growth rates c as a function of X ci =x pi , where each number in (a) denotes the magnitude of k y q i .(b) is the linear growth rates as a function k y q i , where each number in (b) denotes the magnitude of X ci =x pi .
was obtained by solving Eq. ( 1) in the range of 0 k y x 2p with the boundary condition that w ¼ 0 at k y x ¼ 0 and w ¼ 0 at k y x ¼ 2p and k y L H ¼ p.The vertical axis c in Fig. 2 is the linear growth rate of the interchange instability, i.e., x ¼ x r þ ic, where i ffiffiffiffiffiffi ffi À1 p .The linear dispersion relation in the geometry of Fig. 1 for k y q i ¼ 0 is given by 25 which is plotted in Fig. 2(a) in case of k y q i ¼ 0. Figure 2 indicates that the interchange instability is stabilized more strongly by FLR for the smaller X ci =x pi and larger k y q i .
III. ELECTROSTATIC PIC CODE
The electrostatic PIC code is used in order to research the stabilization effects of FLR on an interchange instability.The PIC code used in this paper adopts the implicit algorithm, 1,2,24 so as to remove the unnecessary high frequency electrostatic oscillations such as the electron cyclotron waves.
The equation of motion makes use of the modified leapfrog differential scheme where the superscript n means the time step nDt and the subscript a indicates the particle species.The electrostatic potential / n is solved by the following equations: where the second equation in Eq. ( 10) is the Poisson equation.
The velocity v nþc 0 j in the right hand side of Eq. ( 9) is defined by If c 0 ¼ 0 and / n instead of / n are used, Eq. ( 9) becomes the normal centered leap-frog scheme with the second order accuracy of time step Dt and is numerically unstable if jX ca jDt > 1.The detailed algorithm of the implicit PIC code is given in Ref. 24.
In this paper, c 0 ¼ 0.1 has been adopted for electrons and the time step is chosen x pe Dt ¼ 2 for the simulation of an interchange instability, where x pe ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ffi 4pn e e 2 =M e p is the electron plasma frequency.However, c 0 ¼ 0 is chosen for ions so that the ion Larmor motion is followed with the second order accuracy of time step Dt.
IV. SIMULATION RESULTS
The code uses the two-dimensional 128 Â 128 spatial meshes in xy and three velocity components v x , v y , and v z .The (4 Â 128) 2 ¼ 262, 144 ions, and electrons each are included in the simulation.The geometry used in the simulation and analysis is plotted in Fig. 1.The ion guiding centers are uniformly distributed in the region x < 64D, where D is the spatial mesh interval, at t ¼ 0 and electron real positions are distributed at each ion real position.The uniform magnetic field B is applied in the z direction and gravitational acceleration g is in the x direction (g ¼ gê x ), as shown in Fig. 1.Henceforth, x pi ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 4phn i ie 2 =M i p and x pe ffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi 4phn i ie 2 =M e p are used as ion and electron plasma frequencies, where hn i i ¼ n 0 =2 is the ion (electron) density averaged over the entire simulation space.
The interchange instability is observed in the simulation because the geometry in Fig. 1 is unstable to the interchange modes.Figure 3 plots the linear growth rates of interchange instability with (1, 1) mode measured in the linearly growing phase of the simulation.Here the (1, 1) mode means the one with the wave numbers k y ¼ 2p=128D and k x ¼ 2p=128D.The solid circles in Fig. 3 are the results without FLR, where c 0 ¼ 0.1 has been chosen for ions in Eqs. ( 9) and (11).The solid line arrowed by a symbol s A is the theoretically calculated linear growth rate Eq. ( 8).
The solid squares in Fig. 3 are the simulation results with FLR, where c 0 ¼ 0 is set for ions in Eqs. ( 9) and ( 11).The solid line arrowed by a symbol s B is the theoretically calculated linear growth rate in case of k y q i ¼ 0:16 in Fig. 2(a), where k y q i =ðk y g=X 2 ci Þ 1=4 ¼ 1:6.The simulation parameters are that k y hq i i ¼ 2:845 Â 10 À1 ; X ci =x pe ¼ 2:2155 Â 10 À3 ; k y g=X 2 ci ¼ 1:0 Â 10 À3 , where k y ¼ 2p=128D and Note that all quantities in Eqs. ( 1) and (2) are normalized as where x E k y cE 0 =B, although E 0 ¼ 0 is assumed throughout this paper.The solid squares in Fig. 3 changing the ion mass M i and the initial ion temperature T i in the simulation.The agreements between theory and simulation are good in both cases with and without FLR in Fig. 3.It is found that the flute instability is stabilized at X ci =x pi ' 0:5 in Fig. 3. Figure 4 plots the simulation results with parameters that M i =M e ¼ 1800; k y g=X 2 ci ¼ 9:94 Â 10 À4 ; X ci =x pi ¼ 0:943; X ce =x pe ¼ 40.The ion thermal Larmor radius hq i i in Fig. 4 is normalized by the spatial mesh interval D. The solid circles which are the linear growth rate observed in the simulation are obtained by changing initial ion temperature T i .The solid line in Fig. 4 is obtained by solving Eq. ( 1) with X ci =x pi ¼ 0:943.The agreements between the linear growth rates obtained analytically and by using simulation are good.There is the apparent systematic (but small) discrepancy observed in Fig. 4 at small values of the ion Larmor radius hq i i < 2. The linear growth rates (solid circles) in Fig. 4 have been measured from the wave form of field energy as shown in Fig. 5. Thus, the discrepancy is beyond the measuring error, but we do not know the reason.This systematic discrepancy can be seen in Fig. 3 at hq i i ¼ 0, which is consistent with those in Fig. 4.
The time evolutions of field energy j/ k j 2 of (1, 1) mode are plotted in Fig. 5.The dotted straight lines in the figure were used to obtain the linear growth rates (solid circles) in Fig. 4. Here, each origin of field energy has been shifted upward (or downward) at the amount of arbitral magnitudes in the figure, so that the relative magnitudes of j/ k j 2 with different hq i i are meaningless in Fig. 5.
Figure 6 plots the ion and electron real positions at x pet ¼ 11 000, which is the time when the interchange instability enters the nonlinear growing phase from the linear phase as is seen in Fig. 5.The ion real positions are plotted in the upper figures and the electrons are in the lower figures, respectively.It is found that the quasi-neutral condition is satisfied very much in all cases of hq i i.Here, Fig. 6(a) is the simulation result with hq i i ¼ 0:5, Fig. 6(b) is hq i i ¼ 1:5, Fig. 6(c) is hq i i ¼ 3:0, and Fig. 6(d) is hq i i ¼ 6:0, respectively.When the thermal ion Larmor radius hq i i, where normalized by a spatial mesh interval D, is small, there can be seen many interchange instabilities with high mode numbers, while in the case of large hq i i an interchange instability with only the lowest k y is seen.The interchange instability is almost stabilized by FLR for hq i i ¼ 6:0.The ions begin to behave like a viscous fluid in the presence of the magnetic viscosity in Figs.6(c) and 6(d).Equation (8), which is the linear growth rate in case q i ¼ 0, indicates that the linear growth rate is c / ffiffiffiffiffiffiffi jk y j p .In the simulation with hq i i 6 ¼ 0, ions and electrons have the initial Maxwellian velocities, so that the thermal fluctuations excite the interchange modes with various k y .In the case of hq i i Շ 2 the interchange mode with high k y are more unstable than that with low k y , so that many saturated high k y interchange instabilities with mushroom-shaped front 1,6 are observed in Figs.6(a) and 6(b).On the other hand, the high k y interchange modes are stabilized by the FLR effects in case of hq i i տ 2 and so the only interchange instability with the lowest k y is observed in Figs.6(c) and 6(d).
Figure 7 plots the time evolution of j/ k j 2 of (1, 1) mode in the case of k y g=X 2 ci ¼ 10 À3 ; X ci =x pi ¼ 1:0; hq i i ' 5:796.It is found that an interchange instability grows with different growth rates in the linear phase (solid straight line) and in the nonlinear phase (dotted broken straight line).Here the growth rate (solid straight line) in the linear growing phase has been calculated by the linear theory Eq. ( 1).The growth rates in the nonlinear phase are slower than those in the linear phase in Figs. 5 and 7.
As is seen in Fig. 8, in the linear phase (x pe t ¼ 4000 $ 10 000 in Fig. 7) the boundary between ions and vacuum has a sinusoidal shape.However, in the nonlinear phase, the shape of the boundary is distorted from the sinusoidal curve greatly.Here, Figs.8(a)-8(d) plot the equi-contour lines of ion density at x pe t ¼ 8000, 11 000, 14 000, and 17 000, respectively.Figures 8 and 7 are the results of the same simulation.As is seen in Fig. 7, the time 7000 Շ x pe t Շ 12 000 is the linearly growing phase of the interchange instability, so that the equi-contour lines (boundary between plasma and FIG. 4. Dispersion relation of the interchange instabilities in theory and simulation.The solid circles are the growth rates of the (1, 1) mode measured in the linearly growing phase of the interchange instability in the simulation with FLR.The solid line is the theoretical linear growth rates with FLR.FIG. 5. Time evolution of field energy j/ k j 2 with (1, 1) mode.The dashed straight lines in the figure are drawn in order to determine the linear growth rates in the simulation.The numbers in the figure denote the magnitudes of thermal ion Larmor radius hq i i, which are normalized by the mesh interval D. Here, each origin of field energy j/ k j 2 has been shifted upward (or downward) in order to display the growing phase clearly, so that the relative magnitudes of j/ k j 2 with different hq i i are meaningless in Fig. 5. vacuum) in Figs.8(a) and 8(b) can be approximate sinusoidal function well.On the other hand the time 12 000 Շx pe t Շ18 000 is the nonlinearly growing phase of the instability, so that the equi-contour lines (boundary between plasma and vacuum) in Figs.8(c) and 8(d) deviate from the sinusoidal function greatly.
The remarkable feature in Fig. 8(d) is that the front of the flute instability drifts in the -y direction, which is the ion diamagnetic direction.It is known that this drifts result from the gyroviscosity.The equation of motion in the extended MHD is written to be 5,7 The last term r Á P i is the viscosity term containing the twofluid effects and ion gyroviscosity effects.Frequently, the gyroviscous stress is approximated by 6,26 r Á P i ' Àqu à Á ru: Here, u à is often assumed to be the ion diamagnetic drift velocity in the uniform magnetic field, i.e., Moving the gyroviscosity term to the left hand in Eq. ( 13) in the approximation of Eq. ( 14) yields which is known as the "gyroviscous cancellation."The simulation has not solve the time evolution of the magnetic field because the electrostatic PIC code are used.That is, the uniform magnetic field remains until the end of the simulation.The equilibrium ion drift velocity, therefore, comes from the gravitational drift which is given by u g ¼ cM i g  B=qB 2 ¼ À4:5  10 À4 x pe Dê y in the parameters used to obtain Fig. 8. Here, Fig. 8 reveals that the interchange instability drifts in the -y direction.The drift speed u f ront of wave front of the interchange instability during 14 000 Շ x pe t Շ 17 000 in Fig. 8 It is found ju à j ) ju f ront j ) ju g j, although the wave front of the interchange instability drifts in the direction of the ion diamagnetic drift (that is, the gyroviscous cancellation phenomenon has been observed in this particle simulation with uniform magnetic field).
V. SUMMARY
The interchange instabilities in the geometry of Fig. 1 were investigated by using the electrostatic implicit PIC code.The growth rates of the interchange instability in the linearly growing phase in the particle simulation agree well with the theoretical linear growth rates with FLR which were obtained by solving Eq. (1).It is found that the interchange instability with the large FLR grows in two phases, that is, the linearly growing phase and the subsequent nonlinearly growing phase.The growth rate in the nonlinear phase is slower than that in the linear phase, and the interchange instability grows exponentially in both phases.The wave front of an interchange instability deviates from a sinusoidal shape in the nonlinear phase.The effects of gyroviscosity on the interchange instability seems to play an important role in its growth.The gyroviscous cancellation phenomenon has been observed in the particle simulation.The drift speed of wave front of the interchange instability, however, is much slower than the ion diamagnetic drift velocity.
The simulations were performed in the parameter of k y hq i i < 1, which has been assumed in all theoretical works with FLR. 4,5,8,9,11,12,14The particle simulation has introduced no approximations in the basic laws of mechanics and electricity.So the FLR is expected to stabilize the interchange instability completely in the real magnetically confined plasma.In the future, the particle simulations with k y hq i iտ1 will be performed to research the FLR effects on an interchange instability in the range X ci =x pi ) 1.
FIG.
FIG.Contour plots of ion number density.Here, the figure is the simulation result in the case k y g=X 2 ci ¼ 10 À3 ; X ci =x pi ¼ 1:0; hq i i ' 5:796.(a) is the contour plot of the ion density at x pe t ¼ 8000, (b) is at x pe t ¼ 11 000, (c) is x pe t ¼ 14 000, and (d) is x pe t ¼ 17 000, respectively.
The time evolution of field energy j/ k j 2 with (1, 1) mode.The solid straight line denotes the linear growth rate which was calculated analytically by using Eq.(1).The dashed straight line is drawn to fit the growth rate to the wave form in the nonlinear growing phase of j/ k j 2 .Here, the figure is the simulation result in the case k y g=X 2 ci ¼ 10 À3 ; X ci =x pi ¼ 1:0; hq i i ' 5:796. | 5,545.4 | 2013-11-14T00:00:00.000 | [
"Physics"
] |
Synthesis of molecular metallic barium superhydride: pseudocubic BaH12
Following the discovery of high-temperature superconductivity in the La–H system, we studied the formation of new chemical compounds in the barium-hydrogen system at pressures from 75 to 173 GPa. Using in situ generation of hydrogen from NH3BH3, we synthesized previously unknown superhydride BaH12 with a pseudocubic (fcc) Ba sublattice in four independent experiments. Density functional theory calculations indicate close agreement between the theoretical and experimental equations of state. In addition, we identified previously known P6/mmm-BaH2 and possibly BaH10 and BaH6 as impurities in the samples. Ab initio calculations show that newly discovered semimetallic BaH12 contains H2 and H3– molecular units and detached H12 chains which are formed as a result of a Peierls-type distortion of the cubic cage structure. Barium dodecahydride is a unique molecular hydride with metallic conductivity that demonstrates the superconducting transition around 20 K at 140 GPa. Metallization of pure hydrogen via overlapping of electronic bands requires high pressure above 3 Mbar. Here the authors study the Ba-H system and discover a unique superhydride BaH12 that contains molecular hydrogen, which demonstrates metallic properties and superconductivity below 1.5 Mbar.
I n recent years, the search for new hydride superconductors with T C close to room temperature attracts great attention of researchers in the field of high-pressure materials science. Variation of pressure opens prospects of synthesis of novel functional materials with unexpected properties 1 . For example, according to theoretical models [2][3][4][5] , compression of molecular hydrogen over 500 GPa should lead to the formation of an atomic metallic modification with T C near room temperature. Pressures of 420-480 GPa were achieved in experiments with toroidal diamond anvil cells 6 ; however, for conventional high-pressure cells with a four-electrode electric setup, pressures above 200 GPa remain challenging.
The neighbor of lanthanum, barium is a promising element for superhydride synthesis. The calculated maximum T C is only about 30-38 K 19,20 for predicted P4/mmm-BaH 6 stable at 100 GPa, which has a hydrogen sublattice consisting of H 2 molecules and Hanions 19 . Lower barium hydride, BaH 2 , well-known for its extraordinarily anionic (H -) conductivity 21 , exists in Pnma modification below 2.3 GPa, whereas above 2.3 GPa it undergoes a transition to hexagonal Ni 2 In-type P6 3 /mmc phase 22 . At pressures above 41 GPa, BaH 2 transforms into P6/mmm modification, which metallizes at over 44 GPa, but its superconducting T C is close to zero 23 . So far, no relevant experiments at pressures above 50 GPa have been reported.
In this work we experimentally and theoretically investigate the chemistry of the barium-hydrogen system at pressures from 75 to 173 GPa filling the gap of previous studies. We discover new pseudocubic BaH 12 that has molecular structure with H 2 and H 3molecular units and detached H 12 chains formed due to Peierlstype distortion. These structural features lead to metallic conductivity of unique molecular hydride and to the superconducting transition around 20 K at 140 GPa.
Results
Synthesis at 160 GPa and Stability of BaH 12 . To investigate the formation of new chemical compounds in the Ba-H system at high pressures, we loaded four high-pressure diamond anvil cells (DACs #B0-B3) with sublimated ammonia borane NH 3 BH 3 (AB), used as both a source of hydrogen and a pressure transmitting medium. A tungsten foil with a thickness of about 20 μm was used as a gasket. Additional parameters of the high-pressure diamond anvil cells are given in Supplementary Table S1.
The first attempt of the experimental synthesis was made in DAC #B1 heated to 1700 K by an infrared laser pulse with a duration of~0.5 s at a pressure of 160 GPa. During heating, the Ba particle underwent significant expansion and remained nontransparent. The obtained synchrotron X-ray diffraction pattern (XRD, λ = 0.62 Å, Fig. 1a) consists of a series of strong reflections specific to cubic crystals. Decreasing the pressure in DAC #B1 to 119 GPa (Fig. 1b) gave a series of diffraction patterns that can mostly be indexed by a slightly distorted face-centered cubic structure (e.g., pseudocubic Cmc2 1 , Fig. 1a). Recently, similar cubic diffraction patterns have been observed at pressures above 150 GPa for the La-H (fcc-LaH 10 ) 17,18 and Th-H (fcc-ThH 10 ) 14 systems. By analogy with the La-H system, and considering the lack of previously predicted cubic superhydrides BaH x 19-21 , we used the USPEX code [24][25][26] to perform theoretical crystal structure evolutionary searches, both variable-and fixedcomposition, for stable Ba-H compounds at pressures of 100-200 GPa and temperatures of 0-2000 K. According to the USPEX calculations, P6/mmm-BaH 2 remains stable up to 150-200 GPa (Fig. 1c; Supplementary Tables S7-S12, Supplementary Figs. S2 and S3). This compound was experimentally detected in DAC #B0 at 173-130 GPa with the cell volume~3% smaller than theoretically predicted (Supplementary Table S14). At 100-200 GPa, several new barium polyhydrides lying on or near the convex hulls were found: BaH 6 , BaH 10 , and BaH 12 with the unit cell Ba 4 H 48 and Ba 8 H 96 (Fig. 1c). In subsequent experiments at 142 and 154-173 GPa we have detected a series of reflections that can be indexed by BaH 6 and BaH 10 with the unit cell volumes close to the calculated ones (see Supporting Information, p. S25-27). However, the main phase in almost all diffraction patterns is the pseudocubic barium superhydride which will be described below.
The analysis of the experimental data within space group Fm 3m ( Fig. 1b and Supplementary Table S3) of Ba-sublattice and its comparison with density functional theory (DFT) calculations show that the stoichiometry of barium hydride synthesized in DAC #B1 is close to BaH 12 . Examining the results of the fixedcomposition search, we found that an ideal Fm 3m-BaH 12 (similar to fcc-YB 12 ) is unstable and cannot exist, while pseudocubic P2 1 -BaH 12 , whose predicted diffraction pattern is similar to the experimental one, lies on the convex hull at 100-150 GPa. There are also pseudocubic P1-Ba 8 H 96 , located very close to the convex hull at 150 GPa, and Cmc2 1 -BaH 12 (= Ba 4 H 48 ) with a similar Xray diffraction (XRD) pattern, lying a bit farther. Above 190 GPa the P2 1 -BaH 12 transforms to other possible candidate, orthorhombic Immm-BaH 12 , which stabilizes between 150 and 200 GPa, but does not correspond to the experimental XRD pattern (Fig. 1a, Supplementary Fig. S1) and is not considered further.
The computed equation of state of Fm 3m-BaH 12 (Fig. 1d) corresponds well to the experimental volume-pressure dependence above 100 GPa. However, the DFT calculations show that the ideal Fm 3m barium sublattice is unstable (it is > 0.19 eV/ atom above the convex hull, Supplementary Fig. S4) both thermodynamically and dynamically, and transforms spontaneously to Cmc2 1 or P2 1 via distortion (Fig. 2). Studying the temperature dependence of the Gibbs free energy (Fig. 2a), we found that P2 1 -BaH 12 is the most stable modification at 0-2000 K and 100-150 GPa. Moreover, high-symmetry cubic phases cannot explain the weak reflections at 8.9-9.4°, 14.5, 16, 19.5, and 20.6°p resent in many XRD patterns (Fig. 1a, b).
To clarify the question of dynamical stability of pseudocubic structures, we calculated a series of phonon densities of states for different modifications of BaH 12 (Fig. 2b). Within the harmonic approach, symmetric and corresponding to the experimental data Cmc2 1 -BaH 12 has a number of imaginary phonon modes. Its distortion to much more stable P2 1 -BaH 12 leads to the disappearance of many of the imaginary phonon modes and deepening of the pseudogap (Fig. 2c) in the electronic density of states N(E). The subsequent distortion of P2 1 to P1 converts BaH 12 to a semiconductor with a bandgap exceeding 0.5 eV. However, the experimental data show that BaH 12 The comparative analysis of Cmc2 1 , P2 1 , and P1 structures of BaH 12 shows that semimetallic Cmc2 1 explains well the experimental results of X-ray diffraction (see Supplementary Fig. S1, Supporting Information) and lies closer to the convex hull than Fm 3m or I4/mmm modifications. P1-BaH 12 shows a complex picture of splitting of the diffraction signals, both P1-BaH 12 and P2 1 -BaH 12 have a bandgap above 0.5 eV at 100 GPa (Supplementary Fig. S38b) which does not correspond to the experimental data. Therefore, pseudocubic Cmc2 1 -BaH 12 , whose cell volume is near that of the close-packed Fm 3m-BaH 12 , is the appropriate explanation of the experimental results despite the presence of a few imaginary phonon modes.
The molecular dynamics simulation of Cmc2 1 -BaH 12 and P2 1 -BaH 12 at 10-1500 K, after averaging the coordinates, both lead to a distorted pseudocubic P1-BaH 12 with the similar XRD pattern.
However, all structures retrieved by molecular dynamics are less stable both dynamically and thermodynamically than P1-BaH 12 , P2 1 -BaH 12 , and Cmc2 1 -BaH 12 found by USPEX. More accurate analysis accounting for the anharmonic nature of hydrogen oscillations 27 , which is actually beyond the scope of this work, may help to explain the experimental stability of highersymmetry BaH 12 modifications compared to lower-symmetry P1-BaH 12 .
Synthesis of BaH 12 at 146 GPa. Similar X-ray diffraction patterns were obtained in the next experiment (DAC #B2) where the Ba sample was heated at an initial pressure of 146 GPa, which led to a decrease in pressure to 140 GPa. During the heating and subsequent unloading of the cell, the sample remained opaque down to~40 GPa. Unlike the synthesis at high pressure (cell #B1, 160 GPa, Fig. 1a, b), in this experiment we observed many more side phases and corresponding side reflections than before ( Fig. 3 and Supporting Information). Calc.
Diff. a Experimental X-ray diffraction pattern from DAC #B1 at 160 GPa and the Le Bail refinement of the pseudocubic Cmc2 1 -BaH 12 phase. The experimental data, fitted line, and residues are shown in red, black, and green, respectively. Unidentified reflections are indicated by asterisks. b X-ray diffraction patterns at pressures of 119 to 160 GPa. The inset shows the projection of the Cmc2 1 structure to the (ac) plane. The hydrogen network is shown by light blue lines. c Convex hulls of the Ba-H system at 100, 150, and 200 GPa calculated with zero-point energy (ZPE) contribution. d Calculated equations of state for different possible crystal modifications of BaH 12 (fcc, I4/mmm, and Cmc2 1 ) and Ba+nH 2 . The experimental data are shown by hollow squares.
BaH
Similar to the experiment with DAC #B1, five reflections from the pseudocubic Ba sublattice dominate in a wide range of pressures (65-140 GPa), whereas side reflections change their intensities and, at some pressures, almost disappear ( Fig. 3d and Supplementary Figs. S33 and S35). The diffraction circles corresponding to the ideal cubic barium sublattice have pronounced granularity ( Fig. 3e-g, Supplementary Fig. S34), which suggests that all "cubic" reflections belong to the same phase.
At pressures below 65 GPa, it is no longer possible to refine the cell parameters of pseudocubic BaH 12 . The parameters of the Cmc2 1 -BaH 12 unit cell, refined to the experimental data, are presented in Supplementary Table S6. Fitting this pressurevolume data in the pressure range from 75 to 173 GPa by the third-order Birch-Murnaghan equation of state 28 gives the cell volume V 100 = 45.47 ± 0.13 Å 3 , bulk modulus K 100 = 305 ± 8.5 GPa, and its derivative with respect to pressure K 0 100 = 3.8 ± 0.48 (the index 100 designates values at 100 GPa). Fitting the theoretical data yields similar values: V 100 = 46.0 Å 3 , K 100 = 315.9 GPa, and K 0 100 = 2.94.
Synthesis of BaH 12 at 90 GPa. In the experiment with DAC #B3, we investigated the possibility to synthesize BaH 12 at pressures below 100 GPa. After the laser heating of Ba/AB to 1600 K, the pressure in the cell decreased from 90 to 84 GPa. The observed diffraction pattern is generally similar to those in the previous experiments with DAC #B1, except the presence of the impurity, h-BaH~1 2 , whose reflections may be indexed by hexagonal space groups P6 3 /mmc or P6 3 mc (a = 3.955(7) Å, c = 7.650(7) Å, V = 51.84 Å 3 at 78 GPa). For the main set of reflections, slightly distorted cubic BaH 12 is the best solution (Fig. 4). The refined cell parameters of BaH 12 (Supplementary Table S4) agree well with the results obtained previously with DACs #B1 and B2. When the pressure was reduced to 78 GPa, barium dodecahydride began to decompose, and subsequent diffraction patterns (e.g., at 68 GPa, see Supporting Information) show a complex image of broad reflections that confirms the lower experimental bound of BaH 12 stability of~75 GPa mentioned above.
Discussion
Electronic properties of BaH 12 . BaH 12 is the first known metal hydride with such a high hydrogen content that is stable at such low pressures (~75 GPa). We further investigated its electronic structure and the charge state of the hydrogen and barium atoms. The electron localization function (ELF) analysis 29 ( Fig. 4e- Fig. 4). Bader charge analysis of Cmc2 1 -BaH 12 , performed in accordance with our previous experience 31,32 (Supplementary Table S18), shows that the Ba atoms serve as a source of electrons for the hydrogen sublattice. The charge of the barium atoms in BaH 12 is +1.15 at 150 GPa, whereas most of the hydrogen atoms have a negative charge. In the H 3 fragments, the charge of the end atoms is close to -0.2 and -0.27, while the H bridge has a small positive charge of +0.06 (Fig. 4e-g). In general, H 3anion, similar to one found in the structure of NaH 7 30 , has a total charge of -0.4 | e | , whereas molecular fragments H 2 (d H-H = 0.78 Å) have a charge of only -0.1 | e | . Therefore, the Ba-H bonds in BaH 12 have substantial ionic character, whereas the H-H bonds are mainly covalent.
The low electronic density of states N(E) in semimetallic Cmc2 1 -BaH 12 looks typical for one-dimensional …H-H-H… chains (Fig. 4h, Supplementary Figs. S36 and S38) which are divided into H 2 , H 3 fragments due to the Peierls-type distortion 33 . In fact, all of the discussed structures of BaH 12 can be viewed as a result of Peierls-type distortion. The main contribution to N(E F ), 83% at 150 GPa, comes from hydrogen (Fig. 4h), and ¾ of this contribution is related to s orbitals. At 150 GPa, barium in BaH 12 exhibits the properties of a d-block element, and its bonding orbitals have a significant d-character (Fig. 4i). Electrical conductivity is localized in the H layers consisting of quasi-onedimensional …H-H-H… chains which are interconnected in non-trivial way ( Fig. 4e-g, Supplementary Table S2 for crystal structure). Thus, barium dodecahydride is the first known molecular superhydride with metallic conductivity embedded in layers and one-dimensional chains of molecular hydrogen. Fig. 5a). This DAC #E5 was assembled with an 80 µm diamond anvil culet, c-BN/epoxy insulating gasket, 45 × 32 µm Ba piece, and sputtered 0.5 µm thick Mo electrodes. After the laser heating at 1600 K and 140 GPa, the Ba/AB sample demonstrated the superconducting transition at around 20 K (Fig. 5a). When we tried to change the pressure, the cell collapsed and pressure dropped to 65 GPa. The obtained data together with the measured Raman spectra and optical microscopy exclude low-symmetry BaH 12 semiconducting structures, leaving for consideration only metallic and semimetallic modifications ( Supplementary Figs. S39 and S40).
One of the roles of metal atoms in superhydrides is to donate electrons to antibonding orbitals of the H 2 molecules and weaken the H-H bonds. In BaH 12 , each H atom accepts few electrons, on average 0.16 electrons. As a result, H 2 and H 3 groups are still present in the structure, and we have a rather low T С . We think that at high pressures, due to dissociation of molecular groups, BaH 12 may have a network of weak H-H bonds (rather than discrete H 2 and H 3 -groups) and, as a result, a much higher T С . Increasing the pressure will also facilitate further metallization of BaH 12 and symmetrization of the hydrogen sublattice, increasing N(E F ). To estimate the possible improvement, we calculated at synthesized novel barium superhydride BaH 12 with a pseudocubic crystal structure, stabilized in the pressure range of 75-173 GPa. The compound was obtained by laser-heating metallic barium with an excess of ammonia borane compressed to 173, 160, 146, and 90 GPa. The Ba sublattice structure of BaH 12 was resolved using the synchrotron XRD, evolutionary structure prediction, and several postprocessing Python scripts, including an XRD matching algorithm. Discovered BaH 12 has unique metallic conductivity, localized in the layers of molecular hydrogen, and the highest hydrogen content (>92 mol%) among all metal hydrides synthesized so far. The experimentally established lower limit of stability of barium dodecahydride is 75 GPa. The thirdorder Birch-Murnaghan equation of state and unit cell parameters of BaH 12 were found in the pressure range of 75-173 GPa: V 100 = 45.47 ± 0.13 Å 3 , K 100 = 305 ± 8.5 GPa, and K 0 100 = 3.8 ± 0.48. The ab initio calculations confirm a small distortion of the ideal fccbarium sublattice to space group Cmc2 1 or P2 1 , determined by the presence of additional weak reflections in the diffraction patterns. The impurity phase analysis indicates possible presence of BaH 6 and BaH 10 . According to the theoretical calculations and experimental measurements, BaH 12 exhibits metallic and superconducting properties, with T C = 20 K at 140 GPa, and its crystal structure contains H 2 and H 3groups. The results of these experiments confirm that the comparative stability of superhydrides increases with the increase of the period number of a hydride-forming element in the periodic table 20 . Our work opens prospects for the synthesis of even more hydrogen rich compounds like predicted LaH 16 34 and ErH 15 20 , and new ternary high-T C polyhydrides in such systems as Ba-Y-H and Ba-La-H.
Methods
Experimental details. The barium metal samples with a purity of 99.99% were purchased from Alfa Aesar. All diamond anvil cells (100 μm and 50 μm culets) were loaded with a metallic Ba sample and sublimated ammonia borane (AB) in an argon glove box. The tungsten gasket had a thickness of 20 ± 2 μm. The heating was carried out by 2-3 pulses of an infrared laser (1.07 μm, Nd:YAG), each pulse had a duration of 0.3-0.5 s. The temperature was determined using the decay of blackbody radiation within the Planck formula. The applied pressure was measured by the edge of diamond Raman signal 35 using the Horiba LabRAM HR800 Ev spectrometer with an exposure time of 10 s. The XRD patterns from samples in diamond anvil cells (DACs) were recorded on the BL15U1 synchrotron beamline at the Shanghai Synchrotron Research Facility (SSRF, China) using a focused (5 × 12 μm) monochromatic X-ray beam with a linear polarization (20 keV, 0.6199 Å). Mar165 CCD was used as a detector.
The experiment with DAC #B0 was carried out at the Advanced Photon Source, Argonne, U.S. The loaded particle was successively heated up to 2100 K using millisecond-long pulses (4 × 0.04 s) of a 1064 nm Yb-doped fiber laser. We used this pulsed laser heating mode to avoid the premature breakage of a diamond. The synchrotron XRD measurements (the X-ray wavelength was 0.2952 Å) were performed at the GSECARS of the Advanced Photon Source 36 with about 3 × 4 µm X-ray beam spot.
The experimental XRD images were analyzed and integrated using Dioptas software package (version 0.5) 37 . The full profile analysis of the diffraction patterns and the calculation of the unit cell parameters were performed in the Materials studio 38 and JANA2006 39 using the Le Bail method 40 .
To investigate the electrical resistivity of barium polyhydrides, we performed 5 runs of measurements in Cu-Be DACs #E1-5 using the four-probe technique. The preparation of all cells was similar. Tungsten gasket with initial thickness of 250 μm was precompressed to about 25 GPa. Then a hole with a diameter of 20% bigger than the culet diameter was drilled using pulse laser (532 nm). Cubic boron nitride (c-BN) powder mixed with epoxy was used as an insulating layer. We filled the chamber with MgO and compressed it to about 5 GPa. Then, in the obtained transparent MgO layer, a hole with diameter about 40μm was drilled by laser. UV lithography was used to prepare four electrodes on the diamond culet. We deposited the 500 nm thick Mo layer by magnetron sputtering (field of 200 V at 300 K) and removed the excess of metal by acid etching. Four deposited Mo electrodes were extended by platinum foil. The chamber was filled with the sublimated ammonia borane (AB) and a small piece of Ba was placed on the culet of upper diamond with the four electrodes. All preparations were made in an argon glove box (O 2 < 0.1 ppm, H 2 O < 0.01 ppm). After that, the DACs were closed and compressed to a required pressure. We used 1.07 µm infrared pulse (~0.1 s, 1600 K) laser to heat the Ba/AB samples. Electrical resistance of the samples was studied in a cryostat (1.5-300 K, JANIS Research Company Inc.; in magnetic fields 0-9 T, Cryomagnetics Inc.) with applied current of 1 mA. More details about DACs #E1-5 are given in the Table S20.
Computational details. The study is based on the structural search for stable compounds in the Ba-H system using the USPEX code, for pressures of 50, 100, 150, 200, and 300 GPa, with a variable-composition evolutionary search from 0 to 24 atoms of each type (Ba, H). The first generation of the search (120 structures) was created using a random symmetric generator, all subsequent generations (100 structures) contained 20% of random structures and 80% of those created using the heredity, soft mutation, and transmutation operators. The results contain the files extended_convex_hull and extended_convex_hull_POSCARS, which were postprocessed using the Python scripts change_pressure.py, split_CIFs.py and xr_screening.py (see Scripts for XRD Postprocessing with USPEX section). The postprocessing script change_pressure.py performs an isotropic deformation of the unit cell of structures predicted by USPEX, bringing them to approximately the experimental pressure. All three lattice constants of the structures are multiplied by a factor k, calculated under the assumption of validity of the Birch-Murnaghan equation of state 28 with the bulk modulus K 0 = 300 GPa and its first derivative K' = 3. This approach is a quick alternative to the script that uses a crude DFT reoptimization of a set of theoretically possible structures, bringing them to the experimental pressure. The script split_CIFs.py converts the set of POSCARS recorded in the extended_convex_hull_POSCARS file into a set of CIF files, simultaneously symmetrizing the unit cells and sorting the files by ascending fitness (the distance from the convex hull). The CIF files created in such a way can be directly analyzed using Dioptas 37 and translated to the experimental pressure that exhibit a high similarity between the experimental and predicted XRD patterns (the latter are obtained using pymatgen 41 Python library). The analysis of complex mixtures consisted of two steps: first, we searched for the main component having the most intense reflections, then the already explained reflections were excluded to analyze the side phases.
To calculate the equations of state (EoS) of BaH 12 , we performed structure relaxations of phases at various pressures using the density functional theory (DFT) 42,43 within the generalized gradient approximation (Perdew-Burke-Ernzerhof functional) 44 and the projector augmented wave method [45][46][47][48][49] as implemented in the VASP code [15][16][17] . The plane wave kinetic energy cutoff was set to 1000 eV, and the Brillouin zone was sampled using the Γ-centered k-points meshes with a resolution of 2π × 0.05 Å −1 . The obtained dependences of the unit cell volume on the pressure were fitted by the Birch-Murnaghan equation 28 to determine the main parameters of the EoSthe equilibrium volume V 0 , bulk modulus K 0 , and its derivative with respect to pressure K'using EOSfit7 software 50 . We also calculated the phonon densities of states for the studied materials using the finite displacement method (VASP and PHONOPY) 51,52 .
The calculations of the phonons, electron-phonon coupling, and superconducting T C were carried out with QUANTUM ESPRESSO (QE) package 53,54 using the density functional perturbation theory 55 , employing the plane-wave generalized gradient approximation with Perdew-Burke-Ernzerhof functional 44 . In our ab initio calculations of the electron-phonon coupling (EPC) parameter λ of Cmc2 1 -Ba 4 H 48 , the first Brillouin zone was sampled by 2 × 2 × 2 qpoints mesh and 4 × 4 × 4 or 8 × 8 × 8 k-points meshes with the smearing σ = 0.005-0.05 Ry that approximates the zero-width limits in the calculation of λ. The critical temperature T C was calculated using the Allen-Dynes equations 56 .
Bader charges were calculated using Critic2 57,58 software with the atomic partition generated using the YT 59 method. The electron localization function (ELF) of BaH 12 and isosurface are shown for an isovalue of 0.12. The lattice planes (100) and 11 1 ð Þ are shown at distances from the origin of 0 and 2.289 Å, respectively. We projected the ELF on these planes to show H 3 and H 2 bonding.
The average structure of BaH 12 was analyzed using the ab initio molecular dynamics (AIMD) simulations within the general gradient approximation 44 and using the augmented plane wave method 45,[47][48][49] implemented in VASP software [30][31][32] . The total number of atoms in the model was 52, including 48 hydrogen atoms and 4 barium atoms (Cmc2 1 -Ba 4 H 48 ). The positions of the barium atoms were fixed during the simulation. The energy cutoff was set to 400 eV. The behavior of the hydrogen atoms in the BaH 12 crystal structure was studied upon annealing from 1500 to 10 K using the Nosé-Hoover thermostat 60,61 . The total simulation time was 10 ps with the time step of 1 fs. | 6,195 | 2021-01-11T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Preservation of primordial signatures of water in highly-shocked ancient lunar rocks
a School of Physical Sciences, The Open University, Walton Hall, Milton Keynes, MK7 6AA, United Kingdom b Centre for Applied Planetary Mineralogy, Department of Natural History, Royal Ontario Museum, Toronto, M5S 2C6, Canada c Department of Earth Sciences, University of Toronto, Toronto, M5S 3B1, Canada d Department of Earth Sciences, The Natural History Museum, London, SW7 5BD, United Kingdom e University of Portsmouth, School of the Environment, Geography and Geosciences, Burnaby Road, Portsmouth, PO1 3QL, United Kingdom
Water in apatite has been thoroughly investigated in most types of lunar basalts ranging from high-Ti and low-Ti basalts (e.g. to high-Al and KREEP-rich varieties, where KREEP stands for potassium (K), the rare earth elements (REEs), and phosphorus (P) (e.g., Greenwood et al., 2011;Tartèse et al., 2014b;Treiman et al., 2016). However, apatite in most mare basaltic samples crystallized at a late stage (>95% crystallization), by which time their parental melts were likely to have undergone significant H 2 -degassing during ascent and upon eruption. This degassing could have induced isotopic fractionation of hydrogen (H) from deuterium (D) (e.g., . The primary lunar crust, formed directly through Lunar Magma Ocean (LMO) solidification, is predominantly composed of anorthosites that do not contain apatite. Subsequently, incompatible-element-rich and Mgrich magmas were emplaced into the overlying anorthositic lunar crust as deep crustal cumulates giving rise to Mg-suite rocks (summarized in Shearer et al., 2015). These rocks are thought to have better preserved the H isotopic composition of their parent melts (Barnes et al., , 2014. Thus, the Mg-suite rocks are ideal targets for ascertaining the H isotopic composition of indigenous lunar water. We remain cognisant of the fact that some models suggest an impact origin for at least a portion of Mg-suite rocks and references therein), most recently for the troctolite 76535 (White et al., 2020). Altogether, the impact models suggest the cumulates could have crystallized in insulated, slowcooling large melt sheets or pooled beneath the anorthositic crust (summarized in Shearer et al., 2015). However, no research has yet been carried out to investigate the effect of the large-scale impact melting on indigenous lunar volatile signature, such as might be the case for Mg-suite rocks, if they indeed crystallized from large melt sheets.
Regardless of their petrogenetic mode of formation, another important aspect of Mg-suite samples is that they have experienced certain levels of shock deformation after their initial crystallization. Experimental studies at elevated temperatures have shown H diffusion in apatite to be much faster than for all other elements, including Pb (Higashi et al., 2017), which would be relevant for apatite exposed to shock metamorphism. Apart from investigation of water in variably shocked anorthosite (Hui et al., 2017), this important aspect of lunar samples' history remains underexplored. To our knowledge, no similar efforts to understand the effect of shock deformation on volatiles in lunar samples have yet been undertaken, even though a clear need for this approach has been advocated . This is reinforced by observations of impact-associated volatile mobility within martian (Howarth et al., 2015) and terrestrial (Kenny et al., 2020) apatite.
Prior to this study, apatite water contents and H-isotopic compositions have only been investigated in three samples of Mg-suite lithologies (Barnes et al., 2014). These authors report water contents from <100 ppm to ∼2000 ppm, associated with mostly terrestrial-like H isotopic composition. Based on this study, early-formed crustal rocks on the Moon were interpreted to contain water that shares a common origin with water on Earth. The potential sources of this water are further narrowed down to various types of chondritic materials. The most likely candidates are CM and CI-type carbonaceous chondrites, as they provide the closest fit to the Moon's bulk H and N compositions (Alexander et al., 2012;Alexander, 2017;Hallis, 2017, and references therein). Other constraints require that a significant proportion of the CI/CM material must have been accreted to Earth prior to the Moon-forming impact (Greenwood et al., 2018).
To investigate the influence of shock-induced deformation, metamorphism and impact-melting on apatite water abundance and H isotopic composition, we conducted a study on a suite of variably shocked rock-fragments and impact-melt breccias from the Apollo 17 collection. The texture, microstructure and petrological context of apatite is a critical parameter in understanding the severity of shock deformation and were characterized by detailed scanning electron microscope (SEM) imaging and electron backscatter diffraction (EBSD) analysis of each grain prior to measuring its water abundance and D/H ratio using a NanoSIMS.
Sample selection
The criteria for sample selection included geochemical pristinity, paucity of data on volatiles in Mg-suite apatites, and a sample set covering a range in level of shock deformation. The pristinity level of the samples (Warren, 1993) was taken into account with the aim of measuring the water content incorporated into the apatite lattice during its crystallization. All the selected samples have been previously characterized as compositionally pristine (Warren, 1993), meaning their bulk compositions and, in particular, highly siderophile elements (HSE) contents are representative of individual, unmixed, endogenous igneous rocks. At the same time, however, we targeted lithologies exposed to different levels of shock metamorphism.
Seven different lunar samples (nine specific thin-sections) of Mg-suite norite, troctolite and gabbro from the Apollo 17 collection were selected for this study. We previously determined the shock stage of some of the samples based on plagioclase shock-barometry (Černok et al., 2019 and references therein) and the shock microstructures of individual apatite grains using EBSD. These samples include troctolite 76535,51 (shock stage 1, S1), anorthositic troctolite 76335,60 (S2), Civet Cat norite fragment from the 72255,100 impact-melt breccia (S3-4), and heavily shocked norites 78235,43 and 78236,22 (S5 and S6). It is worth noting that the cooling history of the accessory mineral baddeleyite within the troctolite 76535 has recently been interpreted in the context of a large impact-melt sheet (White et al., 2020), despite not showing any compositional evidence for projectile contamination (Warren, 1993). The troctolite, however, does not display any shock features formed and has thus been assigned the deformation stage S1.
By applying the same criteria as in Č ernok et al. (2019) we here assess shock-stage deformation of apatite in troctolite 73235,136, as well as in three sections containing different lithologies inside an impact-melt breccia 76255 (norite 76255,68; gabbro 76255,71 and troctolite 76255,71). EBSD data acquired from these apatites can be found in Supplementary Material (SM, Supplementary Fig. 1). A list of studied samples, their cosmic ray exposure (CRE) ages, and all analyzed apatite grains with their interpreted shock-deformation stages is provided in Table 1. Thus, the selection of samples covers all representative lunar Mg-suite lithologies and shock histories, including coherent rock specimens, lithic fragments included within polymict impact breccia, and impactmelt matrix. Detailed sample descriptions, including associated
Analytical techniques
Polished thin-sections of Apollo samples were used in this study. All sections were prepared at NASA Johnson Space Center using a water-free medium, and mounted onto a glass slide using araldite epoxy (see for further details).
Scanning Electron Microscopy and Electron Backscatter Diffraction (EBSD)
The studied polished thin-sections were carbon-coated and examined using a Quanta 3D Focused Ion Beam Scanning Electron Microscope (FIB-SEM) at The Open University, equipped with an Oxford Instruments INCA energy dispersive X-ray detector. The electron beam was generated with an acceleration voltage of 15-20 kV and a current of 0.52 to 0.62 nA. A working distance of 15 mm was used for the generation of secondary electron (SE) and backscatter electron (BSE) images, as well for the acquisition of energy dispersive spectra (EDS). Micro-to nano-scale structural analysis was conducted by EBSD using an Oxford Instruments Nordlys EBSD detector mounted on a Zeiss EVO MA10 LaB 6 -SEM housed at the University of Portsmouth (see SM and Supplementary Table 1), following earlier protocols (Darling et al., 2016;Č ernok et al., 2019).
H measurements using NanoSIMS
Following EBSD analyses, apatites were selected based on their textural context and microstructural features for in-situ measurements of H content (expressed as equivalent H 2 O) and associated H isotopic composition (expressed using δD notation). We used a CAMECA NanoSIMS 50L at The Open University, following the protocol reported by Barnes et al. (2013) and applied in several subsequent studies (Barnes et al., 2014;Barrett et al., 2016;. Thin-sections of the samples were covered by a 20-30 nm thick layer of carbon coat, kept for approximately one week in a vacuum oven at 40 • C, and then placed into the instrument airlock under the vacuum of ∼8 × 10 −8 Torr or better for 2-3 days before the start of each analytical session. An indium block containing the apatite standards Ap003, Ap004 and Ap018 (McCubbin et al., 2012) and the nominally anhydrous San Carlos (SC) olivine, were simultaneously carbon coated with the samples and kept in the vacuum along with the samples. The apatite standards and the SC olivine are used for calibration and background corrections, respectively. Details on instrumental parameters, data reduction, analytical background and spallation corrections are provided in the SM, Supplementary
Electron Probe Microanalysis
Electron probe microanalysis (EPMA) was performed on all apatite grains after analysis by NanoSIMS, to avoid any volatile loss by exposure to intense electron beam of the EPMA. Analyses were undertaken using the CAMECA SX100 instrument at The Open University, with protocols that were slightly modified after previous studies in the same laboratory (e.g., Barnes et al., 2013;. Analytical conditions are documented in SM and in Supplementary Table 4.
Textural context and shock-induced microtexture of apatite
In the studied selection of coherent rock fragments and impactmelt breccias, we observed two distinct types of apatite based on their nanostructure. The primary apatites show variable degrees of internal nanostructural complexities induced by shock-induced deformation, ranging from stage S1 to S6 (Fig. 1). They were found in all coherent rocks and within lithic clasts inside the breccia, in contact with another primary, magmatic mineral assemblage (Fig. 1). The secondary apatites show no nanostructural complexities and were exclusively associated with impact melt in the matrix of all three 76255 breccia samples (Fig. 2).
Primary apatite is less abundant (a few grains per thin section) and is comparably small, with the largest grains reaching up to ∼50 to 100 μm in their longest dimension. The microtexture and nanostructure of apatite from samples 76535, 76335, 72255, 78235 and 78236 were reported in a previous study (Černok et al., 2019). Following the criteria used in Č ernok et al. (2019), we assess the extent of shock-deformation stages in all other primary apatite grains in samples 76255 and 73235 based upon nanostructural data presented in Supplementary Fig. 1. In brief, the criteria used include the extent of subgrain size (defined by shape or low angle grain boundaries of <20 • ), their density and total misorientation across the grain surface, which is a useful measure of crystal-plastic deformation (CPD). The observed features were derived from band contrast (BC) images, texture component (TC) maps and associated pole figures ( Supplementary Fig. 1). We previously established that the internal nanostructure of primary apatite shows progressively higher levels of deformation with increasing shock level, ranging from no discernible nanostructural complexities at S1, to the maximum observed CPD at S5 as recorded by ∼25 • of total misorientation across apatite grains (Černok et al., 2019). Apatite within lithic fragments in all 76255 sections typically contain large subgrains and low degrees of CPD, which is comparable to apatite from S2 stage of deformation (Černok et al., 2019). More complex subgrain formation, along with 15-20 • of total misorientation, was observed in apatite from troctolite 73235,136 and one apatite grain in 76255,71 (Ap10, Fig. 1 and Supplementary Fig. 1) that occurs at the contact of a large pyroxene clast and the fine-grained impact melt. These features are comparable to apatite from S3-S4 stages of shock deformation (Černok et al., 2019).
All secondary apatites were found in sample 76255, in impactmelt portions of all three thin sections (68, 71 and 75). This textural type is very abundant (several dozen per thin section) and forms, on average, larger grains that reach up to ∼200 μm in the longest dimension. Most secondary apatites appear to be individual, anhedral (A) mineral grains within an impact melt with scalloped grain outlines ( Fig. 2A and B), but a few euhedral (E) crystals were also observed (Fig. 2D). All secondary apatites, regardless of their shape, show strong diffraction patterns with unique orientation throughout the interior that are easily indexed with a good fit to the predefined apatite structure. They yield band contrast images (Fig. 2B) and inverse pole figures representative of single, undeformed crystals (Fig. 2C). They demonstrate a lack of sub-grain formation or any other evidence of internal structural complexities (Fig. 2B, C). The anhedral grains, with scalloped outlines, are possibly xenocrysts that recrystallized in contact with the impact melt ( Fig. 2A, B). However, considering the sub-euhedral to anhedral shape of fine-grained plagioclase and pyroxene from the impact melt in contact with this apatite ( Fig. 2A, B, D), we cannot exclude a possibility that these anhedral grains crystallized directly from the melt. The euhedral apatite unambiguously grew directly from the impact melt (Fig. 2D). In summary, NanoSIMS analyses were performed on twenty-eight individual apatite grains previously characterized by EBSD. Eighteen apatite grains are of primary origin (S1-S6) while another ten are secondary in origin, of either anhedral (A) or euhedral (E) shape.
Composition of Mg-suite apatite
The measured composition of Mg-suite apatite studied here (Fig. 3) is comparable to that of previously reported data from highlands apatites (e.g. Barnes et al., 2014;. All grains are fluorapatite (F > Cl > OH), most of which have Cl-content varying from 0.5 to 1.2 wt%. Apatites in troctolites 76535 and 76335, as well as in the norites 78236 and 72255 show the highest Cl abundance (>0.8 wt%). All studied grains within breccia 76255 contain ∼0.6 wt% Cl, except for the Ap3 in 76255,75 which contains only 0.2 wt% Cl. Similarly, the sole apatite grain in 73235 troctolite contains 0.2 wt% Cl. Due to unreliable F-measurement using EPMA protocol without TDI-correction, the F-content (Fig. 3) was calculated by subtracting OH content, determined by NanoSIMS, and Cl content, determined by EPMA (see Supplementary Table 5 for exact composition and stoichiometry). The obtained difference between the estimated and measured F content do not show significant discrepancies for most of the studied grains (Supplementary Table 5). The most common impu- rities in all studied apatites are Fe, Mg, and Si. The Cl-rich apatites (troctolites 76535 and 76335; norites 78235 and 78236) contain a negligible amount of Ce, Nd, and Y compared with the Cl-poor apatite in impact melt breccia 76255, Civet Cat norite 72255, and troctolite 73235. No obvious difference in major element composition is observed between primary and secondary apatite in the breccia 76255.
H 2 O abundance and δD composition
Apatite in shocked Apollo 17 rocks shows a wide range in water concentrations and in δD values (Table 1 and Fig. 4). As a number of apatite grains contain low water abundance (i.e., <100 ppm H 2 O), consideration must be given to spallogenic D and H that significantly contribute towards the measured H abundance, and more drastically, to D/H isotopic ratios (e.g., Füri et al., 2017;Saal et al., 2013). We report spallation corrected data using both available D-production rates ( (76255, 68; 76255,71 and 76255,75) contain between 36 ppm and 964 ppm H 2 O, with δD values varying from −475 ± 144 to −59 ± 175 . Weighted average δD value of 20 apatites from all three sections is −218 ± 45 , with that of 10 secondary apatites being indistinguishable from it (−217 ± 44 ). Across the individual sections, grains in the troctolitebearing (76255,75) clast have the highest water content of all analyzed apatites (533-964 ppm) but show similar δD values (−177 ± 45 ). The section with noritic clasts in impact melt (76255,68) contain less than ∼270 ppm H 2 O, with an exception of an individual spot (Ap10) that has ∼500 ppm water, and a weighted δD value of −248 ± 84 . Apatite in the section containing the gabbro clast (76255,71) has among the lowest water contents, with less than 90 ppm H 2 O, apart from one individual spot (Ap10, 679 ppm) and an associated δD value −230 ± 77 that is comparable to other samples. In summary, in the breccia 76255 no obvious difference in water content or the associated δD is observed between clast-hosted (primary) and melt-hosted (secondary, i.e., either recrystallized or grown from impact melt) apatites. (Robinson et al., 2016;Barnes et al., 2014), we exclude this sample from any further discussions on interpreting the volatiles data on lunar crustal samples and consider it as an 'anomalous' sample in this context (see further discussion in SM). However, a good internal consistency of D/H measured in control sample 78235 (Barnes et al., 2014) provides confidence in the reproducibility of our results.
H 2 O and δD of Mg-suite apatite in the context of its nanostructure and shock-deformation
The measured water contents are well within the range of values previously reported for apatite in different highlands lithologies (Fig. 5), including evolved rocks such as felsite, quartzmonzodiorites, and granite clasts in breccias, but also the Mgand alkali-suite samples (Greenwood et al., 2011;Barnes et al., 2014;Robinson et al., 2016). Similar abundances were measured in apatite from KREEP and high-Al basalts (Greenwood et al., 2011;Tartèse et al., 2014a,b). The water-rich apatites measured here contain water in abundances also comparable to that of the lower-end values reported for apatite in low-Ti and high-Ti basalts ( Fig. 5; Barnes et al., 2013;Tartèse et al., , 2014aPernet-Fisher et al., 2014;Treiman et al., 2016). Our measurements do not show any particular lithology-specific variabilities in water abundances, as recorded in apatite, but do show that not all troctolite-hosted apatite grains are water-poor, as previously inferred (e.g., Barnes et al., 2014). δD values of the Mg-suite apatite, ranging from −535 ± 56 to +147 ± 224 (2σ ), reveal a pattern in which mostly the water-poor apatite (<100 ppm H 2 O) are depleted in D relative to the water-rich apatite. This range of δD values is within that mea- of all analyzed spots. B) Different shock stages of primary (S1-S6), secondary anhedral (A) and secondary euhedral (E) apatite, using correction after Merlivat et al. (1976). The overall H 2 O-δD trend observed in this study does not seem to correlate with the level of shock deformation experienced by individual apatite grains (Fig. 4b). For instance, apatites in the two most heavily-deformed norites, 72255 Civet Cat and 78236, plot at the opposite ends of the observed H 2 O-δD range. Results from impact-melt breccia 76255, which contains primary apatite inside lithic clasts (norite, gabbro and troctolite) and secondary apatite in the surrounding fine-grained impact melt, span across the entire H 2 O-δD range seen in this study. However, apatites from each individual clast and the surrounding impact melt have distinct H 2 O-δD values. For example, secondary apatite in 76255,68 contains similar amount of water as primary apatite inside the norite clast (Table 1), but in most cases the content is different than in the troctolite and the gabbro sections. This could indicate that the water abundance in the impact melt was not well homogenized or that the variable water abundances in the crystallizing apatite are being dictated by complex partitioning behavior of F, Cl and OH in the melt. Lithic clasts incorporated in the impact melt are shockdeformed at pressures and temperatures (S2) insufficient for their incipient melting, so the impact melt was not derived from those exact clasts. Instead, they were incorporated as coherent, unmelted clasts inside an already formed impact melt. However, the source rock must have had similar composition and contained apatites of comparable volatile signature. The source rock apatites that recrystallized in contact with impact melt, possibly acted as nuclei for further growth of secondary, anhedral apatite. Source apatite that melted in contact with impact melt was the main phosphorous and volatile source to form euhedral secondary apatite within the impact melt. Indistinguishable mineral chemistry of primary and secondary apatite associated with indistinguishable δD values, despite their obvious textural and microstructural differences, further strengthens the hypothesis of similarity with the source rock. These results strongly suggest that no discernible water loss and no preferential loss of H over D is recorded in apatite as a result of the impact.
This result is surprising, taking into account very high Hdiffusivity in apatite, which is expected to be even higher at temperatures relevant to impact melt (e.g. Higashi et al., 2017). Nevertheless, shock-experimental study on plagioclase (Minitti et al., 2008a) and in situ studies of Apollo highlands samples (Hui et al., 2017) have reported that this nominally anhydrous mineral behaves in a similar way to apatite, as observed in this study, despite an expected increase in H-diffusion at high temperatures (Johnson and Rossman, 2013). However, other minerals show very different behavior. For example, meteorite-relevant amphiboles (Minitti et al., 2008b(Minitti et al., , 2008a experience slight water-enrichment, followed by an increase in δD when studied in the same shock-experiment as plagioclase. On the other hand, D-heavy signature of martian atmosphere is seen in numerous shock-affected minerals found in martian meteorites, implying that shock-induced textures can enhance mixing of different reservoirs. Furthermore, individual grains of martian phosphates have been observed to undergo devolatalization during an impact event (Howarth et al., 2015) or due to shock-induced phase transitions (Adcock et al., 2017). In contrast, ringwoodite, formed as a high-pressure polymorph of water-poor olivine in martian meteorite Tissint , was observed to incorporate more water, aided by the shock-induced phase transition. This resulted in a considerably higher δD signature of ringwoodite, arising from incorporation of D-heavy martian atmosphere into the mineral.
In summary, the current understanding of shock-induced changes in volatile signatures is seemingly ambiguous, given that both water-bearing and nominally anhydrous minerals have been observed to be affected but also entirely unaltered by shockdeformation processes. Based on this comprehensive study, reporting the largest dataset currently available, of water abundance and H isotopic composition from systematically analyzed shocked apatite in lunar samples, we conclude that relatively water-rich apatite in Mg-suite rocks are not affected by shock processes in terms of their water inventory and δD signatures -a finding that is likely to be of relevance to other shock-related apatite occurrences, particularly originating from airless bodies (e.g., 4Vesta).
Grains Ap3 and Ap4 in 76255 show a large intragranular inhomogeneity, with both grains showing spatially resolved, distinct δD values. Due to this fact, we exclude the possibility that our samples record a primitive low-δD reservoir, as inferred by previous studies (Robinson et al., 2016), and conclude that the source of low-δD must be of secondary origin. Direct implantation of solar hydrogen into apatite, found in these samples, is unlikely because solar wind can only penetrate <1 μm under the surface of directly exposed grains (e.g., Robinson et al., 2016). The main low-δD (i.e., H-rich) reservoir on the Moon is the surface regolith (e.g., Liu et al., 2012;Treiman et al., 2016), containing fine grains with long exposure to surface implantation of solar-wind hydrogen. Treiman et al. (2016) suggested that the D-poor signature of apatite they measured in basalts, reflects mixing with a regolith component. The physical mechanism for introduction of hydrogen into apatite is likely to be thermal degassing of the D-poor regolith, induced by the heat of a nearby impact or, in case of mare basalts, the heat of basalts themselves (Treiman et al., 2016 and references therein). Similar regolith-interaction processes were recognized in high-Al basalt 14053 (Greenwood et al., 2011;Taylor et al., 2004) and olivine-hosted melt inclusions in basalts (Singer et al., 2017). Traces of trapped solar noble gasses within the sample 76255 were reported early on (Bogard et al., 1975, and references therein), however, they seem to be absent in 72255 breccia (Leich et al., 1975). In addition, impact-melt breccia 15405 was reported to contain apatite with variable δD composition (Barnes et al., 2014). Robinson et al. (2016) excluded regolith as a source of D-poor water following a different line of arguments; they calculated that even the maximum amount of regolith which could have been assimilated by the impact melt, produced by the formation of Aristillus crater, the possible source of the Apollo 15 samples they studied, could not have supplied enough hydrogen to alter the D/H ratio of the most water-poor apatite. Their reasoning is further strengthened by the fact that all apatite analyzed in those samples had similarly low-δD values. However, in our study we observe that low-δD apatite is heterogeneous and is either directly affected by impact melt (secondary apatite) or is located inside a small clast (primary apatite) surrounded by impact melt, implying that they most certainly experienced thermal processing. Furthermore, all primary low-δD apatites also show very complex nanostructures with sub-grain formation as a result of shock-deformation. Due to the textural context and the nanostructure of the low-δD apatite observed in our study, we conclude that networks of microto nanoscale grain boundaries could have acted as open pathways to enhance diffusion of hydrogen from D-poor regolith into the apatite grain, lowering its overall δD value, and resulting in its heterogeneous distribution. The regolith could have been either incorporated directly into the impact melt, or experienced thermal degassing caused by the heat of the impact melt, or both. Unfortunately, we cannot estimate the amount of regolith that was assimilated in the impact melts, produced during large basinforming impacts (e.g. Imbrium or Serenitatis), which are thought to be the likely sources of the breccias 72255 and 76255 (Thiessen et al., 2017). Although it is expected that the large basin-forming impacts would vaporize much of the regolith (e.g., Cintala, 1992), these breccias could have been formed as ejecta material further away from the impact crater where they incorporated or affected the local regolith. No further thermal processing of these samples had taken place after the impact event by which the breccias were formed. Furthermore, this study also reveals that the extent to which δD values can be affected during an impact largely depends on the initial water abundance of the host apatite. Regardless of the level of shock, only the water-poor apatite, exposed to shock deformation, appears to show extremely low-δD values. The apatites with extremely low-δD values are found within the same samples that also contain apatite with higher-δD values, with the overall microtextural setting strongly suggesting that they are of coeval origin, and thus unlikely to have sampled two separate δD reservoirs. On the other hand, apatite that contains more than ∼100 ppm H 2 O is unlikely to change its isotopic composition significantly, thus more likely to have preserved its primordial H isotopic composition.
Implications for source and timing of delivery of lunar water
Numerous petrogenetic models have been proposed to account for the contrasting primitive and evolved characteristics of the Mgsuite lithologies (summarized in Shearer et al., 2015). In brief, most models hypothesize partial melts of early LMO cumulates intruding into the overlying anorthositic crust, with a KREEP component either present in their source regions at the time of melting or being assimilated during their ascent. On the other hand, a few models have suggested involvement of large-scale (i.e., basin forming) impact events in producing at least a portion of the Mg-suite rocks (Shearer et al., 2015 and references therein), most recently of the troctolite 76535 (White et al., 2020). New research would be needed to understand the effect of the large-scale impact melting on volatile degassing of a large melt sheet and, consequently, on indigenous lunar volatile signature in cumulates crystalized within it. However, the impact-related models infer that the cumulates crystallized in insulated, slow-cooling large melt sheets or pooled beneath the anorthositic crust (summarized in Shearer et al., 2015), thus experiencing minimal degassing. Regardless of the exact mode of formation of Mg-suite rocks, their 'deep-seated' origin is universally accepted, suggesting that apatite crystallized in most of Mg-suite rocks would have experienced comparatively minimal, if any, H-isotope fractionation through degassing compared to those formed in mare basalts (e.g., Tartèse et al., 2014a,b). Thus most Mg-suite rocks are believed to directly record the primordial H isotopic composition of the lunar water (Barnes et al., , 2014. The lunar mantle derivatives seem to record variable δD values. The weighted average δD value of apatite in the Mg-suite obtained in this study (−209 ± 47 ) coincides with the weighted aver-age δD value estimated for lunar KREEPy samples (McCubbin and Barnes, 2019). In particular, the obtained δD value is similar to that of apatite in other plutonic rocks (Barnes et al., 2014), but also to that of basaltic lithologies in which apatite crystallized before any substantial degassing, such as in KREEP basalts (Tartèse et al., 2014a,b). Furthermore, it closely matches the values recently obtained for olivine-hosted melt inclusions in mare basalts (Singer et al., 2017). In contrast, anorthositic rocks (Hui et al., 2017;Greenwood et al., 2011) and olivine-hosted melt inclusions in volcanic glasses (Saal et al., 2013) record δD values of up to +350 . Hui et al. (2017) attempted to explain the difference in the observed δD values between anorthosites and Mg-suite rocks by using a mixing model. However, the mixing model failed to reconcile the two different values and instead it was suggested that the two distinct sources could have originated from a layered LMO, in which surface anorthosites were affected by LMO degassing and, thus, recorded increased δD-values. The cumulates in the lower portion of the LMO, such as Mg-suite rocks, sample primordial δD that was largely unaffected by magmatic degassing. The δD value obtained in the current study (−209 ± 47 ), indistinguishable from other KREEPy lunar rocks (McCubbin and Barnes, 2019), suggests that early lunar mantle, or deeper layers of LMO, have δD strikingly similar to terrestrial deep mantle (Hallis et al., 2015). This further strengthens the existing hypothesis of a common origin of water on Earth and Moon.
Based on recent FTIR measurements in ferroan anorthosite plagioclase (Hui et al., 2013(Hui et al., , 2017 and some other studies (Chen et al., 2015; it has been estimated that the LMO contained ∼136 ppm of water. Models suggest that this amount of water was likely to be too high to survive the Moonforming impact (Elkins-Tanton and Grove, 2011). Instead, most of the water in lunar interior was suggested to have accumulated during the protracted evolution of the LMO sometime between 10 and 200 Ma after the formation of the Moon (Tartèse and Anand, 2013 and references therein), with only up to 25% of terrestrial water contributing towards the entire lunar budget . In a recent study, Greenwood et al. (2018) suggested that most of the terrestrial water was formed during planetary accretion, before the Moon-forming impact occurred. They further set the constraints on composition of the Late Veneer based on the oxygen isotopes, that most likely CM and water-poor enstatite chondrites delivered water to the Earth. On the other hand, the composition of the Late Veneer determined by the water abundance in lunar samples concluded that other types of carbonaceous chondrites in addition to CM -CO, CI, and possibly CV -acted as a source of water . The similarities in the mantle values of terrestrial and lunar water, followed by the assumption that most of terrestrial water was delivered before Moon formed, dictate that the chondrites accreted on Earth prior to the Giant Impact must have been of comparable composition to those accreted on both planets during the Late Veneer, and, thus, during the time when most of Mg-suite apatite crystallized (10-200 Ma after the Giant Impact). Furthermore, the current study reveals that the H isotopic composition of apatites within the early lunar crustal rocks (≥4.2 Ga; 76335; 78235, 78236) and impact-melt breccias formed at ∼3.9 Ga (72255, 73235 and 76255) are similar. Based on this fact, it can be hypothesized that throughout the early bombardment history of the Moon, impactors delivered water of the composition similar to indigenous lunar and terrestrial water (i.e., impactors did not vary in composition significantly, at least in their H isotopic composition; e.g., CM, CO, CI, and possibly CV) or that they had low water abundances (e.g., enstatite chondrites) and hence no capacity to affect the H isotopic composition of pre-existing water in the primary lunar rocks.
Conclusions
This study reports on the chemistry, water abundance and H isotopic composition of Mg-suite apatite from troctolite, gabbro and norite lithologies that show variable degrees of shockdeformation. Petrographic context, microtextural and nanostructural inspection reveal two distinct types of apatite: primary (with shock stages S1-S6) and secondary (recrystallized or grown from impact melt). No obvious difference in mineral composition, H 2 O abundance and δD was observed between the two textural types. Mg-suite apatite studied here contains up to ∼960 ppm of H 2 O and shows δD values (−535 ± 134 to +147 ± 194 ) within the range of terrestrial and carbonaceous chondrite values, with the weighted average δD values of −209 ± 47 being indistinguishable from the Earth's deeper mantle. H 2 O-δD systematics of apatite do not correlate with the level of shock each apatite grain has experienced, but it is apparent that individual water-poor apatite grains were influenced by a process, which resulted in the lowering of their original δD values. The grains with the lowest δD values are either in contact with impact melt or have severely complex nanostructures induced by shock-deformation. Locally, on a level of an individual grain, the network of grain boundaries could enhance reaction with D-poor regolith, which is thermally processed by a hot impact-melt, resulting in D-poor signature of apatite. Water-rich primary apatites (above ∼100 ppm H 2 O) that contain complex microtextures do not show any evidence of significant water-loss or alteration in their δD value. Thus, water-rich apatites in some of the oldest lunar crustal materials act as a reliable recorder of the primordial H isotopic signature of the Moon, despite experiencing variable degrees of shock-deformation at a later stage in lunar history.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 7,981.4 | 2020-08-15T00:00:00.000 | [
"Geology"
] |
Aeolian Creep Transport: Theory and Experiment
Aeolian sand transport drives geophysical phenomena, such as bedform evolution and desertification. Creep plays a crucial, yet poorly understood, role in this process. We present a model for aeolian creep, making quantitative predictions for creep fluxes, which we verify experimentally. We discover that the creep transport rate scales like the Shields number to the power 5/2, clearly different from the laws known for saltation. We derive this 5/2 power scaling law from our theory and confirm it with meticulous wind tunnel experiments. We calculate the creep flux and layer thickness in steady state exactly and for the first time study the relaxation of the flux toward saturation, obtaining an analytic expression for the relaxation time.
Introduction
Aeolian sand transport is responsible for desertification processes and bed topography evolution. Therefore, the prediction of the sand transport rate under turbulent wind conditions is fundamental for the management of land resources and sustainable environmental development (Durán et al., 2011;Kok et al., 2012;Pähtz et al., 2020;Shao, 2008). From a physical point of view, sand transport is the dynamic response of a granular bed under the shearing of turbulent wind. The two main types of aeolian sand transport are saltation (grains move by successive jumps over the bed) and creep (rolling and sliding over the bed surface) (Bagnold, 1941;Pähtz et al., 2015). Saltation has been extensively investigated in the past decades (Andreotti, 2004;Ho et al., 2011;Iversen & Rasmussen, 1999;Martin & Kok, 2017;Sørensen, 2004). In contrast to saltation, creep still remains virtually unexplored.
In previous studies, mainly bed traps were used to measure creep transport rates (Bagnold, 1941;, and only very limited theoretical approaches for creep have been reported up to the present (Wang & Zheng, 2004). All proposed creep transport laws are either empirical or semiempirical, and lack a solid physical basis. Therefore, a solid theoretical framework describing creep precisely is badly required. Here we will propose for the first time a theoretical model for aeolian creep, finding a 5/2 power scaling with the Shields number for the creep transport rate. This new law was validated through wind tunnel experiments. Furthermore, we will explore the relaxation of creep transport to steady state. saltation layer is a dilute rapid granular flow, in which the saltating grains are accelerated by the wind before they impact the creep layer. In the creep layer, a dense shear flow of sand is driven by the shear stress τ 0 applied on the surface of the creep layer.
Saltation Layer
To describe flow in the saltation layer, one can use a continuum saltation model based on mass and momentum conservation (Sauermann et al., 2001) where ρ sal and u sal are the density and velocity of the sand, in the saltation layer, respectively. Г is the erosion rate, and f d and f b denote the drag force and friction force acting on a volume element of the saltation layer. The erosion rate Г, which is the difference between the vertical flux of impacting and splashed grains, can be written as (Parteli et al., 2007;Sauermann et al., 2001) where l is the average saltation length, γ is a model parameter which determines how fast the system reaches its steady state. τ g0 is the grain-borne shear stress at the ground, and τ t is the threshold shear stress. A detailed derivation of the drag and friction forces f d and f b can be found in Parteli et al. (2007) or Sauermann et al. (2001). Finally, combining mass and momentum conservation Equations 1 and 2, the erosion rate Equation 3, and the drag and friction forces f d and f b , the closed model for sand transport in the saltation layer can be established (Durán & Herrmann, 2006;Parteli et al., 2007;Sauermann et al., 2001), and the following expression can be obtained for the steady state saltation flux Q s at saturation.
where z m and z 0 denote the mean saltation height and the roughness height, respectively. z 1 is a reference height, between z 0 and z 1 , α is the splash parameter, and C d is the drag coefficient.
If modifications of the total shear stress by suspended grains are neglected, τ can be assumed to be constant at any height above the ground. According to Owen (1964), the total shear stress τ can be split into the airborne shear stress τ a and the grain-borne shear stress τ g , and we obtain τ = τ a + τ g = const.
Creep Layer
The creep layer is a thin dense sheared sheet of flowing grains driven by the surface shear stress τ 0 . Because the creep layer is dense, the air flow will not directly affect the movement of grains inside the creep layer. At the upper surface of the creep layer ( Figure 1) one has τ a (z = s) ≡ τ a0 and τ g (z = s) ≡ τ g0 . τ a0 is applied directly to the upper surface of the creep layer, and τ g0 is the momentum transfer from the saltating grains to the creep layer during the impacts. Since we are formulating a continuum theory, the discrete impacts from saltating grains are here described by a momentum transfer. Therefore, they together drive the creep layer with τ a0 þ τ g0 ¼ τ 0 ¼ ρ a u 2 * . We define a Cartesian coordinate system Oxz, with the x axis pointing in the streamwise and the z axis in the vertical direction, respectively ( Figure 1). Let s(x, t) be the upper surface of the creep layer and b(x, t) its base. Hence, the thickness of the creep layer is given by h(x, t) = s(x, t) − b(x, t). If variations in the bulk density are neglected, the creep can be considered as incompressible. The continuity equation then becomes The momentum balance equation is where u and w are the velocity components in the x and z directions, respectively. The transverse velocity in y direction is not considered here. σ and τ are the normal stress and internal shear stress, respectively. ρ = ϕρ s is the density where ϕ is the sand volume fraction of the creep layer and ρ s the sand density.
Based on Sayed and Savage (1983), the constitutive relation for the creep layer can be expressed by a nonlinear viscous law which is related to pressure and strain rate In Equation 7, τ y is the Coulombic yield stress which is proportional to normal stress σ, and τ v denotes the viscous stress, which is the non-Newtonian stress caused by particle collisions and is proportional to the square of the shear rate _ γ ¼ ∂u=∂z. d is the grain diameter, χ is a model parameter, g denotes gravity, and μ 0 is the coefficient of dynamic friction. Using kinematic boundary conditions at the upper surface and at the base of the creep layer (Iverson, 2012), we obtain E s and E b are the entrainment rate at the upper surface and the erosion rate at the base of the creep layer, respectively. Additionally, if there is no slip at the base of the creep layer, the dynamic boundary conditions along the upper surface and the base of the creep layer become where the superscripts "s" and "b" denote the upper surface and the base, respectively. It is important to mention that these equations can equally well describe rippled surfaces or rugged terrain just by setting the apropriate boundary condition on b(x, t).
Steady State Solution
In the steady state of creep motion, that is, under uniform and steady conditions, the time and space derivatives in Equations 5 and 6 vanish. In this case, combining the constitutive Equation 7 and the boundary 10.1029/2020GL088644
Geophysical Research Letters
conditions, the exact solution for the equilibrium creep motion can be derived. The velocity profile in the creep layer is given as u where h eq = τ 0 /ρgμ 0 is the depth of the creep layer in steady state. Then, the creep flux in steady state can be obtained as This means that the creep flux increases with the shear stress to the power 5/2, and defining the Shields This is the first time to our knowledge that such a law is proposed for creep transport rate.
General Case
Even not being in steady state, the continuity Equation 5 and the momentum balance Equation 6 can be integrated over z from the base to the surface of the creep layer following Leibniz' rule. The depth-integrated continuity and momentum equations are then obtained using the kinematic and no-slip boundary conditions at the base and the upper surface of Equations 8 and 9 where overbars denote depth-averaged quantities defined as φ ¼ 1=h ð Þ∫ s b φdz for any variable φ. E s is the entrainment rate at the upper surface of the creep layer which is determined by the erosion rate Г of the saltation layer through E s = Γ/ρ s . In order to calculate the three unknown fields, namely, the depth h(x, t), the surface elevation s(x, t), and the average velocity u x; t ð Þ, we need to supplement an independent equation. For this purpose we use the depth-integrated energy flux equation, which was obtained from a weighted residual method (Capart et al., 2015;Richard & Gavrilyuk, 2013;Steffler & Jin, 1993). The weight function selects the velocity u(x, t) using Leibniz' rule and the boundary conditions. After the integration of the momentum equation with this weighting function, the following equation can be obtained ∂ ∂t Assuming that the velocity profile away from steady state has the same form as the velocity profile at steady state, we can obtain the average of the square and the cube of the velocity as u 2 ¼ 25=16 ð Þ u 2 and u 3 ¼ Under these assumptions, the governing Equations 14 and 15 for the creep layer become With the energy flux Equation 15, together with the continuity and momentum Equations 13 and 14, one can therefore obtain the depth h(x, t), the surface elevation s(x, t), and the average velocity u x; t ð Þ for aeolian creep.
Geophysical Research Letters
WANG ET AL.
In principle, the time-dependent equations for the saltation and creep layers could be solved numerically to calculate the transient behavior of aeolian sand transport. In order to simplify Equations 16 and 17, we will assume that saltation has reached steady state, and thus only the creep motion remains in a transient state. In this case, E s = 0, and ∂h=∂x ¼ ∂ u=∂x ¼ 0. Then the governing equations in the creep layer may be written as We have calculated the depth h eq and averaged velocity u eq of steady uniform creep motion in the previous section. In order to reduce the number of parameters and simplify the analysis, we first rewrite Equations 18 and 19 in dimensionless variables given by e h ¼ h=h eq , e u ¼ u= u eq , and e t ¼ t=t Assuming that slope effects are small, that is, The creep model of Equations 20 and 21 is a system of two coupled nonlinear ordinary differential equations. The relaxation process toward the steady state of creep transport can be described by the flow diagram of Figure 2 obtained numerically from Equations 20 and 21. Under different initial conditions, the system trajectories evolve along curved paths away from unstable initial points until they reach the stable Steady State A (1, 1). In steady state, the relationship between thickness and average velocity of the creep layer is given by Equation 10. After normalization, the dimensionless relation for the steady state becomes e u ¼ e h 3=2 , which is the black line shown in Figure 2.
In order to further understand the creep dynamics when the system is not in steady state, we will now consider the transient from one steady state to another. Starting from creeping motion at a lower velocity in the Steady State B (0.5, 0.63), the shear stress τ 0 at the surface of the creep layer is suddenly increased, and consequently, the thickness and average velocity of the creep layer increase to a new Steady State A e h; e u e t→∞ ¼ 1; 1 ð Þ, illustrated by a thick solid blue trajectory in Figure 2. Similarly, the creep flux e q ¼ e u Figure S1 (see the supporting information). It can be seen that the transient of e h, e u, and e q first increases and then slowly relaxes toward steady values. In addition, we see that e q relaxes slower toward saturation than e u and e h. This means that the height and grain velocity first reach saturation, and the flux is the dominant mechanism controlling the transient to creep saturation, because the relaxation is limited by the slowest process. Note that the corresponding time scales reflect complicated processes involving the whole system dynamics. To find analytically the time scales involved in the transient behavior of creep transport, we search for the time needed to relax to the Steady State Q c given by Equation 11. Using Equations 18 and 19, we derive the relaxation equation for the creep flux We see that the creep flux q approaches the steady state exponentially with characteristic time T c According to Equation 23, T c scales as which characterizes the time scale of the response. T c increases with wind shear velocity as ∼ u 3 * and decreases with grain diameter as ∼ d −1 . To the best of our knowledge, this is the first time that the transient behavior of creep transport has been studied. Further work is required to experimentally confirm this transient behavior.
Experiments
Wind tunnel experiments were carried out to validate our theoretical results. The experiments were conducted in a multifunctional wind tunnel located at the Key Laboratory of Mechanics on Western Disaster
Geophysical Research Letters
and Environment, at Lanzhou University, with an experimental setup similar to that of Zhu et al. (2019). For a schematic design see Figure 3a. The experimental section of the wind tunnel was 22 m long, and its cross section was 1.3 m in width and 1.45 m in height. The incoming flow speed could be continuously adjusted in the range between 4 and 40 m/s. Roughness elements and a wedge were placed in front of the working section to accelerate the development of the boundary layer. A bed of about 10 m in length, 0.6 m in width, and 6 cm in thickness was filled with sand from the Tengger Desert (ρ s = 2,650kg/m 3 ) in China, which was sieved into five different categories (Zhu et al., 2019). In our experiment, we used sand with a median particle diameter of 251 μm. The wind velocity was measured at four different heights above the sand bed (4.3, 8.4, 13.3, and 20.5 cm) with I-type hot-wire probes (DANTEC 55P11) located near the end of the working section, which were connected to a constant-temperature hot-wire anemometer. The wind velocity profile satisfied the classical logarithmic law for turbulent boundary layers: u x z ð Þ ¼ u * κ ln z z 0 , and the wind shear velocity u * was obtained after fitting the logarithmic law to the averaged data (κ is the von Kármán constant of about 0.41). To measure the saltation and creep mass fluxes, a special sand trap located at the downwind end of the working section was implemented. The sand trap consisted of an upper and a lower part (Figure 3b). The upper part was 60 cm in height and 2 cm in width, divided into 30 bins, each bin having a vertical opening of 2 cm, and it was used to collect saltating sand grains. The lower part was used to collect creeping sand grains. To avoid that saltating particles enter the trap for the creeping ones, we designed a special device (Figure 3b). The principle of this device was based on the fact that over a flat surface saltating grains typically impact at angles between 9°and 16° (Bagnold, 1936;Nalpanis et al., 1993;Swann & Sherman, 2013;Willetts & Rice, 1989). To catch these grains, the height of the lower edge of the saltation trap was chosen to be 2 mm above the opening of the creep trap. The creep trap was positioned 15 mm downwind from the saltation trap having its opening at the height of the sand bed (see Figure 3b). Therefore, saltating particles could not enter the creep trap.
Experimental Results and Discussion
As shown in Figure 4, the dimensionless creep transport rate grows with the Shields number, meaning that the creep flux increases rapidly with the wind velocity. The solid line in Figure 4 of Q c = ρ s d ffiffiffiffiffi gd p À Á versus the Shields number Θ = τ 0 /(ρ s gd) was obtained from Equation 11. As shown in Figure 4a, theoretical values from Equation 11 are in quantitative agreement with the experimental data. And the creep transport rate Q c is found to scale as Θ 5/2 , confirming Equation 12. Moreover, in Figure 4b, we compare our theory with previous experimental results . The symbols show the dimensionless creep transport rate obtained from experiments for d = 0.152 mm and d = 0.382 mm under different Shields numbers, and the colored lines are the theoretical prediction of the form Q c ∝ Θ 5/2 . We observe very good agreement between the experimental data and our theory. Next, we will present also our results for the saltation layer (see Figure S2 in the supporting information). There is excellent agreement between the theory and experiment, indicating that Equation 4 accurately predicts the saltation transport rate. Typically, the sand transport rate can be expressed as a 1.5th order polynomial in the Shields number as seen in Equation 4. Recently also, a linear asymptotic relation between the sand transport rate Q s and the Shields number Θ (or u 2 * ) was suggested (Durán et al., 2012;Ho et al., 2011). As shown in Figure S2, a linear fit of the form Q s ∝ (Θ − Θ t ) is also compatible with the experimental data.
In Figure 5, we see both our experimental and theoretical results for the contribution of creep to the total transport as functions of the Shields number. The theoretical values come from Equations 4 and 11. Figure 5 also compares our theoretical prediction to field measurement (Sherman et al., 2019) and previous wind tunnel data . Sherman et al. (2019) used extensive field observations to find evidence that the proportion of creep transport compared to total transport weakly increases with shear velocity, in agreement with our theory and wind tunnel experiments.
Conclusions
The present study on aeolian creep transport quantifies an important geophysical phenomenon that has consequences on both bedform evolution and desertification. We proposed a theory for creep transport subjected to wind shear with the aid of a constitutive relation and a depth-averaged integration, which remarkably agrees with the results of wind tunnel experiments. Therefore, this theory might both provide a new way to fully and quantitatively validate aeolian creep transport and deliver a complete description of all relevant fields, including those that might be difficult to measure experimentally at present, such as the relaxation process of creep. These results can be useful in numerous applications ranging from sediment transport or aeolian erosive mechanisms to the understanding of the dynamics of the morphology of Earth's surface and on other planets.
Data Availability Statement
All data are available in Zenodo Digital Repository at https://zenodo.org/record/3884458#.Xt3VnoUmRhc. | 4,614.4 | 2020-08-03T00:00:00.000 | [
"Materials Science"
] |
Validation of a New Dynamic Muscle Fatigue Model and DMET Analysis
Automation in industries reduced the human effort, but still there are many manual tasks in industries which lead to musculo-skeletal disorder (MSD). Muscle fatigue is one of the reasons leading to MSD. The objective of this article is to experimentally validate a new dynamic muscle fatigue model taking cocontraction factor into consideration using electromyography (EMG) and Maximum voluntary contraction (MVC) data. A new model (Seth’s model) is developed by introducing a co-contraction factor ‘n’ in R. Ma’s dynamic muscle fatigue model. The experimental data of ten subjects are used to analyze the muscle activities and muscle fatigue during extension-flexion motion of the arm on a constant absolute value of the external load. The findings for co-contraction factor shows that the fatigue increases when co-contraction index decreases. The dynamic muscle fatigue model is validated using the MVC data, fatigue rate and co-contraction factor of the subjects. It has been found that with the increase in muscle fatigue, co-contraction index decreases and 90% of the subjects followed the exponential function predicted by fatigue model. The model is compared with other models on the basis of dynamic maximum endurance time (DMET). The co-contraction has significant effect on the muscle fatigue model and DMET. With the introduction of co-contraction factor DMET decreases by 25.9% as compare to R. Ma’s Model. Index Terms Muscle fatigue, maximum voluntary contraction (MVC), muscle fatigue model, co-contraction, fatigue rate, electromyography (EMG), maximum endurance time (MET), industrial ergonomics.
I. INTRODUCTION
In the field of industrial bio-mechanics, muscle fatigue is defined as "any exercise-induced reduction in the maximal capacity to generate the force and power output" (Vøllestad 1997). In industries, mostly repetitive manual tasks leads to work related Musculo-Skeletal Disorder (MSD) problems (Nur, Dawal, and Dahari 2014;Punnett and Wegman 2004). Work on repetitive and uncomfortable tasks can be painful (Huppe, Muller, and Raspe 2006) and leads to MSD (Chaffin, Andersson, and Martin. 1999;World-Health-Organization 2003). One of the reasons for MSD can be muscle fatigue (Nur, Dawal, and Dahari 2014). To limit MSD, the study of muscle fatigue can be very important. There are various factors which contribute to MSD problems. These factors are repetitive tasks, excessive efforts, long duration static tasks, uncomfortable working conditions, etc. These factors can cause problems like rapid muscle fatigue, increase in recovery time, muscles tension, muscle pain, muscle injury, tendon injury, nerves injury, etc. Muscle fatigue can have significant effect on muscle endurance. Muscle endurance is the ability to do some work or task over and over for an extended period of time without getting tired. The time when the force production can no longer be maintained is defined as the endurance time (ET). In this study we are mainly focusing on muscle fatigue during dynamic repetitive tasks.
Various static and dynamic muscle fatigue models were proposed earlier to study muscle fatigue (Ding, Wexler, and Binder-Macleod 2003;Hill 1938;Ma et al. 2008;Syuzev, Gouskov, and Galiamova 2010;Xia and Lawa 2008). Silva (Silva, Pereira, and Martins 2011) simulate the hill's model and validate it theoretically using Opensim. Dynamic model proposed by L. Ma (Ma et al. 2008) and R. Ma (Ma et al. 2011) have experimented validation for fatigue in arm with a static drilling posture and dynamic push-pull operation respectively. Maximum endurance time (MET) of muscle shows capacity of muscle to generate force up to the point of initiation of muscle injury and MSD. L. Ma and R. Ma also determined the maximum endurance time (MET) for static and dynamic condition respectively. Some other dynamic fatigue models were also introduced by various authors (Freund and Takala 2001;Liu, Brown, and Yue 2002;Ma, Chablat, and Bennis 2012a), however, no consideration about the co-contraction of paired muscles is taken into account in these models. The study of Missenard (Missenard, Mottet, and Perrey 2008) is one of the example to understand the fatigue and co-contraction, which shows the reduction in accuracy of tasks with fatigue. The main objective of this study is to revise the dynamic muscle fatigue model pro-posed by R. Ma by including the co-contraction factor of paired muscles into the model and experimentally validating the muscle fatigue model. The experiment duration part of each subject will then be compared to the dynamic maximum endurace time (DMET) determined for our model. The determined DMET will also be compared with L. Ma's and R. Ma's model. In this article, we focus on the study of muscle co-contraction activity, using elbow joint muscle groups as a target. Experiments were performed on 10 subjects to study the EMG activity of bicepss, tricepss and trapezius muscles. Processing of raw EMG data (Doguet and Jubeau 2014;Guvel et al. 2011) was done. With the assistance of EMG, the function of co-contraction is confirmed and calculated. The comparison of proposed model with L. Ma's model and R. Ma's models on the basis of DMET is also done. The comparison show the significant advantage of our proposed model (which will later called in article as Seth's model) over other models. The work related to the proposed model to limit musculoskeletal disorder is presented in VRIC 2016, France (Seth et al. 2016).
MUSCULAR FATIGUE
The dynamic muscle fatigue model is applicable on the dynamic motion of the human body parts. A dynamic muscle fatigue model was proposed by L. Ma (Ma et al. 2008;Ma et al. 2009) firstly applied on static drilling task. R. Ma (Ma et al. 2011;Ma, Chablat, and Bennis 2012b) develops this model for the dynamic motions like push/pull operation of the arm. However, the co-contraction of the muscles are not included in both models. In dynamic muscle fatigue model (Ma, Chablat, and Bennis 2012a), we selected two parameters Γ joint and Γ MVC to build our muscle fatigue model. The hypotheses can then be incorporated into a mathematical model of muscle fatigue which is expressed as follows: where, k is the fatigue factor (defines the rate of fatigue) and n is the co-contraction factor.
And, if Γ Joint and Γ MVC hold constant, the model can then simplify as follows: The other parameters for this model are same as in Table 1. n is the co-contraction factor.
Dynamic Maximum Endurance Time (DMET)
Endurance (also related to sufferance, resilience, constitution, fortitude, and hardiness) is the ability of a person or muscle to exert itself and remain active for a longer period of time, as well as its ability to resist, withstand, recover from, and have immunity to trauma, wounds, or fatigue. It is (Ma 2012) The reduction in the maximum exertable force or torque capacity of muscle is one of the hypothesis for the proposed dynamic muscle fatigue model. Maximum endurance time (MET) represents the maximum time during which a static load can be maintained (Ahrache, Imbeau, and Farbos 2006). The MET is generally calculated as the percentage of the maximum voluntary contraction (%MVC) or to the relative force/torque (Γ MVC = %MVC/100). MET models are used to predict endurance time of a muscle under static or dynamic conditions. By solving Eq. 2 for Γ cem (T ) = Γ max Joint with physical and mechanical parameters of motion using the method described by R. Ma (Ma 2012), DMET can be rewritten as: Here, f MVC = Γ max joint Γ MVC . The parameter 'd' involved in the DMET model varies between 0 and 1 and depends on the magnitude and speed of the movement. 'd' closer to 0 represents dynamic conditions and 'd' closer to 1 represents static conditions.
Push-Pull Operation and Muscles activities
The push/pull motion of the arm is flexion and extension of the arm about the elbow. The Push/pull activities with the muscle activation is shown in Fig. 2. There is co-contraction between Push and pull phases, however when phase changes from pull to push we can observe there is delay. The muscle activities shown in Fig. ?? is for flexion and extension in saggital vertical plane. Co-contraction factor 'n' The co-contraction is the simultaneous contraction of both the agonist and antagonist muscle around a joint to hold a stable position at a time. Assumptions made for finding cocontraction factor, which depict that, the co-contraction is the common intersecting area between the two groups of muscles, see the yellow area in Fig. 3. The co-contraction factor will be the same for each agonist and antagonist activities. The co-contraction area can be understand by the Fig. 3. This figure is just an example representation of the muscle activity during one motion cycle. In this figure, we can see the common EMG activity between biceps and triceps muscle shown by the orange color, which is co-contraction area. The formula for calculating the co-contraction index C A (represent part of co-contraction for each cycle) from EMG activities is given in Eq. 5. The trapezius activity shown along with the two muscles is co-activation.
Where, EMG common is the common area share by the EMG activity of bicepss and tricepss, EMG agonist and EMG antagonist are the full activities of the biceps and tricepss muscle.
The co-contraction index C A can also be represented as follows: C A = co-contraction index between the two group of muscles.
Where, a and b are constant float integers and x is the time.
The activities of both muscles are normalized with respect to the normalized value of the activities of each muscle, calculated using Eq. 8 described in section .
Subjects description
Ten male subjects participated in the experiments. The subjects details are given in Tab
Experiment Protocol
A biodex (REF) system is used to perform flexion and extension in isotonic mode in vertical plane as in Fig. 4, with 70 • range of motion (-20 • to 50 • ). Each protocol lasts 1 minute which includes 20 cycles (flexion + extension). The test protocol repetition continues till exhaustion of the subjects. During the fatigue test protocol, the external load will be 20% of MVC. MVC is tested every one minute and at the end of the protocol. To restrict the backward motion of the arm, a support is provided behind the upper arm. Data Acquisition
Data Processing and analysis
All the raw data were processed using standardized MAT-LAB program. Data processing includes noise filtering from raw EMG data with the band pass filter (butterworth, 2nd order, 10-400 Hz) and normalization of the data. EMG was normalized with a value calculated by Eq. 8. The total number of cycles compared for all the ten subjects are 1998 cycles. All the cycles are normalized on time scale and compared. The cycle selection for the flexion and extension phases is done according to the velocity change in each cycle. The collective EMG plots for bicepss, tricepss and Trapezius muscles are show in Fig. 5 and Fig. 6 for all the ten subjects and the collective comparison for the mechanical data position, velocity and torque is shown in Fig. 7 and Fig. 8
IV. RESULTS AND DISCUSSION
The raw data obtained after the fatigue test is processed and the results are discussed in this section. After processing the EMG data of all the muscle groups from Figs. 5, 6, 7 and 8, we can observe that when the bicepss are active during the flexion phase. There are always some activities from the tricepss and on the other hand when tricepss are active during pull phase, the bicepss are almost passive or activities are very close to zero. We can also observe the co-activation of trapezius muscle with the activation of bicepss. The activation of tricepss with the bicepss is co-contraction between two muscles during the flexion phase.The co-activation of the trapezius muscle is observed mostly in the flexion phase. The co-contraction index calculated by using Eq. 5 is fitted with Eq. 7 (looks almost linear) described in section III. The Figs. 9 -18 show the fitted graphs for the co-contraction percentage for test cycles of all ten subjects. In Figs. 9 -18 blue dots show the percentage area of contraction during each extension-flexion cycle and red curves show the exponential fit for the percentage co-contraction. This shows that the co-contraction percentage for activity between the muscles reduce as the fatigue test proceeds or the muscles get fatigued. By Eqs. 3 and 5 we can find n i as shown in Tab. 3, where i is the subject number.
We can notice that only the subject number 8 in Fig. 16 has increasing slope for the co-contraction area, this behavior can be associated with his sport activity which is rock climbing and very different from other subjects see Tab. 2.
The MVC values are measured between each protocol of one minute. We can see in most of the cases MVC decreases as fatigue increases. The MVC is same as Γ cem used in our model. The theoretical and experimental evolution of Γ cem is on the basis of k (fatigue rate) using Eqs. 2 and 3 and calculated, n i and C = 0.2. The evolution of Γ cem extension for fatigue parameter 'k' is shown in Figs. 19,21,23,25,27,29,31,33,35 and 37. Similarly the evolution of Γ cem flexion is shown in Figs. 20,22,24,26,28,30,32,34,36 and 38. In these figures blue lines show the MVC measured for flexion and extension after each test protocol of 1 minute. The MVC values measured are Γ cem (t), used in calculating fatigue rate 'k' using Eq. 3. The theoretical Γ cem is then calculated w.r.t minimum, maximum and average value of fatigue rate using Eq. 2. The theoretical and experimental evolutions of Γ cem show that the experimental values are well fit with in the theoretical model. The co-contraction factor have significant effect on the model. The minimum, maximum and average value of 'k' for each subject are shown in Tab. 4. The red, pink and black dotted curves in Figs. 19 -38 represent theoretical Γ cem , calculated from minimum, maximum and average values of fatigue rate 'k' respectively. The experimentally calculated values of Γ cem (t) is mostly in the range of theoretical Γ cem , which validates our muscle fatigue model. The fatigue rate increased with the input of co-contraction factor in the fatigue model, which shows the significant effect of co-contraction factor in the fatigue model. In Tab. 3 and 4, i represents subjects number.
DMET Analysis
The dynamic maximum endurance time was calculated for each subject according to their respective fatigue rate values. The results are given in Tab. 5 The fatigue rate for the flexion and extension phases are different for each subject. We have selected the maximum average value of k from both the phases using Eq. 3. The reason behind the particular selection of values of k is to make model more safe and hence analysis on the basis of that value. More the fatigue rate we choose for work design with Seth's model safer will the endurance time for the subject. In DMET calculations, 'd' represents the dynamic factor as mentioned before in Eq. 4 ranging between 0.1 to 1. The larger the value of d represent more static conditions. A smaller value of d represent more dynamic model. The comparison between the DMET calculated for the same subject with R. Ma's model and MET calculated by L. Ma's model has been done. The DMET comparison is shown in Tab. 5. The DMET is calculated using Eq. 4 for proposed model. MET calculation for L. Ma is done by the same equation with n = 1 and d = 1 because this model is for static conditions and without co-contraction. The DMET calculation for R. Ma is also done by the same equation with n = 1 and d = 0.5 because there is no co-contraction included and dynamic factor is for medium dynamic motion. For Seth's model, the DMET is calculated with the parameters, n = 1.38 and d = 0.5. The percentage difference between the DMET calculated from Seth's model and experiment test duration is also presented in Tab. 5. The DMET is calculated for each subject on the basis of their maximum fatigue rate 'k' so that the DMET calculated can be safer to subjects. According to fatigue experiment protocol for each subject, the load was 20% of MVC. The values of load for each subject corresponding to their maximum MVC values are also presented in table 5.
The DMET is also predicted for Seth's model keeping the value of co-contraction factor n = 1.38 at the value of d = 0.5. Fig. 39 represents the DMET for subjects with respect to the value of load f MVC , which is the ratio of external load to the maximum capacity or MVC of a subject, see Tab.dmettable2 for the values of the load for each subject. We can observe in Fig. 39 that DMET for Seth's model (red line) is less than R. Ma's model (green line) and more than L. Ma's model (blue line). We can see that the DMET calculated for each subject is more than the experimented value. It is because DMET is the maximum limit of any human and we did the experiment for each subject up to comfortable exhaustion level. Comfortable exhaustion level means the level at which the subjects want to stop the experiment because of fatigue. It may be possible in this cases that the subjects do not reach their maximum limit but they stop the test, for example, subject number 10, we can see in Tab. 5 that he stopped the test after 3 minutes, after completing 60 cycles but the maximum endurance time is much larger than the experiment duration. For subject 10 the percentage difference between the DMET calculated by Seth's model and experiment duration is 80.2%, which is much higher in comparison to other subjects. L. Ma's model is a static model, that is why maximum endurance time calculated is less than the experimented value. R. Ma's model gives more DMET for each subject in comparison to Seth's DMET model which is much closer to the experimental values. The DMET calculated by Seth's model is 25.9% less than the DMET calculated by R. Ma's model. The DMET calculated for Seth's model is less because we have introduced the co-contraction factor into the model. This gives more approximate value to the experimental data. So the work design according to this model will be safer in comparison to R. Ma's Model, L. Ma's model is for static posture hence, it may not be real to compare for dynamic situations.
V. CONCLUSIONS
The proposed model for dynamic muscle fatigue includes the co-contraction parameter, unlike in any other existing model according to the author's knowledge. The results and analysis of the experimental data validate best of the assumptions made for the proposed model. EMG analysis along with MVC helps to understand the muscle activities, also it justifies the significance of the co-contraction parameter in the proposed dynamic muscle fatigue model (Seth's model). The experimental data also helps in validating the new dynamic muscle fatigue model. The co-contraction factor allowed reducing DMET. The DMET model validation shows that there is a reduction of endurance time in comparison with R. Ma model and in static cases with L. Ma model.
The DMET comparison shows that there is 25.9% reduction in the maximum endurance time in Seth's model as compared to R. Ma's model. When we compare the DMET at dynamic variation factor, d = 0.5, it shows that the time taken by different subjects to complete the fatigue protocols during the experiment are near to the DMET values calculated for maximum values of k and closer to Seth's model. It shows that Seth's DMET model is more safer and better for dynamic condition in comparison to other models. | 4,592 | 2016-01-01T00:00:00.000 | [
"Engineering",
"Medicine"
] |
An Educational Tool based on Virtual Construction Site Visit Game
To enhance the engagement of Civil Engineering students and encourage active learning, a virtual construction site investigation game is developed in the present paper. 3D construction site environment is built based on BIM models and relevant objects common on construction sites are created to enhance the realistic. Navigation and Interactions are developed to enable the students to explore the virtual sites freely and get instant feedbacks. Different modules, such as Questions and Tasks, are developed to exam how well the students master the domain-related knowledge. Unity, a cross-platform game engine, is used as the development platform for this research project. The architecture, mechanism and the implementation are described in detail in this paper. A pedagogical methodology for improving the quality of learning is thus developed by transforming traditional instructional delivery techniques into technology-based active learning. Students’ engagement in the learning process is improved by establishing a contextual connection between ordinary textbook materials and technologies that students use in their daily routines. This new approach enables students to interact, and learn abstract topics in engineering design and construction method. The effectiveness of this active learning method is investigated by the feedback from two groups of students using a questionnaire. The potential benefits of the proposed research are: enhanced understanding of complicated structures; better accessibility to more construction site virtually; more convenient and flexible time for learning practices; and safer site visit with this pre-training tool.
Introduction
In the Department of Civil Engineering in a university, field trips to construction sites are usually arranged every semester.The purpose is to enhance the students' learning of different construction methods, identifying different types of structures, and getting familiar with engineering tasks, etc.It is important to gain practical experiences rather than learning knowledge from the textbooks only.However, finding suitable and convenient construction sites is not easy.Moreover, proper time of visiting construction site is also an extra constraint in terms of availability of the site being visited, and the readiness of proper knowledge engaged to the students.Usually, field trips are organized by the lecturer at the time that related knowledge has been introduced in the class.However, finding a construction site doing the similar construction tasks are not always available.The task schedules and time availability for the students do not match all the time.Moreover, not all the structures are available in the area of the university.A long-distance traveling may be impossible due to all kinds of constraints.
On the other hand, the new generation of students is technology savvy with high knowledge of and interest in social media, mobile technologies, and strategy games (Friedrich et al., 2009).Pan et al. (2006) discussed that using virtual learning applications may result in an efficient and effective learning.It has been indicated by Shirazi and Behzadan (2013) that it is almost impossible to separate students from their technology-enables devices or ask them to think and act differently than how they do outside the classroom.Rather, a more reasonable approach is to find ways to create a seamless transition between the outside world and the classroom environment.
Building Information Modeling (BIM) is a new approach to design, construction, and facilities management in which a digital representation of the building process is used to facilitate the exchange and interoperability of information in a digital format (Eastman et al., 2011).Compared with the traditional CAD technology, BIM is capable of restoring both geometric and rich semantic information of building models, as well as their relationships, to support lifecycle data sharing.BIM has been used to enhance course design and improve student learning outcomes in traditional coursework.For instance, Cory et al. (2010) described the evolution of a construction graphics course facilitated by the use of BIM at Purdue University.Sacks and Barak (2010) restructured their engineering graphics course with principles and applications of BIM in recognition of the fact that three-dimensional (3D) models will be the principal medium for expression and communication of design intent in the civil engineering profession.Barham et al. (2011) evaluated the effectiveness of BIM in enhancing student learning in a structural concrete design course.Hyatt (2011) demonstrated the convergence of lean construction, sustainability, and BIM in an undergraduate construction management scheduling course.
The aim of the present research is to develop a 3D virtual construction environment investigation game, which will enhance learning and teaching in the department of Civil Engineering.The objectives are: (1) to build virtual construction sites by using available resources from previous and current construction site visits; (2) to design a learning and teaching mechanism by providing a visual environment for active learning; and (3) to investigate the feasibility of providing an alternative assessment method for civil engineering students.
Methodology
The framework of the research methodology is shown in Figure 1.A 3D virtual environment is built based on BIM model of a building integrated with domain-related knowledge.Navigation and Interaction are developed to enable the students to explore the virtual sites freely and get instant feedbacks.Different modules are developed, which includes Identification, Questions, and Tasks.Identification module focuses on identification of construction equipment, structure, construction materials, and construction methods and so on.For example, for a steel structure, to identify the main beams, secondary beams of a portal frame; where are the welding, bolts connections.By that way, the students connect abstract concept or description with reality and get visual impression.Question module provides a traditional way to check how much the student mastered the knowledge.Questions with different level of difficulty are designed by summarizing knowledge from textbooks and experiences from the construction site.Task module assigns different tasks to students to accomplish, for example, to remove safety hazard on site and identify what kind of prevention measures should be applied.
Figure 1.Framework of a virtual construction site game design
Geometrical Information Extracted from BIM Model
BIM models are now popularly used in the built environment industry; which provide accurate geometrical information for the whole building.The interoperability aspect of BIM enhances data exchanges between different software packages and application.For example, Revit from Autodesk© is capable to export building information in various format, such as .ifc,.fbx,and other formats that can be directly read by other software packages.In this research, a BIM model of a commercial building is used to provide the basic geometric information to create the construction site.An .fbxfile is exported from Revit and imported to Unity environment, as shown in Figure 2.However, information of materials, texture coordinates and stair cases are missing after the importing; therefore, they are manually added to complete the 3D environment.In addition, other objects common for a construction site are added to make the environment more realistic, as shown in later sections.
Game Engine Architecture
A game engine is a piece of software for creating video games on different operation platforms.The architecture of a game engine is complicated and comprises of a bunch of tools for advanced shading and shadows system, a skeletal animation system, a post processing system such as anti-aliasing and human computer interaction (HCI).
Simulations of a real world can thus range from a vivid character, realistic lighting effects to an attractive scene.
The main components of a general game engine architecture include a rendering engine, a physics engine and a core engine bounding two engines together and handling interactions between them.Rendering engine is responsible for visual effects with high reality degree and physics engine can further improve the simulation of the real world by applying physical formulas on the environment to provide realistic feedbacks for users' behavior.Details are discussed as follows and the general architecture is shown in Figure 3.A rendering engine is one of the largest subsystems in the architecture, which is responsible for drawing images on the display.There are many different design philosophies of implementing a rendering engine and one of them is based upon layer design philosophy.The lowest layer of a rendering engine is a graphics device interface.Generally two graphics application programming interfaces (API) are in use, DirectX 3D for windows and OpenGL for all platforms.Additionally, there should be optimization algorithms for scene culling since a camera cannot 'see' all the objects outside of its frustum.Furthermore, a rendering engine should support a wide range of visual effects including advanced lighting and environment mapping, shadows and post processing.
(2) Physics Engine A physics engine is responsible for detecting collisions applying physics on objects.Without it, two solid objects would intersect with each other and the simulation of the world becomes odd.Nowadays, many game companies deploy 3rd party's physics engines which can either be used as stand-alone software or be integrated into a game engine via APIs.One of those commonly in user free physics engine is PhysX, which is provided by NVIDIA.It is integrated into many commercial game engines such as Unreal Engine 3/4 and Unity 5. Unity5 is used in the present paper as a standard kit of game development according to the development cycle and basic requirements of this project.
3) Core Engine
A core engine provides a collection of utilities and is the main entry of a game.For example, a game supplies a set of 3D geometrical data to a core engine.The core engine first calculates its position via the physics engine and then employs the rendering engine to render it on screen within the game loop.Additionally, it is responsible for memory management to ensure high speed memory allocations and avoid memory fragmentation.Furthermore, a core engine shall offer at least one math library to implement linear and affine algebras as well as calculus.
Implementation Using Unity 5
Unity, a cross-platform game engine, is the main development platform for our research project.Unity 5, the newest version released on March 3, 2015, added a large amount of advanced real time global illumination techniques based upon Geometrics Enlighten technique (Martin, 2011).Additionally, it added the features of physically based shaders (Harris et al., 2002), audio mixer, High dynamic range (HDR) reflection probes (Waese, 2009) and Physx3.3.
(1) Mechanism The mechanism is divided into two parts, the Question module and the Task module.The main structure is shown in Figure 4.The question module provides chances to answer questions during exploration to test students' academic knowledge.Signs are set up for spots where questions will pop up upon the user getting close to them.When a question is triggered, an answering panel interface will be displayed in the center of the screen.Optional choices will be shown for the students to choose.Hints are available to help the students answer the questions.The function of the task module is similar to the question module, but is conducted in a more vivid way.Instead of choosing options from a list, the students are required to use tools or trigger some emergency to solve problem that they encountered.Two major types of tasks are designed as follows: Dominant task: If students can choose appropriate tools and put them in correct location, it means that the task has been accomplished successfully, otherwise, emergency may happen such as the avatar will fall down or getting an electric shot, and game is over.
Recessive task: In some case, the user needs to trigger some action by themselves, for example, moving construction materials on site to designated locations.
(2) Game Development The development of the game mainly focuses on scene construction and GUI design.For the scene construction, partially-constructed site is created.As explained in Subsection 3.1, a BIM model of the building is imported from Revit to Unity in the form of discrete vertices.After that, physics effects are added to the raw models constructed from those vertices to avoid walking through the walls.Then textures and materials are defined for objects in the scene such as brick, cement, etc. because those data are missing after the importing.Moreover, to improve the reality, shading system is built and the camera is scripted so that objects and shadows can appear differently from various viewpoints.In this research, a third-person viewpoint is set as default to avoid losing focus during exploration.Users can also switch between third-person and first-person viewpoints.The model of an avatar along with the animation matrices is chosen from the embedded asset store of Unity, and C# scripts is used to develop the action management in the scene.Screenshots of the site are illustrated in Figure 5. Occlusion Query is a query of how many pixels would have been drawn on the texture after the end of graphics pipeline, the alpha test, the stencil test and the depth test.The query of visible pixels is implemented in extensions in OpenGL ( 2015) and incorporated into DirectX 9.The process of basic occlusion culling is intuitive, as shown below: 1. Render the object with color buffer and depth buffer writing disabled 2. Render the game object's bounding box with depth test enabled 3. Make Occlusion Query for the rendered bounding box 4. If the number of visible pixels is smaller than a threshold, do not render the actual game object
Otherwise, Render it
To make occlusion culling as fast as possible, opaque game objects are sorted from front to back according to the distance to the camera.Those objects closer to the camera would be considered as occluders while other objects would be treated as occludees.The occluders shall not be accounted for occlusion culling processes.One of the possible ways to sort game objects is Binary Space Partitioning (BSP) tree (Schumacker et al., 1969).BSP tree is also used as a scene graph managing the virtual world in OGRE, an open-source 3D graphics engine.Additionally, back in the early days where Z-buffer algorithm is not cooperated into the graphics pipeline, BSP tree is also used to implement the painter's algorithm, where the farthest 3D object is rendered first and then the second farthest one and so on.The construction of a BSP tree is a representation about a recursive division of n-dimensional complicated objects into smaller convex ones.Once, upon the finishes of the construction of BSP tree, rendering engine would effectively draw objects onto our screen by querying BSP tree with the current position of the player.If the camera is in front a node's partitioning plane, the node's front plane would be visited recursively until it is a leaf node.Then, all the polygons coplanar with this dividing plane would be rendered and finally the back node would be visited recursively.This traversal is similar if the camera is behind the partitioning plane.It usually cooperates with scanline approach to eliminate the 3D objects that are not visible to the camera.
Graphic User Interface
A menu is designed for better interaction (Figure 6(a)).At the beginning of the game, the users have the options to start a new game, resume a previous game, or quit the game.In addition, an Option button provides the users to change the setting of the game, which includes game setting, sound setting and graphic setting.Game setting could be used to change the features of game.For example, users could change the display of hints.All the features that are directly related to the game play could be found in this game setting menu.Sound setting helps players to adjust the volume of the background music, the effect sound and other sounds.In addition, it is also used to turn on or off the background music and other sounds.The graphic setting is mainly used to change the graphics quality.If the player's computer does support running the game with high graphic quality or they decided to sacrifice the graphic quality for a smoother game experience, they can switch to mid-quality or low-quality modes.A brief introduction is also given at the beginning at a new game to explain how to play this game.) is placed on the lower-right corner of the screen.Upon clicking, a window of inventory will be displayed to show the tools that can help the users accomplishing the tasks.A mini map (Figure 6(C)) is designed to navigate the users to avoid losing in the virtual environment.A camera is created for the mini map, through which players can see the game world in a bird's eye view.Moreover, the camera for the mini map keeps tracking of the player in most circumstances.It is worth to be mentioned that the projection for the camera of mini map is orthogonal since the mini map is a simplified 2D image of the game world.Unity renders the scene into a texture which is going to be displayed on the mini map GUI.In addition, locations of questions are marked in the mini map.By clicking the icons shown in the mini map, the users could immediately be brought to the place where a question is set.
Questions and Task Design
According to the different construction sites, different types of questions are design in terms of building structures, construction methods, construction safety management, etc.For instance, in concrete construction, most questions are designed according to the concrete material and relevant structure.The number of options of a question is decided by the level of difficulty of the questions.Figure 7 shows the question answering scheme.In order to help the students gain more information, the hints about the solution are given for every question.If the user chooses to skip, the medal of the question will turn to gray which means the question is not accessible again in the current round of answering.
If the correct answer is chosen, one medal will be given to the user and the question sign will turn to gray.The same question will become unavailable after two trails of answering.Figure 8 shows an example of answering a question.First of all, when the user approaches to a medal (Figure 8(a)), a question will pop up (Figure 8(b)).If the user needs some hint, by clicking the Tips button, detailed information is provided (Figure 8(c)).In addition, videos are embedded in the game to be watched by the users and questions are given based on the contents shown (Figure 9).The video can be re-watched for a better understanding of the questions.The procedure of accomplishing a task is shown in Figure 10.Initially, the user can choose different tasks that need to be accomplished at the beginning of the game.Figure 11 shows an example of a safety hazard on the construction site, where an opening is found on the floor without any safety protection measures.If the user continues walking, the avatar will fall into the opening and the game is over.In that case, the user needs to open the Tool Box and select appropriate tools, Fences, and put it around the opening.Therefore, the safety hazard is removed and there is no danger for the user any more.When the total number of correctly answered questions and accomplished tasks reaches the threshold, the user will be upgraded to the next level, which is going up to the higher floor in the building, as shown in Figure 13.The difficulty level of the questions is also increased.Tasks that have not been accomplished are taken into the next level as well.
Conclusions and Future Work
This paper proposes a pedagogical methodology for improving the quality of learning through transforming traditional instructional delivery techniques into technology-based learning.A 3D virtual construction environment investigation game is developed to enhance learning and teaching in the department of Civil Engineering.By answering questions and carrying on tasks, the students get instant feedback of how well they mastered the domain-related knowledge and will be encouraged to seek answers after finishing the play.By this way, students' engagement in the learning process will be improved by establishing a contextual connection between ordinary textbook materials and technologies that students use in their daily routines.
Students of different levels of Civil Engineering Department were invited to evaluate the game and give feedbacks using a questionnaire.Positive feedback were given showing that students are interested in such kind of games and encouraged to study more details and play the game again for better performance.
A complementary assessment method for civil engineering students can be provided based on this game.Game scenes and questions related to different civil engineering domains can be designed based on the characteristics of different modules.Our future work will be adding more questions and tasks to the game and all students from the built environment cluster will be invited to play the game and give feedbacks.Further statistical analysis can be applied to investigate the effectiveness of this proposed method Figure 2. Creating 3D construction site from Revit to Unity
Figure 3 .
Figure 3. General game engine architecture
Figure 5 .
Figure 5. Designed construction site scenes
Figure
Figure 6.GUI Design
Figure 9 .
Figure 9. Answering questions related to a video
Figure 12 .
Figure 12.Other safety hazard on a construction site
Figure
Figure 13.Level up | 4,744.6 | 2017-07-09T00:00:00.000 | [
"Computer Science"
] |
SINAI at SemEval-2019 Task 6: Incorporating lexicon knowledge into SVM learning to identify and categorize offensive language in social media
Offensive language has an impact across society. The use of social media has aggravated this issue among online users, causing suicides in the worst cases. For this reason, it is important to develop systems capable of identifying and detecting offensive language in text automatically. In this paper, we developed a system to classify offensive tweets as part of our participation in SemEval-2019 Task 6: OffensEval. Our main contribution is the integration of lexical features in the classification using the SVM algorithm.
Introduction
In recent years, with the emergence of social media, the user-generated content on the Web has grown exponentially. This content has the potential to be transmitted quickly, reaching anywhere in the world in a matter of seconds. Due to the exchange of ideas between users, we find not only positive comments, but also a wide diffusion of aggressive and potentially harmful content. Consequently, this type of remarks affects millions of online users. In fact, it has been reported that these incidents have not only created mental and psychological agony to the online users, but have forced people to deactivate their accounts and, in severe cases like cyberbullying, to commit suicides (Hinduja and Patchin, 2018). One of the strategies used to deal with aggressive behavior in social media is to monitor or report this type of content. However, this strategy is not entirely feasible due to the huge amount of data that is generated daily by users. Therefore, it is necessary to develop systems capable of identifying this type of content on the Web.
In order to tackle this problem, firstly it is important to define the toxic language. The toxic language can be broadly divided into two categories: hate speech and offensive language (Cheng, 2007;Davidson et al., 2017;Gaydhani et al., 2018). According to Cambridge Dictionary, hate speech is defined as "public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation". Offensive language is defined as the text which uses hurtful, derogatory or obscene terms made by one person to another person.
In this paper, we present the system we developed as part of our participation in SemEval-2019 Task 6 OffensEval: Identifying and Categorizing Offensive Language in Social Media) (Zampieri et al., 2019b). In particular, we participated in subtask A: Offensive language identification. It is a binary classification task and consists of identifying if a post contains or not offense or profanity language.
The rest of the paper is structured as follows. In Section 2, we explain the data used in our methods. Section 3 introduces the lexical resources used for this work. Section 4 presents the details of the proposed systems. In Section 5, we discuss the analysis and evaluation results for our system. We conclude in Section 6 with remarks on future work.
Data
To run our experiments, we used the English dataset provided by the organizers in SemEval19 Task 6 OffensEval: Identifying and Categorizing Offensive Language in Social Media (Zampieri et al., 2019a).
The datasets contain tweets with five fields. Each tweet comprises an identifier (id), the tweet text (tweet), field for subtask A (subtask a), field for subtask B (subtask b) and field for subtask C (subtask c). Since we have only participated in sub-task A, we are interested in the fields id, tweet and subtask a.
In sub-task A, we are interested in the identification of offensive tweets and tweets containing any form of (untargeted) profanity. In this sub-task, there are 2 categories in which the tweet could be classified: (NOT) Not Offensive -This post does not contain offense or profanity.
(OFF) Offensive -This post contains offensive language or a targeted (veiled or direct) offense. In the annotation, this category includes insults, threats, and posts containing profane language and swear words.
During pre-evaluation period, we trained our models on the train set, and evaluated our different approaches on the trial set. During evaluation period, we trained our models on the train and trial sets, and tested the model on the test set. Table 1 shows the number of tweets per class for English language used in our experiments. Total Train 8840 4400 13,240 Trial 243 77 320 Test --860
Resources
For this subtask A, we used different lexicons that we explain in detail below. VADER (Valence Aware Dictionary and sEntiment Reasoner) (Gilbert, 2014). The VADER sentiment lexicon is a rule-based sentiment analysis tool. This is sensitive both to the polarity and the intensity of sentiments expressed in social media contexts, and is also generally applicable to sentiment analysis in other domains. VADER has been validated by multiple independent human judges. The tool return four values: positive, negative, neutral and compound. The first three scores represent the proportion of text that falls in these categories. The compound score is computed by summing the valence scores of each word in the lexicon, adjusted according to the rules, and then normalized between -1 (most extreme negative) and +1 (most extreme positive).
Offensive/Profane Word List (von Ahn, 2009). A list of 1,384 English terms (unigrams and bigrams) that could be found offensive. The list contains some words that many people won't find offensive, but it's a good start for anybody wanting to detect offensive or profane terms.
System Description
In this section, we describe the systems developed for the subtask A in OffensEval task. During our experiments, scikit-learn machine learning in Python library (Pedregosa et al., 2011) was used for benchmarking. A general scheme of the system can be seen in Figure 1.
Prepared corpus
Step 2
Data Preprocessing
In first place, we preprocessed the corpus of tweets provided by the organizers. We applied the following preprocessing steps: the documents were tokenized using NLTK, the URLs and mentions users are removed and all letters were converted to lower-case.
Feature Extractor
Converting sentences into feature vectors is a focal task of supervised learning based sentiment analysis. Therefore, the features we chose in our system can be divided into two parts: statistic features and lexical features.
• Statistic features. We employed the features that usually perform well in text classification: Term Frequency (TF) taking into account unigrams.
• Lexical features. As we explained in Section 3, we used two lexicons to obtain our features in the following way: 1. VaderSentiment. We use the sentiment.vader module 1 provided by the Natural Language Toolkit (NLTK).
With this module, we analyze each sentence and we obtained a vector of four scores: negative sentiment, positive sentiment, neutral sentiment and compound polarity. 2. Offensive/Profane Word List. We checked the presence of each word of offensive/profane word list in the tweet and if it exists we assigned 1 as Confidence Value (CV). Then, we summed the CV of all the words finding in the tweet and this value is divided for the total number of words of tweet. As a result, we obtained a parameter that will be used as a feature applied for the classifier.
Classifier
The concatenation of the features described before are applied for the classification using the SVM algorithm. We selected the Linear SVM formulation, known as C-SVC and the value of the C parameter was 1.0.
Analysis of results
During the pre-evaluation phase we carried out several experiments and the best experiment were taken into account for the evaluation phase. The system has been evaluated using the official competition metric, the macro-averaged F1-score. The metric has been computed as follows: The results of our participation in the subtask A of OffensEval task during the evaluation phase can be seen in Table 2. In relation to our results, it should be noted that we achieve better score in case of the class NOT offensive (F1: 0.88). However, our system is not able to classify well the OFF class (F1: 0.56). This issue may be due to overtraining for the NOT class since as we can see in the Table 1 of Section 2, around 67% of the total tweets belong to that class in the training set in comparison to 33% of OFF class.
With respect to other users, we were ranked in the 68th position as can be seen in Table 3.
Conclusions and Future Work
In this paper, we present the system we have developed as part of our participation in SemEval-2019 Task 6: OffensEval: Identifying and Categorizing Offensive Language in Social Media. Specifically, we have participated in subtask A. To solve this task, we have developed a classifier system based on SVM incorporating lexical features from a polarity lexicon and a offensive/profane word list.
Our next study will focus on exploring more features from lexicons because in SemEval-2018 Task 1 (Mohammad et al., 2018), most of the topperforming teams relied on features derived from existing affective lexicons. Also, we will continue working on classifying offensive tweets because today it is a very important task due to the large amount of offensive data generated by users on the Web and we need to prevent the serious consequences it can have on other users. | 2,123.8 | 2019-06-01T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Minimal Dirac Neutrino Mass Models from $U(1)_R$ Gauge Symmetry and Left-Right Asymmetry at Colliders
In this work, we propose minimal realizations for generating Dirac neutrino masses in the context of a right-handed abelian gauge extension of the Standard Model. Utilizing only $U(1)_R$ symmetry, we address and analyze the possibilities of Dirac neutrino mass generation via (a) \textit{tree-level seesaw} and (b) \textit{radiative correction at the one-loop level}. One of the presented radiative models implements the attractive \textit{scotogenic} model that links neutrino mass with Dark Matter (DM), where the stability of the DM is guaranteed from a residual discrete symmetry emerging from $U(1)_R$. Since only the right-handed fermions carry non-zero charges under the $U(1)_R$, this framework leads to sizable and distinctive Left-Right asymmetry as well as Forward-Backward asymmetry discriminating from $U(1)_{B-L}$ models and can be tested at the colliders. We analyze the current experimental bounds and present the discovery reach limits for the new heavy gauge boson $Z^{\prime}$ at the LHC and ILC. Furthermore, we also study the associated charged lepton flavor violating processes, dark matter phenomenology and cosmological constraints of these models.
Introduction
Neutrino oscillation data [1] indicates that at-least two neutrinos have tiny masses. The origin of the neutrino mass is one of the unsolved mysteries in Particle Physics. The minimal way to obtain the non-zero neutrino masses is to introduce three right-handed neutrinos that are singlets under the Standard Model (SM). Consequently, Dirac neutrino mass term at the tree-level is allowed and has the form: L Y ⊃ y ν L L Hν R . However, this leads to unnaturally small Yukawa couplings for neutrinos (y ν ≤ 10 −11 ). There have been many proposals to naturally induce neutrino mass mostly by using the seesaw mechanism [2][3][4][5][6] or via radiative mechanism [7]. Most of the models of neutrino mass generation assume that the neutrinos are Majorana 1 type in nature. Whether neutrinos are Dirac or Majorana type particles is still an open question. This issue can be resolved by neutrinoless double beta decay experiments [10]. However, up-to-now there is no concluding evidence from these experiments.
However, it is more appealing to forbid all these unwanted terms utilizing simple gauge extension of the SM instead of imposing discrete or continuous global symmetries. This choice is motivated by the fact that contrary to gauge symmetries, global symmetries are known not to be respected by the gravitational interactions [22][23][24][25][26].
In this work, we extend the SM with U (1) R gauge symmetry, under which only the SM right-handed fermions are charged and the left-handed fermions transform trivially. This realization is very simple in nature and has several compelling features to be discussed in great details. Introducing only the three right-handed neutrinos all the gauge anomalies can be canceled and U (1) R symmetry can be utilized to forbid all the unwanted terms to build desired models of Dirac neutrino mass. Within this framework, by employing the U (1) R symmetry we construct a tree-level Dirac seesaw model [27] and two models where neutrino mass appears at the one-loop level. One of these loop models presented in this work is the most minimal model of radiative Dirac neutrino mass [28] and the second model uses the scotogenic mechanism [29] that links two seemingly uncorrelated phenomena: neutrino mass with Dark Matter (DM). As we will discuss, the stability of the DM in the latter scenario is a consequence of a residual Z 2 discrete symmetry that emerges from the spontaneous breaking of the U (1) R gauge symmetry.
Among other simple possibilities, one can also extend the SM with U (1) B−L gauge sym-Quarks Q Li (3, 2, 1 6 , 0) u Ri (3, 1, 2 3 , R H ) d Ri (3, 1 Dirac neutrino mass models in details, along with the particle spectrum and charge assignments. In Section 4, we discuss the running of the U (1) R coupling. Charged lepton flavor violating processes are analyzed in Section 5. We have also done the associated dark matter phenomenology in Section 6 for the scotogenic model. Furthermore, we analyze the collider implications in Section 7. In Section 8, we study the constraints from cosmological measurement and finally, we conclude in Section 9.
Framework
Our framework is a very simple extension of the SM: an abelian gauge extension under which only the right-handed fermions are charged. Such a charge assignment is anomalous, however, all the gauge anomalies can be canceled by the minimal extension of the SM with just three right-handed neutrinos. Within this framework the minimal choice to generate the charged fermion masses is to utilize the already existing SM Higgs doublet, hence the associated Yukawa couplings have the form: L Y ⊃ y u Q L Hu R + y d Q L Hd R + y e L L H R + h.c. (2.1) As a result, the choice of the U (1) R charges of the right-handed fermions of the SM must be universal and obey the following relationship: Here R k represents the U (1) R charge of the particle k. Hence, all the charges are determined once R H is fixed, which can take any value. The anomaly is canceled by the presence of the right-handed neutrinos that in general can carry non-universal charge under U (1) R . Under the symmetry of the theory, the quantum numbers of all the particles are shown in Table I.
In our set-up, all the anomalies automatically cancel except for the following two: 3) This system has two different types of solutions. The simplest solution corresponds to the case of flavor universal charge assignment that demands: R ν1,2,3 = R H which has been studied in the literature [38][39][40][41][42]. In this work, we adopt the alternative choice of flavor non-universal solution and show that the predictions and phenomenology of this set-up can be very different from the flavor universal scenario. We compare our model with the other U (1) R extensions, as well as U (1) B−L extensions of the SM. As already pointed out, a different charge assignment leads to distinct phenomenology in our model and can be distinguished in the neutrino and collider experiments.
Since SM is a good symmetry at the low energies, U (1) R symmetry needs to be broken around O(10) TeV scale or above. We assume that U (1) R gets broken spontaneously by the VEV of a SM singlet χ(1, 1, 0, R χ ) that must carry non-zero charge (R χ = 0) under U (1) R .
As a result of this symmetry breaking, the imaginary part of χ will be eaten up by the corresponding gauge boson X µ to become massive. Since EW symmetry also needs to break down around the O(100) GeV scale, one can compute the masses of the gauge bosons from the covariant derivatives associated with the SM Higgs H and the SM singlet scalar χ: As a consequence of the symmetry breaking, the neutral components of the gauge bosons will all mix with each other. Inserting the following VEVs: one can compute the neutral gauge boson masses as: Where, r v = Rχvχ R H v H and the well-known relation tan θ w = g /g and furthermore v H = 246 GeV. In the above mass matrix denoted by M 2 , one of the gauge bosons remains massless, which must be identified as the photon field, A µ . Moreover, two massive states appear which are the SM Z-boson and a heavy Z -boson (M Z < M Z ). The corresponding masses are given by: here we define: Which clearly shows that for g R = 0, the mass of the SM gauge boson is reproduced: To find the corresponding eigenstates, we diagonalize the mass matrix as: (2.14) The couplings g ψ of all the fermions in our theory are collected in Table II and will be useful for our phenomenological study performed later in the text. Note that the couplings of the left-handed SM fermions are largely suppressed compared to the right-handed ones, since they are always proportional to sin θ X and θ X must be small and is highly constrained by the experimental data.
Based on the framework introduced in this section, we construct various minimal models of Dirac neutrino masses in Sec 3 and study various phenomenology in the subsequent sections.
Dirac Neutrino Mass Models
By adopting the set-up as discussed above in this section, we construct models of Dirac neutrino masses. Within this set-up, if the solution R ν i = R H is chosen which is allowed by the anomaly cancellation conditions, then tree-level Dirac mass term y ν v H ν L ν R is allowed and observed oscillation data requires tiny Yukawa couplings of order y ν ∼ 10 −11 . This is expected not to be a natural scenario, hence due to aesthetic reason we generate naturally small Dirac neutrino mass by exploiting the already existing symmetries in the theory. This requires the implementation of the flavor non-universal solution of the anomaly cancellation conditions, in such a scenario U (1) R symmetry plays the vital role in forbidding the direct Dirac mass term and also all Majorana mass terms for the neutrinos.
In this section, we explore three different models within our framework where neutrinos receive naturally small Dirac mass either at the tree-level or at the one-loop level. Furthermore, we also show that the stability of DM can be assured by a residual discrete symmetry resulting from the spontaneous symmetry breaking of U (1) R . In the literature, utilizing U (1) R symmetry, two-loop Majorana neutrino mass is constructed with the imposition of an additional Z 2 symmetry in [38,39] and three types of seesaw cases are discussed, standard type-I seesaw in [40], type-II seesaw in [41] and inverse seesaw model in [42]. In constructing the inverse seesaw model, in addition to U (1) R , additional flavor dependent U(1) symmetries are also imposed in [42]. In all these models, neutrinos are assumed to be Majorana particles which is not the case in our scenario.
Tree-level Dirac Seesaw
In this sub-section, we focus on the tree-level neutrino mass generation via Dirac seesaw mechanism [27] 2 . For the realization of this scenario, we introduce three generations of vector-like fermions that are singlets under the SM: N L,R (1, 1, 0, R N ). In this model, the quantum numbers of the multiplets are shown in Table III and the corresponding Feynman diagram for neutrino mass generation is shown in Fig. 1. This choice of the particle content allows one to write the following Yukawa coupling terms relevant for neutrino mass generation: Here, we have suppressed the generation and the group indices. And the Higgs potential is given by: When both the U (1) R and EW symmetries are broken, the part of the above Lagrangian responsible for neutrino mass generation can be written as: Where M ν,N is a 6 × 6 matrix and, since ν R 1 carries a different charge we have y χ i1 = 0. The bare mass term M N of the vector-like fermions can in principle be large compared to the two VEVs, M N v H,χ , assuming this scenario the light neutrino masses are given by: Assuming v χ = 10 TeV, y H = y χ ∼ 10 −3 , to get m ν = 0.1 eV one requires M N ∼ 10 10 GeV.
Dirac neutrino mass generation of this type from a generic point of view without specifying the underline symmetry is discussed in [17].
In this scenario two chiral massless states appear, one of them is ν R 1 , which is a consequence of its charge being different from the other two generations. In principle, all three generations of neutrinos can be given Dirac mass if the model is extended by a second SM singlet χ (1, 1, 0, −6). When this field acquires an induced VEV all neutrinos become massive. This new SM singlet scalar, if introduced, gets an induced VEV from a cubic coupling of the form: µχ 2 χ + h.c.. Alternatively, without specifying the ultraviolet completion of the model, a small Dirac neutrino mass for the massless chiral states can be generated via the dimension-5 operator N L ν R χ χ /Λ once U (1) R is broken spontaneously.
Vector-like fermion N L,R (1, 1, 0, 1) Table III: Quantum numbers of the fermions and the scalars in Dirac seesaw model.
Simplest one-loop implementation
In this sub-section, we consider the most minimal [28] model of radiative Dirac neutrino mass in the context of U (1) R symmetry. Unlike the previous sub-section, we do not introduce any vector-like fermions, hence neutrino mass does not appear at the tree-level. All treelevel Dirac and Majorana neutrino mass terms are automatically forbidden due to U (1) R symmetry reasons. This model consists of two singly charged scalars S + i to complete the loop-diagram and a neutral scalar χ to break the U (1) R symmetry, the particle content with their quantum numbers is presented in Table IV. With this particle content, the gauge invariant terms in the Yukawa sector responsible for generating neutrino mass are given by: And the complete Higgs potential is given by: By making use of the existing cubic term V ⊃ µS + 2 S − 1 χ + h.c. one can draw the desired Fig. 2. The neutrino mass matrix in this model is given by: Here θ represents the mixing between the singly charged scalars and m H i represents the mass of the physical state H + i . Here we make a crude estimation of the neutrino masses: This is the most minimal radiative Dirac neutrino mass mechanism which was constructed by employing a Z 2 symmetry in [44] and just recently in [28,33] by utilizing U (1) B−L symmetry. As a result of the anti-symmetric property of the Yukawa couplings y S 1 , one pair of chiral states remains massless to all orders, higher dimensional operators cannot induce mass to all the neutrinos. As already pointed out, neutrino oscillation data is not in conflict with one massless state.
Scotogenic Dirac neutrino mass
The third possibility of Dirac neutrino mass generation that we discuss in this sub-section contains a DM candidate. The model we present here belongs to the radiative scotogenic [29] class of models and contains a second Higgs doublet in addition to two SM singlets.
Furthermore, a vector-like fermion singlet under the SM is required to complete the oneloop diagram. The particle content of this model is listed in Table V and the associated loop-diagram is presented in Fig. 3. The relevant Yukawa interactions are given as follows: And the complete Higgs potential is given by: (3.23) The SM singlet S and the second Higgs doublet η do not acquire any VEV and the loop-diagram is completed by making use of the quartic coupling V ⊃ λ D η † HχS + h.c..
Here for simplicity, we assume that the SM Higgs does not mix with the other CP-even states, consequently, the mixing between S 0 and η 0 originates from the quartic coupling λ D (and similarly for the CP-odd states). Then the neutrino mass matrix is given by: Where the mixing angle θ ( θ ) between the CP-even (CP-odd) states are given by: For a rough estimation we assume no cancellation among different terms occurs. Then by setting m H = 1 TeV, M N = 10 3 TeV, λ D = 0.1, v χ = 10 TeV, y η,S ∼ 10 −3 one can get the correct order of neutrino mass m ν ∼ 0.1 eV.
Since ν R 1 carries a charge of −5, a pair of chiral states associated with this state remains massless. However, in this scotogenic version, unlike the simplest one-loop model presented in the previous sub-section, all the neutrinos can be given mass by extending the model further. Here just for completeness, we discuss a straightforward extension, even though this is not required since one massless neutrino is not in conflict with the experimental data. If the model defined by Table V is extended by two SM singlets χ (1, 1, 0, −6) and a S (1, 1, 0, 11 2 ), all the neutrinos will get non-zero mass. The VEV of the field χ can be induced by the allowed cubic term of the form µχ 2 χ + h.c. whereas, S does not get any induced VEV.
Here we comment on the DM candidate present in this model. As aforementioned, we do not introduce new symmetries by hand to stabilize the DM. In search of finding the unbroken symmetry, first, we rescale all the U (1) R charges of the particles in the theory given in Table V including the quark fields in such a way that the magnitude of the minimum charge is unity. From this rescaling, it is obvious that when the U (1) R symmetry is broken spontaneously by the VEV of the χ field that carries six units of rescaled charge leads to: However, since the SM Higgs doublet carries a charge of two units under this surviving Z 6 symmetry, its VEV further breaks this symmetry down to: Z 6 → Z 2 . This unbroken discrete Z 2 symmetry can stabilize the DM particle in our theory. Under this residual symmetry, all the SM particles are even, whereas only the scalars S, η and vectorlike fermions N L,R are odd and can be the DM candidate. Phenomenology associated with the DM matter in this scotogenic model will be discussed in Sec. 6.
Running of the U (1) R Gauge Coupling
In this section, we briefly discuss the running of the U (1) R gauge coupling g R , at the one-loop level in our framework. The associated β-function can be written as: Where the coefficient b R can be calculated from [45]: The first (second) sum is over the fermions (scalars), f i (s i ). Here, κ = 1/2 for Weyl fermions, N g is the number of fermion generations, η = 2 for complex scalars and S 2 are the Dynkin indices of the representations with the appropriate multiplicity factors. By solving Eq. (4.27), the Landau pole can be found straightforwardly: Fig. 4 for the three different models discussed in this work. As expected, the higher the value of g R , smaller the Λ Landau gets.
Lepton Flavor Violation
In this section, we pay special attention to the charged lepton flavor violation (cLFV) which is an integral feature of these Dirac neutrino mass models. These lepton flavor violating processes provide stringent constraints on TeV-scale extensions of the standard model and, as a consequence put restrictions on the free parameters of our theories. For the first model we discussed, where neutrino masses are generated via Dirac seesaw mechanism, the cLFV decay rates induced by the neutrino mixings (cf. The cLFV decay processes α → β + γ arise from one-loop diagrams are shown in the SU (2) L singlet charged scalars (H ± 1,2 ). However, the charged scalar S ± 1 determines the chirality of the initial and final-state charged leptons to be left-handed, whereas S ± 2 mediated process fixes the chirality to be right-handed and hence there will be no interference between these two contributions. The Yukawa term y S 1 is anti-symmetric in nature, whereas y S 2 has completely arbitrary elements in the second and third rows (recall the restriction y S 2 i1 = 0). We can always make such a judicious choice that no more than one entry in a given row of y S 2 can be large and thus we can suppress the contribution from the charged scalar H ± 2 for the cLFV processes. The expression for α → β + γ decay rates can be expressed as 3 : Similarly, we analyze the major cLFV processes in scotogenic Dirac neutrino mass model.
The representative Feynman diagram for the cLFV process α → β + γ is shown in Fig. 5 (right diagram). Here also, charged Higgs H ± , which is the part of the SU (2) L doublet η, mainly contributes to the cLFV process α → β + γ (cf. Fig. 5). The decay rate for α → β + γ solely depends on the two mass terms m H + , m N and Yukawa term y η . The decay width expression for this process can be written as: Here, t = m 2 F /m 2 B , and the function f B (t) is expressed as [46,47] f In Fig. 7, we have shown the branching ratio predictions for the different cLFV processes: µ → e + γ (top left), τ → e + γ (top right) and τ → µ + γ (bottom) as a function of mass (m H + ) in scotogenic one-loop Dirac neutrino mass model for three benchmark values of Yukawas: y η αi y η * βi = 10 −1 , 10 −2 and 10 −3 . For our analysis, we set the vector-like fermion mass m N to be 5 TeV. The µ → eγ process imposes the most stringent bounds. In this setup, for the Yukawas: y η αi y η * βi = 10 −1 , 10 −2 and 10 −3 , we get charged Higgs mass bounds to be m H + = 3.1 TeV, 4.6 TeV and 5 TeV respectively. As we can see from Fig. 7, most of the parameter space in this model is well-consistent with these cLFV processes and which can be testable at the future experiments. We have shown the future projection reach for these cLFV processes by red dashed lines in Fig. 7.
Dark Matter Phenomenology
In this section, we briefly discuss the Dark Matter phenomenology in the scotogenic Dirac neutrino mass model. As aforementioned, in this model, a Z 2 subgroup of the original U (1) R symmetry remains unbroken that can stabilize the DM particle. Under this residual symmetry, all the SM particles are even, whereas only the scalars S, η and vector-like Dirac fermion Ref. [52] in a different set-up and corresponding study has been done for the neutral singlet scalar, S in Ref. [53,54]. In the following analysis, we consider N 1 to be the lightest among all of these particles, hence serves as a good candidate for DM (for simplicity we will drop the subscript from N 1 in the following). We aim to study the DM phenomenology associated with the vector-like Dirac fermion N L,R here. Due to Dirac nature of the dark matter, the phenomenology associated with it is very different from the Majorana fermionic dark matter scenario [55].
In our case, N pairs can annihilate through s-channel Z exchange process to a pair of SM fermions and right-handed neutrinos. Furthermore, if m DM > m Z , then N may also annihilate directly into pairs of on-shell Z bosons, which subsequently decay to SM fermions. It can also annihilate to SM fermions and right-handed neutrinos via t− channel scalar (S, η 0 , η + ) exchanges. The representative Feynman diagrams for the annihilation of DM particle are shown in Fig. 8. It is important to mention that for the Majorana fermionic dark matter case, the annihilation rate is p− wave (∼ v 2 ) suppressed since the vector coupling to a self-conjugate particle vanishes, on the contrary, the annihilation rate is not suppressed for the Dirac scenario (s-wave). The non-relativistic form for this annihilation cross-section can be found here [58]. In Fig. 9, we analyze the dark matter relic abundance as a function of dark matter mass m DM for various gauge couplings g R (left) and Z boson masses (right). In addition to the relic density, we also take into account the constraints from DM direct detection experiments. In case of Majorana fermionic dark matter, at the tree-level, the spin-independent DM-nucleon scattering cross-section vanishes. However, at the loop-level, the spin-independent operators can be generated and hence it is considerably suppressed.
The dominant direct detection signal remains the spin-dependent DM-nucleon scattering cross-section which for the Majorana fermionic dark matter is four times that for the Diracfermionic dark matter case. In general, the Z interactions induce both spin-independent (SI) and spin-dependent (SD) scattering with nuclei. The representative Feynman diagram for the DM-nucleon scattering is shown in Fig. 10. Particularly, in the scotogenic Dirac neutrino mass model, DM can interact with nucleon through t− channel Z exchange. Hence, Figure 11: Spin-independent dark matter-nucleon scattering cross-section, σ (in pb) as a function of the dark matter mass m DM with different gauge coupling g R = 0.2, 0.277.
Here we set m Z = 10 TeV. Yellow, blue and green color solid lines represent current direct detection cross-section limit from LUX-2017 [59], XENON1T [60] and PandaX-II (2017) [61] experiment respectively. large coherent spin-independent scattering may occur since both dark matter and the valence quarks of nucleons possess vector interactions with Z and this process is severely constrained by present direct detection experiment bounds. The DM-nucleon scattering cross-section is estimated in Ref. [58]. In Fig. 11, we analyze the spin-independent dark matter-nucleon
Collider Implications
Models with extra U (1) R implies a new Z neutral boson, which contains a plethora of phenomenological implications at colliders. Here we mainly focus on the phenomenology of the heavy gauge boson Z emerging from U (1) R .
Constraint on Heavy Gauge Boson Z from LEP
There are two kinds of Z searches: indirect and direct. In case of indirect searches, one can look for deviations from the SM which might be associated with the existence of a new gauge boson Z . This generally involves precision EW measurements below and above the Z-pole. e + e − collision at LEP experiment [62] above the Z boson mass provides significant constraints on contact interactions involving e + e − and fermion pairs. One can integrate out the new physics and express its influence via higher-dimensional (generally dim-6) operators.
For the process e + e − → ff , contact interactions can be parameterized by an effective Lagrangian, L ef f , which is added to the SM Lagrangian and has the form: Due to the nature of U (1) R gauge symmetry, the above interaction favors only the righthanded chirality structure. Thus, the constraint on the scale of the contact interaction for the process e + e − → l + l − from LEP measurements [62] will indirectly impose bound on Z mass and the gauge coupling (g R ) that can be translated into: Other processes such as e + e − → cc and e + e − → bb impose somewhat weaker bounds than the ones quoted in Eq. 7.35. on the Z mass and U (1) R coupling constant g R in our model as the production cross-section solely depends on these two free parameters. Throughout our analysis, we consider that the mixing Z −Z angle is not very sensitive (s X = 0). In order to obtain the constraints on these parameter space, we use the dedicated search for new resonant high-mass phenomena in dielectron and di-muon final states using 36.1 fb −1 of proton-proton collision data, collected at √ s = 13 TeV by the ATLAS collaboration [63]. The searches for high mass phenomena in dijet final states [64] will also impose bound on the model parameter space, but it is somewhat weaker than the di-lepton searches due to large QCD background. For our analysis, we implement our models in FeynRules_v2.0 package [65] and simulate the events for the process pp → Z → e + e − (µ + µ − ) with MadGraph5_aMC@NLO_v3_0_1 code [66]. Then, using parton distribution function (PDF) NNPDF23_lo_as_0130 [67], the cross-section and cut efficiencies are estimated. Since no significant deviation from the SM prediction is observed in experimental searches [63] for high-mass phenomena in di-lepton final states, the upper limit on the cross-section is derived from the experimental analyses [63] using σ×
Heavy Gauge Boson Z at the LHC
where N rec is the number of reconstructed heavy Z candidate, σ is the resonant production cross-section of the heavy Z , BR is the branching ratio of Z decaying into di-lepton final states , A × is the acceptance times efficiency of the cuts for the analysis. In Fig. 12 In Fig. 13, we have shown all the current experimental bounds in M Z − g R plane. Red meshed zone is excluded from the current experimental di-lepton searches [63]. The cyan meshed zone is forbidden from the LEP constraint [62] and the blue meshed zone is excluded from the limit on SM Z boson mass correction: 1 3 M Z /g R > 12.082 TeV as aforementioned. We can see from Fig. 13 that the most stringent bound in M Z − g R plane is coming from direct Z searches at the LHC. After imposing all the current experimental bounds, we analyze the future discovery prospect of this heavy gauge boson Z within the allowed parameter space in M Z − g R plane looking at the prompt di-lepton resonance signature at the LHC. We find that a wider region of parameter space in M Z − g R plane can be tested at the future collider experiment. Black, green, purple and brown dashed lines represent the projected discovery reach at 5σ significance at 13 TeV LHC for 100 fb −1 , 300 fb −1 , 500 fb −1 and 1 ab −1 luminosities. On the top of that, the right-handed chirality structure 5 For related works see also [68,69].
of U (1) R can be investigated at the LHC by measuring Forward-Backward (FB) and top polarization asymmetries in Z → tt mode [70] and which can discriminate our U (1) R Z interaction from the other Z interactions in U (1) B−L model. The investigation of other exotic decay modes (N N , χχ, S + 2 S − 2 ) of heavy Z is beyond the scope of this article and shall be presented in a future work since these will lead to remarkable multi-lepton or displaced vertex signature [71][72][73][74][75][76][77] at the colliders.
Heavy Gauge Boson Z at the ILC
Due to the point-like structure of leptons and polarized initial and final state fermions, lepton colliders like ILC will provide much better precision of measurements. The purpose of the Z search at the ILC would be either to help identifying any Z discovered at the LHC or to extend the Z discovery reach (in an indirect fashion) following effective interaction.
Even if the mass of the heavy gauge boson Z is too heavy to directly probe at the LHC, we will show that by measuring the process e + e − → f + f − , the effective interaction dictated by Eq. 7.34 can be tested at the ILC. Furthermore, analysis with the polarized initial states at ILC can shed light on the chirality structure of the effective interaction and thus it can distinguish between the heavy gauge boson Z emerging from U (1) R extended model and the Z from other U (1) extended model such as U (1) B−L . The process e + e − → f + f − typically exhibits asymmetries in the distributions of the final-state particles isolated by the angularor polarization-dependence of the differential cross-section. These asymmetries can thus be utilized as a sensitive measurement of differences in interaction strength and to distinguish a small asymmetric signal at the lepton colliders. In the following, the asymmetries (Forward-Backward asymmetry, Left-Right asymmetry) related to this work will be described in great detail.
Forward-Backward Asymmetry
The differential cross-section in Eq. 7.44 is asymmetric in polar angle, leading to a difference of cross-sections for Z decays between the forward and backward hemispheres. Earlier, LEP experiment [62] used Forward-backward asymmetries to measure the difference in the interaction strength of the Z-boson between left-handed and right-handed fermions, which gives a precision measurement of the weak mixing angle. Here we will show that our framework leads to sizable and distinctive Forward-Backward (FB) asymmetry discriminating from other models and which can be tested at the ILC, since only the right-handed fermions carry non-zero charges under the U (1) R . For earlier analysis of FB asymmetry in the context of other models as well as model-independent analysis see for example Refs. [40,42,[78][79][80][81][82][83][84][85][86][87][88]. At the ILC, Z effects have been studied for the following processes: where σ i = ±1 are the helicities of initial (final)-state leptons and k i 's are the momenta.
Since the e + e − → µ + µ − process is the most sensitive one at the ILC, we will focus on this process only for the rest of our analysis. One can write down the corresponding helicity amplitudes as: where s = (k 1 + k 2 ) 2 = (k 3 + k 4 ) 2 , s Z = s − m 2 Z + im Z Γ Z , and cos θ indicates the scattering polar angle. e 2 = 4πα with α = QED coupling constant, c R = tan θ W and c L = − cot 2θ W and θ W is the weak mixing angle.
For a purely polarized initial state, the differential cross-section is expressed as: Then the differential cross-section for the partially polarized initial state with a degree of polarization P e − for the electron beam and P e + for the positron beam can be written as [40,78]: One can now define polarized cross-section σ L,R (for the realistic values at the ILC [89]) as: Using this one can study the initial state polarization-dependent forward-backward asymmetry as: (7.48) where L represents the integrated luminosity, indicates the efficiency of observing the events, and c max is a kinematical cut chosen to maximize the sensitivity. For our analysis we consider = 1, and c max = 0.95. Then we estimate the sensitivity to Z contribution by: where A SM +Z 2σ sensitivity for FB asymmetry by looking at e + e − → µ + µ − process at the ILC. We can also expect much higher sensitivity while combining different final fermionic states such as other leptonic modes (e + e − , τ + τ ) as well as hadronic modes jj. Moreover, the sensitivity to Z interactions can be enhanced by analyzing the scattering angular distribution in details, although it is beyond the scope of our paper.
Left-Right Asymmetry
The simplest example of the EW asymmetry for an experiment with a polarized electron beam is the left-right asymmetry A LR , which measures the asymmetry at the initial vertex.
Since there is no dependence on the final state fermion couplings, one can get an advantage by looking at LR asymmetry at lepton collider. Another advantage of this LR asymmetry measurement is that it is barely sensitive to the details of the detector. As long as at each value of cos θ, its detection efficiency of fermions is the same as that for anti-fermions, the efficiency effects should be canceled within the ratio because the Z decays into a back-toback fermion-antifermion pair and about the midplane perpendicular to the beam axis, the detector was designed to be symmetric. For earlier studies on LR asymmetry in different contexts, one can see for example Refs. [78][79][80][81][82][83][84][85][86][87][88]90]. LR asymmetry is defined as: where N L is the number of events in which initial-state particle is left-polarized, while N R is the corresponding number of right-polarized events.
Similarly, one can estimate the sensitivity to Z contribution in LR asymmetry by [79,82,90]: with a statistical error of the asymmetry δA LR , given [79,82,90] as (7.54) In Fig. 15, we analyze the strength of LR asymmetry ∆A LR for the e + e − → µ + µ − process as a function of VEV v χ (= M Z /3g R ). In order to distinguish Z interaction, we have analysed both the cases: Z emerging from both U (1) R and U (1) B−L cases. We have considered the center of mass energy for the ILC at √ s = 500 GeV and the integrated ∆N ef f and to compute it we follow the procedure discussed in Ref. [91]. After ν R states decouple, specifically for T < T ν L dec < T ν R dec (T ν L/R dec represents the decoupling temperature of the ν L/R neutrinos) their total contribution is given by: 55) here N ν R is the number of massless or light right-handed neutrinos, g(T ) is the relativistic degrees of freedom at temperature T, with the well-known quantities g(T ν L dec ) = 43/4 and T ν L dec = 2.3 MeV [92]. For the following computation, we take the temperature-dependent degrees of freedom from the data listed in Table S2 of Ref. [93], and by utilizing the cubic spline interpolation method, we present g as a function of T in Fig. 17 (left plot).
The current cosmological measurement of this quantity is N ef f = 2.99 +0. 34 −0.33 [94], which is completely consistent with the SM prediction N SM ef f = 3.045 [95]. These data limit the contribution of the right-handed neutrinos to be ∆N ef f < 0.285. However, future measurements [96] can put even tighter constraints on this deviation ∆N ef f < 0.06. The righthanded neutrinos decouple from the thermal bath when the interaction rate drops below the expansion rate of the Universe: Here the Hubble expansion parameter is defined as: where M P l is the Planck mass and g ν R = 2 is the spin degrees of freedom of the right-handed neutrinos. And the interaction rate that keeps the right-handed neutrinos at the thermal bath is given by: Here, the Fermi-Dirac distribution is f ν R (p) = 1/(e p/T + 1), the number density is n ν R = (3/(2π 2 )) ζ(3)T 3 , s = 2pq(1 − cos θ) and v = 1 − cos θ. Furthermore, the annihilation cross-section σ(ν R ν R → f i f i ) is as follows: Where N f C and Q f represent the color degrees of freedom and the charge under the U (1) R for a fermion f respectively.
Conclusions
We believe that the scale of new physics is not far from the EW scale and a simple extension of the SM should be able to address a few of the unsolved problems of the SM. Adopting this belief, in this work, we have explored the possibility of one of the most minimal gauge extensions of the SM which is U (1) R that is responsible for generating Dirac neutrino mass and may also stabilize the DM particle. Cancellations of the gauge anomalies are guaranteed by the presence of the right-handed neutrinos that pair up with the left-handed partners to form Dirac neutrinos. Furthermore, this U (1) R symmetry is sufficient to forbid all the unwanted terms for constructing naturally light Dirac neutrino mass models without imposing any additional symmetries by hand. The chiral non-universal structure of our framework induces asymmetries, such as forward-backward asymmetry and especially left-right asymmetry that are very distinct compared to any other U (1) models. By performing detailed phenomenological studies of the associated gauge boson, we have derived the constraints on the U (1) R model parameter space and analyzed the prospect of its testability at the collider such as at LHC and ILC. We have shown that a heavy Z (emerging from U (1) R ), even if its mass is substantially higher than the center of mass energy available at the ILC, would manifest itself at tree-level by its propagator effects producing sizable contributions to the LR asymmetry or FB asymmetry. This can be taken as an initial guide to explore the U (1) R model at colliders. These models can lead to large lepton flavor violating observables which we have studied and they could give a complementary test for these models. In this work, we have also analyzed the possibility of having a viable Dirac fermionic DM candidate stabilized by the residual discrete symmetry originating from U (1) R , which connects to SM via Z portal coupling in a framework that also cater for neutrino mass generation. The DM phenomenology is shown to be crucially dictated by the interaction of N with Z . Furthermore, we have inspected the constraints coming from the cosmological measurements and compared this result with the different collider bounds. For a comparison, here we provide a benchmark point by fixing the gauge coupling g R = 0.056. With this, the current upper bound on the Z mass is M Z > 4.25 TeV from 13 TeV LHC data with 36.1f b −1 luminosity, and the future projection reach limit translates into M Z > 4.67 TeV with 100f b −1 luminosity. Whereas for the same value of the gauge coupling, the ILC has the discovery reach of 4.63 TeV at the 2σ confidence level looking at the left-right asymmetry. The corresponding bounds from LEP, Z−boson mass correction and from cosmology are M Z > 0.2, 2, 1.49 TeV respectively, which are somewhat weaker compared to LHC and ILC bounds. To summarize, the presented Dirac neutrino mass models are well motivated and have rich phenomenology. | 9,718.8 | 2019-04-16T00:00:00.000 | [
"Physics"
] |
Human Enamel Nanohardness , Elastic Modulus and Surface Integrity after Beverage Contact
This study evaluated nanohardness, elastic modulus and surface roughness of human enamel after contact with citric beverages. Human enamel samples were assigned to 3 groups according to the type of beverage used: carbonated drink, orange juice and tap water (control). Surface roughness was assessed using a profilometer, and nanohardness and elastic modulus were recorded using a nanoindenter. The pH of the beverages was measured before and after citric contact during 5 weeks. Means(SD) were as follows: Carbonated drink: elastic modulus decreased from 111.6(14.5) to 62.3(10.3) GPa, nanohardness decreased from 4.62(0.67) to 1.28(0.46) GPa, roughness increased from 5.30(2.39) to 6.86(2.56) μm, and the pH changed from 2.69(0.35) to 2.29(0.24); Orange juice: elastic modulus changed from 115.15(12.94) to 92.11 GPa (13.83), nanohardness from 5.54(1.48) to 3.18 GPa (0.64), roughness from 5.26(2.27) to 6.73(2.25) μm, and pH from 3.46(0.20) to 3.03(0.14); Tap water (control): elastic modulus changed from 117.87(22.3) to 107.91(20.05) GPa, nanohardness from 4.35(1.66) to 4.28(0.93) GPa, roughness from 5.76(3.11) to 6.11(2.65) μm, and pH from 7.97(0.28) to 8.11(0.21). In conclusion, soft drink exposure caused a significant decrease in nanohardness and elastic modulus. The pH of the soft drink was more acidic from 5°C to 37°C. Orange juice showed a similar trend but, surprisingly, it had less effect on hardness, elastic modulus and roughness of enamel than the carbonated drink.
INTRODUCTION
Citrus fruits are found in the daily diet of many people, some of the more common being orange, pineapple, lemon, tangerine and grapefruit.A number of acidic fruit juices that are part of everyday diet have been implicated as important causes of erosion of tooth structure (1,2).The pH level of soft drinks and its buffering effect have been thoroughly investigated.Edwards et al. (3) found that fruit-based carbonated drinks have more erosive potential than other carbonated drinks, with flavored waters having the same erosive potential as fruit-based carbonated drinks.When calcium lactate was added to Coca-Cola ® there was significantly reduced tooth erosion in rats compared to unaltered Coca-Cola ® after 5 weeks of contact (4).
Most of the experimental work to determine the effect of citrus fruits on enamel has been focused on beverages.Efforts to protect tooth deterioration from erosive substances have included decreasing consumption of acidic foods and carbonated drinks (5,6).Citrus foods like fruits, candies and carbonated beverages are common in contemporary diets.Research has demonstrated that there is a strong correlation between citrus candy, citrus soft beverages and enamel loss (1,5).It has been found that the microhardness of composite resin remained stable up to 1 month of beverage exposure, but decreased significantly at the second month (7).Temperature and exposure time affect mechanical properties of human enamel (8)(9)(10).
The nanohardness test is an ultra-micro indentation system for testing hard materials.The use of a
Braz Dent J (2008) 19(1): 68-72
nanoindenter with a Berkovich diamond tip with a nominal radius of 5,000 nm has been widely used.A nanoindenter measures the hardness and elastic modulus of a material on an extremely small surface scale of 50 nm.The nanoindenter technique is also essentially non-destructive because the indentations that are made are too small to be seen except with a high power optical or atomic force microscope.The nanoindenter system is equipped with an atomic force microscope that is useful in imaging the indentations or the surface of a material in general (11).
The purpose of the present study was to evaluate the nanohardness, elastic modulus and surface integrity of human enamel after citrus drink contact.
MATERIAL AND METHODS
Thirty enamel slabs (5 mm x 4 mm x 1.5 mm) were obtained from buccal intact non-grounded surfaces of freshly extracted unerupted human third molars using a diamond disc (Isomet, Buehler Ltd, Evanston, IL, USA) with copious water cooling.The samples were mounted in clear polymethylmethacrylate cylinders, initially immersed in tap water at 37°C oven for 48 h, and were randomly assigned to 3 groups (n=10) according to the type of beverage in which they were immersed: a carbonated drink (Sprite ® ; The Coca-Cola Company Atlanta, GA, USA), an orange juice-based drink (Minute Maid ® ; The Coca-Cola Company) and tap water (control).
The samples immersed in carbonated drink and orange juice were removed daily from the tap water and immersed in the fresh beverages at 5°C, and kept at room temperature for 30 min every day during 5 consecutive weeks.The samples were maintained in tap water at 37°C when were not in contact with fresh beverages (carbonated drink and orange juice).The surface integrity (roughness) test was performed with profilometry and quantified in micrometers before the samples were immersed in the specified beverages and after 5 weeks of contact.Average roughness (Ra) is the arithmetical mean of surface integrity.Profilometry showed maximum and minimum lines drawn at the highest peak and lowest valley.The mean line was determined by equating the areas defined by the profile curve above and below the minimum and maximum lines.The distance from the mean line to the contours of this profile edge (z) as a function of incremental length (x) was used to directly calculate a center-line average roughness value (Ra) using the following equation: Ra =1/L∫ L o ⏐z (x)⏐dx, along the measurement length (L).The nanoindentation hardness and the elastic modulus tests used a nanoindenter XP ® (MTS Systems Corporation, Oak Ridge, TN, USA) with a Berkovich diamond tip with a nominal radius of 5,000 nm (5 µm), and a force of 1.5X10 -3 N producing 10 nanoindentations per sample.An atomic force microscope (AFM) (MTS Systems Corporation) was used to obtain visual evidence of the indentations on the enamel surfaces.Nanohardness and elastic modulus measurements were performed before and after immersion of the samples, and the results were recorded in gigapascals (GPa).The equation used to calculate hardness was: H = P/Ap, where P is the work done by an indentation load and Ap the projected contact area of the indentation.The elastic modulus or Young's modulus is the constant between stress and strain.The modulus of elasticity was interpreted as the slope of the stress-strain graph using the equation: E=σ/ε, where, σ is the elastic stress and ε is the elastic strain.
The acidity of the beverages and tap water (control) was measured with a pH meter (Check-Mite pH-30 ® ; Corning Incorporated, Corning, NY, USA) daily before and after immersion of the enamel samples.
Before and after beverage exposure data [pH, surface integrity (roughness), nanoindentation hardness and elastic modulus)] were recorded, and means and standard deviation (SD) were calculated.The data of each physical property were analyzed for differences using an one-way analysis of variance followed by the Post-Hoc Tukey-Kramer test, using a confidence level of <0.0001, <0.49, <0.0001 and <0.0001 to determine the mean differences for pH, surface integrity, nanohardness and elastic modulus, respectively.
RESULTS
The mean pH, surface integrity, nanohardness, and elastic modulus for the 3 groups are shown in Tables 1 to 4. The pH results revealed that when the temperature of the beverages was increased from 5ºC to 37ºC after 30 min of contact, the soft drink became significantly more acidic (p<0.0001), while the orange juice did not have a statistically significant change (p>0.0001).The profilometric test did not show statistically significant (p>0.49)differences for the surface integrity for any of the groups.The nanoindentation hardness test data showed statistically significant lower values (p<0.0001) for the two experimental groups.There was a statistically significant reduction in the elastic modulus (p<0.0016) for the carbonate drink group compared to the orange juice and control groups.
The carbonated beverage had a greater effect on the nanohardness and elastic modulus of the human enamel samples compared to the orange juice.
DISCUSSION
Chemical composition of an acidic drink is clearly an important factor in the mechanical properties of enamel.Drinks that contain citric acid have been shown to be more erosive than those containing phosphoric acid (12).Factors such as pH, liquid environment and temperature affect physical properties, such as elastic modulus, hardness and surface roughness of human enamel.This study evaluated those physical properties after contact with citrus beverages.Although in vitro tests may not reflect intraoral conditions, findings under controlled conditions are helpful and can be applicable to clinical performance.
This study immersed enamel samples in the tested beverages during 30 min following a similar protocol to the ones used by West et 10 to 30 min.The results showed a significant decrease in the pH of the soft drinks when changing the temperature from 5°C to 37°C (8-10).These findings have a significant clinical relevance due to the fact that commonly soft drinks are served cold and temperature suddenly increases when the beverage is consumed, and reaches oral environment.Nanoindentation is a very selective and nondestructive test for obtaining hardness and elastic modulus data from very hard materials.The use of an AFM verifies that the indentation was made on surfaces without gaps, or Retzius striae.Despite the broad use of the nanoindentation technique in engineering and physics, its use in dentistry has not been popular until recently.Nevertheless, authors like Barbour et al. (10) used nanoindentation hardness and AFM to correlate soft drink consumption with softening of human enamel.
Many investigators have confirmed the negative effect of carbonated drinks on human enamel (1,6,7,(13)(14)(15)(16)(17)(18).This is in agreement with the results of this study, which revealed that the carbonated drinks decreased the physical properties (hardness and elastic modulus) of human enamel.The explanation for this decrease may be erosion of enamel; erosion is defined as a loss of tooth substance by chemical processes not involving bacteria (1,6).This study showed that the soft beverage has greater effect, decreasing mechanical properties of human enamel than orange juice which is in agreement with the results of Larsen and Nyvad (4).
Acidic foods and drinks are considered important factors in erosion of human enamel (1,2).However, Meurman and Frank (12) found that unpolished enamel was less liable to erosion than polished enamel under low pH due to the structure of enamel that greatly modify the progression of dental erosion on an in vitro test.This study showed that changes in temperature caused the pH of carbonated drink and orange juice to become more acidic.Maupome et al. (6) stated that the acidicity effect of carbonated beverages is mainly due to phosphoric acid.
The results of the present study did not reveal statistically significant changes of the surface integrity, a different finding from the results of dissolution in terms of depth of erosions reported by Larsen and Nyvad (16).However, further research should be conducted with new measurement systems, such as laser systems.There is also a need for more visual evidence with AFM analyses and for measurement of hardness and elastic modulus using nanoindentation techniques.
The findings of this in vitro test lead to the conclusion that: 1. Human enamel nanohardness significantly decreased to about 1/3 and 1/2 of its initial value after contact with the carbonated soft drink and the orange juice, respectively; 2. Elastic modulus of human enamel decreased more in the carbonated drink (Sprite ® ) than in the orange juice; 3. Exposures to soft drink or orange juice for 5 weeks did not significantly affect the surface integrity of human enamel when measured by profilometry; 4. The pH of the soft drink and orange juice dropped from low to very low (acidic values) with a temperature change from 5°C to 37°C.
Table 1 .
al. andBarbour et 10) (8,10), who used periods of immersion ranging from pH values of the beverages before and after immersion of human enamel samples.
Table 2 .
Surface roughness of human enamel before and after beverage contact.
Table 3 .
Nanoindentation hardness of human enamel before and after contact with the beverages.
Table 4 .
Elastic modulus of human enamel before and after contact with the beverages. | 2,710.4 | 2008-01-01T00:00:00.000 | [
"Medicine",
"Materials Science"
] |
Integrating Aerial LiDAR and Very-High-Resolution Images for Urban Functional Zone Mapping
: This study presents a new approach for Urban Functional Zone (UFZ) mapping by integrating two-dimensional (2D) Urban Structure Parameters (USPs), three-dimensional (3D) USPs, and the spatial patterns of land covers, which can be divided into two steps. Firstly, we extracted various features, i.e., spectral, textural, geometrical features, and 3D USPs from very-high-resolution (VHR) images and light detection and ranging (LiDAR) point clouds. In addition, the multi-classifiers (MLCs), i.e., Random Forest, K-Nearest Neighbor, and Linear Discriminant Analysis classifiers were used to perform the land cover mapping by using the optimized features. Secondly, based on the land cover classification results, we extracted 2D and 3D USPs for different land covers and used MLCs to classify UFZs. Results for the northern part of Brooklyn, New York, USA, show that the approach yielded an excellent accuracy of UFZ mapping with an overall accuracy of 91.9%. Moreover, we have demonstrated that 3D USPs could considerably improve the classification accuracies of UFZs and land covers by 6.4% and 3.0%, respectively.
Introduction
Urban Functional Zones (UFZs, acronyms used throughout the manuscript are listed in Supplementary Material Table S1) refer to different functional divisions of urban lands, e.g., commercial, residential, industrial, and park zones [1]. Different UFZs often feature different architectural environments and are composed of various land covers. However, previous studies pay much attention to land cover mapping instead of large-scale UFZ classification [1]. As the basic spatial unit in cities, UFZs are vital for the urban planner and managers to conduct urban-related applications, e.g., the investigation of land surface temperatures, landscape patterns, urban planning, and urban ecological modeling [2][3][4][5]. Therefore, the detection of UFZs is a basis for urban management and provides a better understanding of urban spatial structures [6,7].
Very-High-Resolution (VHR) images represent urban surfaces with good spatial details, capturing tiny differences in spectral and textural records, thus can be utilized for UFZ mapping [8][9][10]. Many authors used VHR data to perform the UFZ classification [11][12][13]. Zhang et al. [11] proposed a Hierarchical Semantic Cognition (HSC) method to establish four semantic layers, i.e., visual features, object categories, spatial patterns of objects, and zone functions. They used their hierarchical relations to identify UFZs and found that the HSC method yields a good accuracy for UFZ mapping (the overall accuracy was 90.8%). Further, Zhang et al. [1] performed a top-down feedback method-Inverse Hierarchical Semantic Cognition (IHSC) to optimize the initial HSC results, and they found that the IHSC increased the Overall Accuracy (OA) from 84.0 to 90.5%. Recently, authors utilized the IHSC increased the Overall Accuracy (OA) from 84.0 to 90.5%. Recently, authors utilized the Point of Interest (POI) data for UFZ mapping. For instance, Hu et al. [12] generated parcel information using the road networks and integrated Landsat 8 Operational Land Imager images and POI data to classify parcels into eight functional zones (level I, e.g., residential, commercial, industrial, and institutional areas) and 16 land covers (Level II). They found that the OA value of Level I classification was 81.04%. Besides, Zhou et al. [13] proposed a Super Object-Convolutional Neural Network (SO-CNN) method to conduct UFZ classification. They used the POI data to identify four UFZs, i.e., commercial office, urban green, industrial warehouse, and residential zones in Hangzhou city, China and found that the classification results are refined with an OA value of 91.1%. However, previous studies did not explore the impacts of three-dimensional (3D) urban structure parameters (USPs), e.g., building height (BH) and sky view factor (SVF) on UFZ detection.
It is noteworthy that 3D USPs play distinctive roles in describing urban layouts and constructions [14]. For example, an investigation from the northern part of Brooklyn, New York City, USA, shows that industrial zones usually locate on an open ground surface, resulting in higher values of SVF than residential and commercial zones (Figure 1a). Generally, industrial zones feature low-rise and large buildings, thus often have low BHs (Figure 1b). In addition, different Street Aspect Ratios (SAR) and Floor Area Ratios (FAR) were observed among different functional zones (Figure 1c,d). Hence, it is of importance to consider 3D USPs for UFZ mapping. Recently, Light Detection And Ranging (LiDAR) technology rise as it can provide a fast and straightforward approach to acquiring the height information of underlying surfaces [15]. Thus, LiDAR technology can be regarded as a feasible approach to extract 3D USPs [14]. Note that it is challenging to extract UFZs directly from the VHR images because spectral, textural, and geometrical features can only be effective in segment objects instead of identifying UFZs [11]. As one of the essential elements of UFZs, the components and configurations of land covers exert significant influences on measuring and analyzing UFZs [16]. Thus, it is important to perform the land cover classification for the subsequent UFZ mapping. In this study, a new approach that integrates multiple machine learning algorithms and 3D USPs is introduced for UFZ mapping. The objectives of the study are:
•
To integrate multi-machine learning algorithms and various features, primarily 3D USPs, for enhancing land cover mapping; • To perform UFZ mapping by coupling 3D USPs and multi-classifiers (MLCs); • To evaluate the influence of 3D USPs on the classifications of both land covers and UFZs.
Study Area
Our study area lies in the northern part of Brooklyn, New York City, USA ( Figure 2), with an area of 6.12 km 2 . The eastern and northern parts of the area are along the East River. The area has 8779 buildings, 8146 parcels, and 493 blocks. In addition, the area includes four typical UFZs, i.e., commercial, residential, industrial, and park zones [10]. In this study, a new approach that integrates multiple machine learning algorithms and 3D USPs is introduced for UFZ mapping. The objectives of the study are: • To integrate multi-machine learning algorithms and various features, primarily 3D USPs, for enhancing land cover mapping; • To perform UFZ mapping by coupling 3D USPs and multi-classifiers (MLCs); • To evaluate the influence of 3D USPs on the classifications of both land covers and UFZs.
Study Area
Our study area lies in the northern part of Brooklyn, New York City, USA ( Figure 2) with an area of 6.12 km 2 . The eastern and northern parts of the area are along the East River. The area has 8779 buildings, 8146 parcels, and 493 blocks. In addition, the area includes four typical UFZs, i.e., commercial, residential, industrial, and park zones [10].
Data
The primary data used in the study includes VHR images, LiDAR point clouds, road networks, and land-lot information.
Data
The primary data used in the study includes VHR images, LiDAR point clouds, road networks, and land-lot information.
• VHR images The high-resolution orthophotos of the study area were acquired from the New York City Office of Information Technology Services [17] (Figure 2b). The images consist of four bands (i.e., blue, green, red, and near-infrared bands) with a 0.3 m (1.0 ft) spatial resolution, which provides rich spectral information for the classification of land covers and UFZs.
•
LiDAR point clouds The point cloud data was acquired in May 2017 and collected using a Cessna 402 C or Cessna Caravan 208B aircraft equipped with Leica ALS80 and Riegl VQ-880-G laser systems. The data is released by the New York City Department of Information Technology and Telecommunications (NYCDITT) [18]. Furthermore, in order to generate an accurate Digital Surface Model (DSM), we eliminated the noise points (i.e., outliers and isolated points) using the "StatisticalOutlierRemove" filter operation of Point Cloud Library 1.6, and a voxel grid filter was adopted to reduce the redundant points. After filtering, the density of point clouds is about 8.0 points/m 2 [14].
Road networks and land-lot information
The road networks were obtained from Open Street Map (OSM) in 2017. We used the networks to delineate the blocks' boundaries, and the block was regarded as a basic unit for UFZ mapping [19,20]. In addition, the land-lot data, released by NYCDITT, provides the basic information of land functions. Thus, it could be used to label the blocks' functional attributes and provide the ground reference. Finally, 493 zones were generated from the road networks and land-lot data. Figure 3 shows the workflow of the UFZ classification, including two key steps: (1) Land cover mapping: a method ensemble multi-machine learning algorithms and 3D USPs was proposed for enhancing land cover mapping; in details, we extracted the spectral, textural, geometrical features, and 3D USPs of objects that were determined by the multi-resolution image segmentation; further, the feature optimization was employed by using the method of mean decrease impurity; at last, we selected the best classifier from three machine learning methods, i.e., Random Forest (RF), K-Nearest Neighbor (KNN), and Linear Discriminant Analysis (LDA), to label those objects. (2) UFZ mapping: a new method that integrates urban spatial information of both two-dimensional (2D) USPs, 3D USPs and MLCs were utilized for UFZ mapping; in details, we extracted the 2D USPs from the land cover mapping results and 3D USPs from LiDAR point clouds; then, Nearest Neighbor Index (NNI) was used to identify three spatial patterns of land covers, i.e., random distribution, aggregation, and uniform distribution; finally, we chose the classifier with the best performance to conduct UFZ classification.
Multi-Feature Extraction
To avoid the "salt and pepper" phenomenon of land cover classification, firstly, multi-resolution segmentation was used to segment the VHR images into multi-scale objects, which was performed by using eCognition software. In particular, the Estimation of Scale Parameter (ESP) tool [21][22][23][24][25][26] was used to generate the appropriate scale of land cover object. Secondly, we extracted four categories of features, i.e., spectral, textural, geometrical features and 3D USP, for the subsequent land cover labeling ( Table 1). As shown, the spectral features included spectral information (i.e., red, blue, green, and nearinfrared bands), Normalized Difference Vegetation Index (NDVI), Ratio Vegetation Index (RVI), Difference Vegetation Index (DVI), Normalized Difference Water Index (NDWI), Meani, Brightness, Ratio, Mean diff. to neighbor (Mean. diff.), Standard Deviation (Std. Dev). The textural features were revealed by different indices from the Gray-Level Cooccurrence Matrix (GLCM), i.e., angular second moment, variance, contrast, entropy, energy, correlation, inverse differential moment, dissimilarity, and homogeneity. The geometrical features were used to reveal geometrical characteristics of objects, i.e., area, border length, length/width, compactness, asymmetry, border index, density, elliptic fit, main direction, shape index. The spectral, textural, and geometrical features are widely used in object-based image research [27,28]. In particular, we selected three features for 3D USPs, including the Digital Surface Model (DSM), Sky View Factor (SVF), and flatness, all of them were extracted from LiDAR point clouds.
Multi-Feature Extraction
To avoid the "salt and pepper" phenomenon of land cover classification, firstly, multiresolution segmentation was used to segment the VHR images into multi-scale objects, which was performed by using eCognition software. In particular, the Estimation of Scale Parameter (ESP) tool [21][22][23][24][25][26] was used to generate the appropriate scale of land cover object. Secondly, we extracted four categories of features, i.e., spectral, textural, geometrical features and 3D USP, for the subsequent land cover labeling ( Table 1). As shown, the spectral features included spectral information (i.e., red, blue, green, and nearinfrared bands), Normalized Difference Vegetation Index (NDVI), Ratio Vegetation Index (RVI), Difference Vegetation Index (DVI), Normalized Difference Water Index (NDWI), Mean i , Brightness, Ratio, Mean diff. to neighbor (Mean. diff.), Standard Deviation (Std. Dev). The textural features were revealed by different indices from the Gray-Level Cooccurrence Matrix (GLCM), i.e., angular second moment, variance, contrast, entropy, energy, correlation, inverse differential moment, dissimilarity, and homogeneity. The geometrical features were used to reveal geometrical characteristics of objects, i.e., area, border length, length/width, compactness, asymmetry, border index, density, elliptic fit, main direction, shape index. The spectral, textural, and geometrical features are widely used in object-based image research [27,28]. In particular, we selected three features for 3D USPs, including the Digital Surface Model (DSM), Sky View Factor (SVF), and flatness, all of them were extracted from LiDAR point clouds. The ratio of the short axis to the long axis of an approximate ellipse of image objects
Border Index
The ratio of the perimeter of image object to the perimeter of the object's MBR. Density The ratio of area to radius of image objects The fitting degree of eclipse fit Main Direction Eigenvectors of covariance matrix of image objects Shape Index The ratio of perimeter to four times side length 3D USP Digital Surface Model (DSM, Figure 2c) DSM was produced by using an interpolation algorithm (i.e., binning approach) with all points. [39] Sky View Factor (SVF, Figure 2d) Sky view factor refers to the visible degree of sky in the ground level and its values vary from 0 to 1. 0 refers to the sky is not visible; in contrast, 1 refers to the sky is completely visible. [40] Flatness (the details can be found in Supplementary Material Figure S1) Flatness derived from DSM and refers to the flatness of the non-ground points. The points were generated by using the "lasground" filter operation in the LAStools. [14,41] Based on the spectral response of features on VHR images and investigation, six land covers were identified, including buildings, trees, grasses, soil lands, impervious grounds, and water bodies. Table 2 provides the details of the samples used for different land covers, and the training samples are randomly selected by using the "model_selection" tool in the sklearn package. As shown, 8/10 of the training samples were randomly selected and used for classification, and the others were chosen for the accuracy assessment.
Feature Optimization
Feature optimization provides a better understanding of the feature importance and is crucial to improve the classification accuracy. We used the Gini Index (GI) (i.e., Mean Decrease Impurity) to measure the importance of each variable. GI was calculated from the structure of the RF classifier, representing the average degree of error reduction by each feature, and can be defined as [42,43]: where GI (P) is the GI value, k represents the kth classes, and P k represents the probability that the sample belongs to the k class. Generally, a higher GI value means the corresponding variable exerts more influence on the classification. Details of feature optimization steps can be found in [44].
(1) RF classifier consists of multiple decision trees and can utilize different trees to train samples and predict results. In particular, each tree would yield its predicted result. Then, by counting the vote results in different decision trees, RF integrates their vote results to predict the final results. Therefore, the RF model can significantly improve the classification results compared with a single decision tree. In addition, RF has a good performance for the outlier as well as noise and can effectively avoid overfitting [3,49]. (2) KNN classifier measures the weight of its neighbors when performs a new instance. The classifier labeled objects with different categories according to the weight and is more suitable than other classifiers when the class fields overlaps in the sample set [50]. (3) LDA classifier projects the training sample on a straight line to make project objects of the same class as close as possible; in contrast, heterogeneous sample projection points away from as far as possible. The classifier assumes that all data sets are followed by a normal distribution and can reduce the dimensions of the original data. LDA classifier calculates the probability density of each class sample, and the classification results depend on the maximum probability of each category [51,52].
Classification Post-Processing and Accuracy Evaluation
Classification post-processing is a pivotal step to optimize the classification result [53,54]. We used the rules of classification post-processing for the objects (Table 3). In addition, we used a confusion matrix to evaluate the accuracy of land cover classification [55]. Three indices, including Overall Accuracy (OA), Producer's Accuracy (PA), and User's Accuracy (UA), were utilized to quantify the accuracy of classification results.
Feature Extraction
We utilized three feature categories, including 2D USP, 3D USP, and spatial pattern of land covers, to assist the following UFZ classification ( Table 4). The 2D USPs described the landscape compositions in different UFZs. They were revealed by building coverage (BC), tree coverage (TC), grass coverage (GC), soil coverage (SC), impervious surface coverage at ground level (ISC_G), and water coverage (WC). We extracted 3D USPs by integrating results of the land cover classification and LiDAR point clouds. For example, we obtained building labels from the results of land cover classification, and then calculated the average building height by using the height information from the LiDAR data [56]. The 3D USPs including sky view factor (SVF), building height (BH), street aspect ratio (SAR), and floor area ratio (FAR).
Previous studies have demonstrated that the NNI can measure the spatial patterns of land covers, and thus help the UFZ mapping [57][58][59]. To avoid such biases as the same landscape compositions or 3D USPs in different UFZs, we introduced the Nearest Neighbor Index (NNI) to improve the UFZ classification using different spatial patterns of land covers [60]. We considered three typical spatial patterns of land covers, i.e., random distribution, aggregation, and uniform distribution. The NNI can be defined as: where d min is the distance between a specific land cover (i.e., a building) and its nearest same object, and d min is the average distance of d min in a block. E(d min ) is the expectation of d min in complete space randomness mode, which is calculated based on the area of the block (A) and the number of buildings (n). Modes of random distribution, aggregation, and uniform distribution were identified when NN I value = 1, NN I value < 1, and NN I value > 1.
Experiment Design
We determined the feature optimization using the GI method and selected the best combination of features for UFZ mapping (details can be found in Section 3.2.2). To further analyze the influences of 3D USP on UFZ mapping, we designed seven experiments with different variable combinations based on the results of feature optimization (Table 5). In this way, we tried to recognize the most significant feature category for UFZ mapping. As shown, Exps. a, b, and c featured 2D USP, 3D USP, and spatial pattern feature, respectively, and were meant to examine the ability of a single category in the UFZ mapping. Exp. d consisted of fusion categories involving 2D USP, 3D USP and was designed to identify 2D and 3D USP combined effect on the UFZ mapping. Exp. e selected 2D USP and spatial pattern feature as input variables. In addition, Exp. f consisted of mixed categories involving 3D USP and spatial pattern features. Exps. f and g differed in that Exp. f did not contain 2D USP, while Exp. g included all the categories. In addition, 307 zones were selected as training samples and used for classification, and the others were chosen for the accuracy assessment. Figure 4 shows the importance ranking of 44 variables for land cover classification. As shown, firstly, DSM reached the highest variable importance value (GI value = 0.057) in the classification, and SVF also yielded a high GI value (0.042), indicating that 3D USPs exert considerable influences on the land cover classification. In particular, the GI value of NDVI was 0.050, suggesting that vegetation coverage is vital for land cover mapping. The spectral features showed better performance in classification (all their GI values were higher than 0.020); in contrast, the GI values of textural features were relatively low (most of their GI values was lower than 0.02). Limited influence of geometrical features, i.e., asymmetry, area, and compactness, on the classifications was observed. In summary, the variable importance ranked from high to low was 3D USPs > spectral features > geometrical features > textural features. Figure 4 shows the importance ranking of 44 variables for land cover classification. As shown, firstly, DSM reached the highest variable importance value (GI value = 0.057) in the classification, and SVF also yielded a high GI value (0.042), indicating that 3D USPs exert considerable influences on the land cover classification. In particular, the GI value of NDVI was 0.050, suggesting that vegetation coverage is vital for land cover mapping. The spectral features showed better performance in classification (all their GI values were higher than 0.020); in contrast, the GI values of textural features were relatively low (most of their GI values was lower than 0.02). Limited influence of geometrical features, i.e., asymmetry, area, and compactness, on the classifications was observed. In summary, the variable importance ranked from high to low was 3D USPs > spectral features > geometrical features > textural features. We further tested the varying OA values associated with different input variables ( Figure 5). An increasing trend and followed by stable OA values were observed with the rising number of input variables. The highest OA value (87.4%) was observed when the number of the input variables was 24. Thus, we selected the 24 variables to perform the subsequent land cover mapping (details of the 24 optimal variables can be found in Supplementary Material Table S2). We further tested the varying OA values associated with different input variables ( Figure 5). An increasing trend and followed by stable OA values were observed with the rising number of input variables. The highest OA value (87.4%) was observed when the number of the input variables was 24. Thus, we selected the 24 variables to perform the subsequent land cover mapping (details of the 24 optimal variables can be found in Supplementary Material Table S2). Remote Sens. 2021, 13, x FOR PEER REVIEW 11 of 21 Table 6 shows land cover classification accuracy using three classifiers, i.e., RF, KNN, and LDA. As shown, the RF classifier had the highest accuracy with an OA value of 87.4% (details of its confusion matrix see Supplementary Material Table S3). In contrast, the lowest accuracy of the classification was observed by using the LDA classifier (the OA value was 74.0%). LDA was failed to distinguish trees from grasses and buildings from groundlevel impervious surfaces (details of its confusion matrix see Supplementary Material Table S4). Regarding the KNN classifier, its OA value was 77.4%, and it wrongly classified the tree and grass (details of its confusion matrix see Supplementary Material Table S5). 74.0 Figure 6 shows the details of land cover classification using different MLCs. As shown, the RF classifier can better capture small trees (location g in Figure 6b). Moreover, the RF classifier can effectively identify building boundaries (location h in Figure 6b). However, the KNN classifier could not depict the building boundaries clearly, and wrongly classified the shape of trees (location g in Figure 6c). The LDA classifier failed to capture small trees; meanwhile, some building boundaries were poorly detected using the LDA classifier (location h Figure 6d). Table 6 shows land cover classification accuracy using three classifiers, i.e., RF, KNN, and LDA. As shown, the RF classifier had the highest accuracy with an OA value of 87.4% (details of its confusion matrix see Supplementary Material Table S3). In contrast, the lowest accuracy of the classification was observed by using the LDA classifier (the OA value was 74.0%). LDA was failed to distinguish trees from grasses and buildings from ground-level impervious surfaces (details of its confusion matrix see Supplementary Material Table S4). Regarding the KNN classifier, its OA value was 77.4%, and it wrongly classified the tree and grass (details of its confusion matrix see Supplementary Material Table S5). Figure 6 shows the details of land cover classification using different MLCs. As shown, the RF classifier can better capture small trees (location g in Figure 6b). Moreover, the RF classifier can effectively identify building boundaries (location h in Figure 6b). However, the KNN classifier could not depict the building boundaries clearly, and wrongly classified the shape of trees (location g in Figure 6c). The LDA classifier failed to capture small trees; meanwhile, some building boundaries were poorly detected using the LDA classifier (location h Figure 6d). Remote Sens. 2021, 13, x FOR PEER REVIEW 12 of 21 Table 7 shows the comparison of land-cover classification by using 2D USPs and 3D USPs against those using only 2D USPs (details regarding the confusion matrixes of each classifier can be found in Supplementary Material Tables S6-S8). As shown, the accuracies of all classifiers were increased after adding 3D USPs. The OA values of RF, KNN, and LDA classifiers increased by 3.1, 2.3, and 3.5%, respectively. Further, it is found that, after 3D USPs involved in classification, PA values of all land covers increased, but different increasing degrees of varying land covers were observed. Tree yielded the highest increased accuracy with an average increased PA value of 6.7%. The possible reason is that the height information can be used to distinguish trees from grasses. In addition, 3D USPs can increase the PA value of buildings (increased by 3.3%). The possible reason is that the SVF can distinguish building roofs (featuring a high SVF) from impervious grounds (feature a low SVF). In summary, 3D USPs significantly increased the accuracy of land cover mapping. These findings were consistent with the previous studies by [14,64]. Figure 7 compares spatial details of land-cover classification by using 2D-3D USPs against those by using only 2D USPs. It is found that the classification results using 2D-3D USPs were better than those using only 2D USPs. As shown in Figure 7a, the trees in the blue circle were mistakenly classified as impervious ground, and the building boundaries cannot be captured by using 2D USPs (red rectangle). However, both were captured after adding 3D USPs (Figure 7b), indicating that 3D USPs can help distinguish trees from Table 7 shows the comparison of land-cover classification by using 2D USPs and 3D USPs against those using only 2D USPs (details regarding the confusion matrixes of each classifier can be found in Supplementary Material Tables S6-S8). As shown, the accuracies of all classifiers were increased after adding 3D USPs. The OA values of RF, KNN, and LDA classifiers increased by 3.1, 2.3, and 3.5%, respectively. Further, it is found that, after 3D USPs involved in classification, PA values of all land covers increased, but different increasing degrees of varying land covers were observed. Tree yielded the highest increased accuracy with an average increased PA value of 6.7%. The possible reason is that the height information can be used to distinguish trees from grasses. In addition, 3D USPs can increase the PA value of buildings (increased by 3.3%). The possible reason is that the SVF can distinguish building roofs (featuring a high SVF) from impervious grounds (feature a low SVF). In summary, 3D USPs significantly increased the accuracy of land cover mapping. These findings were consistent with the previous studies by [14,64]. Figure 7 compares spatial details of land-cover classification by using 2D-3D USPs against those by using only 2D USPs. It is found that the classification results using 2D-3D USPs were better than those using only 2D USPs. As shown in Figure 7a, the trees in the blue circle were mistakenly classified as impervious ground, and the building boundaries cannot be captured by using 2D USPs (red rectangle). However, both were captured after adding 3D USPs (Figure 7b), indicating that 3D USPs can help distinguish trees from impervious grounds and delineate the building boundaries. In addition, some impervious grounds are mistakenly recognized as buildings (the red circle in Figure 7e), and some trees were poorly captured (blue circle in Figure 7e) using only 2D USPs. In contrast, Figure 7f shows that the impervious grounds and trees were correctly classified using 2D USPs and 3D USPs. impervious grounds and delineate the building boundaries. In addition, some impervious grounds are mistakenly recognized as buildings (the red circle in Figure 7e), and some trees were poorly captured (blue circle in Figure 7e) using only 2D USPs. In contrast, Figure 7f shows that the impervious grounds and trees were correctly classified using 2D USPs and 3D USPs. Figure 8 demonstrates the variable importance ranking of 16 variables in UFZ mapping. As shown, the most critical variable in UFZ mapping was SVF, which reached the highest GI value of 0.138. Besides, SAR also had a high GI value of 0.131 and performed well in UFZ mapping, indicating the importance of 3D USPs in UFZ mapping (note that all the GI values of 3D USPs were higher than 0.06). In addition, TC reached the secondhighest GI value in UFZs (GI value = 0.134), suggesting that tree coverage plays an essential role in UFZ classification. The other 2D USPs (i.e., BC, ISC_G, and GC) were also crucial for UFZ mapping since their GI values were higher than 0.06. Moreover, the spatial patterns of land covers occupied a relatively low importance rank in UFZ mapping, and all their GI values were lower than 0.060. As shown, the most critical variable in UFZ mapping was SVF, which reached the highest GI value of 0.138. Besides, SAR also had a high GI value of 0.131 and performed well in UFZ mapping, indicating the importance of 3D USPs in UFZ mapping (note that all the GI values of 3D USPs were higher than 0.06). In addition, TC reached the second-highest GI value in UFZs (GI value = 0.134), suggesting that tree coverage plays an essential role in UFZ classification. The other 2D USPs (i.e., BC, ISC_G, and GC) were also crucial for UFZ mapping since their GI values were higher than 0.06. Moreover, the spatial patterns of land covers occupied a relatively low importance rank in UFZ mapping, and all their GI values were lower than 0.060. Remote Sens. 2021, 13, x FOR PEER REVIEW 14 of 21 Figure 8. Variable importance ranking for UFZ mapping revealed by the RF algorithm.
Results of Feature Optimization
We also separately tested the variable importance of each category of features. Figure 9a-c show the results of the variable importance of 2D USPs, 3D USPs, and spatial patterns. The most crucial variable in 2D USPs was TC (Figure 9a), which reached a high GI value of 0.32, indicating that tree coverage plays an essential role in UFZ classification. Besides, building coverage yielded a high GI value of 0.25. However, SC and WC showed a relatively low variable importance with GI values of less than 0.02. Figure 9b shows that SVF (GI value = 0.29) and SAR (GI value = 0.28) yielded good performance in UFZ mapping. Notably, all the GI values of 3D USPs exceeded 0.20, suggesting that 3D USPs are indispensable for UFZ mapping. Figure 9c shows that the top 2 important spatial pattern features were: TNNI (GI values = 0.30) and BNNI (GI values = 0.29). In addition, the variable importance ranking of spatial patterns from high to low was TNNI > BNNI > INNI > GNNI > SNNI > WNNI. Figure 10 shows the changes in the OA value with different input variables. As shown, the OA value increased rapidly when the number of variables changed from 1 to 4, and it reached the highest accuracy of 91.9% when the number of input variables was 14. Therefore, the 14 variables of the optimal combination were selected to perform the We also separately tested the variable importance of each category of features. Figure 9a-c show the results of the variable importance of 2D USPs, 3D USPs, and spatial patterns. The most crucial variable in 2D USPs was TC (Figure 9a), which reached a high GI value of 0.32, indicating that tree coverage plays an essential role in UFZ classification. Besides, building coverage yielded a high GI value of 0.25. However, SC and WC showed a relatively low variable importance with GI values of less than 0.02. Figure 9b shows that SVF (GI value = 0.29) and SAR (GI value = 0.28) yielded good performance in UFZ mapping. Notably, all the GI values of 3D USPs exceeded 0.20, suggesting that 3D USPs are indispensable for UFZ mapping. Figure 9c shows that the top 2 important spatial pattern features were: TNNI (GI values = 0.30) and BNNI (GI values = 0.29). In addition, the variable importance ranking of spatial patterns from high to low was TNNI > BNNI > INNI > GNNI > SNNI > WNNI. We also separately tested the variable importance of each category of features. Figure 9a-c show the results of the variable importance of 2D USPs, 3D USPs, and spatial patterns. The most crucial variable in 2D USPs was TC (Figure 9a), which reached a high GI value of 0.32, indicating that tree coverage plays an essential role in UFZ classification. Besides, building coverage yielded a high GI value of 0.25. However, SC and WC showed a relatively low variable importance with GI values of less than 0.02. Figure 9b shows that SVF (GI value = 0.29) and SAR (GI value = 0.28) yielded good performance in UFZ mapping. Notably, all the GI values of 3D USPs exceeded 0.20, suggesting that 3D USPs are indispensable for UFZ mapping. Figure 9c shows that the top 2 important spatial pattern features were: TNNI (GI values = 0.30) and BNNI (GI values = 0.29). In addition, the variable importance ranking of spatial patterns from high to low was TNNI > BNNI > INNI > GNNI > SNNI > WNNI. Figure 10 shows the changes in the OA value with different input variables. As shown, the OA value increased rapidly when the number of variables changed from 1 to 4, and it reached the highest accuracy of 91.9% when the number of input variables was 14. Therefore, the 14 variables of the optimal combination were selected to perform the Figure 10 shows the changes in the OA value with different input variables. As shown, the OA value increased rapidly when the number of variables changed from 1 to 4, and it reached the highest accuracy of 91.9% when the number of input variables was 14. Therefore, the 14 variables of the optimal combination were selected to perform the UFZ mapping (the details of 14 optimal variables can be found in Supplementary Material Table S9). UFZ mapping (the details of 14 optimal variables can be found in Supplementary Material Table S9). Table 8 shows the accuracy of UFZ classification revealed by different classifiers (Details concerning the confusion matrix of three classifiers are shown in Supplementary Material Tables S10-S12). It is found that the RF classifier generated the highest accuracy results of UFZ mapping with an OA value of 91.9%. For example, the results of commercial zone classification show that the highest accuracy was found in the RF classifier (PA value = 88.9%), but low accuracies were observed in KNN (PA value = 64.4%) and LDA (PA value = 80.0%) algorithms. In summary, the RF classifier produced more accurate results and had more tremendous advantages in identifying UFZs than KNN and LDA classifiers do. The results of UFZ classification revealed by RF, KNN, and LDA classifiers are shown in Figure 11a-c. We selected three sub-regions to show their differences in spatial classification details (Figure 11d-f). Figure 11d gives an example of a residential site. We found that the site could be well recognized using the RF classifier; however, it was wrongly classified as an industrial zone using KNN or LDA classifiers. Likewise, Figure 11e gives an example of a commercial site and found that the RF classifier could accurately classify the place; yet, the KNN classifier mistakenly classified it as a residential zone and the LDA classifier wrongly recognized it as an industrial zone.
Results of UFZ Mapping
Moreover, Figure 11f shows an industrial zone with irregularly disturbed buildings and some large trucks. It is found that the RF classifier recognized the site well, but it was incorrectly classified as a residential zone using the KNN or LDA classifiers. Thus, based on the above analyses, the RF classifier was more suitable for UFZ classification. Table 8 shows the accuracy of UFZ classification revealed by different classifiers (Details concerning the confusion matrix of three classifiers are shown in Supplementary Material Tables S10-S12). It is found that the RF classifier generated the highest accuracy results of UFZ mapping with an OA value of 91.9%. For example, the results of commercial zone classification show that the highest accuracy was found in the RF classifier (PA value = 88.9%), but low accuracies were observed in KNN (PA value = 64.4%) and LDA (PA value = 80.0%) algorithms. In summary, the RF classifier produced more accurate results and had more tremendous advantages in identifying UFZs than KNN and LDA classifiers do. The results of UFZ classification revealed by RF, KNN, and LDA classifiers are shown in Figure 11a-c. We selected three sub-regions to show their differences in spatial classification details (Figure 11d-f). Figure 11d gives an example of a residential site. We found that the site could be well recognized using the RF classifier; however, it was wrongly classified as an industrial zone using KNN or LDA classifiers. Likewise, Figure 11e gives an example of a commercial site and found that the RF classifier could accurately classify the place; yet, the KNN classifier mistakenly classified it as a residential zone and the LDA classifier wrongly recognized it as an industrial zone. Table S13). As show firstly, Exp. g yielded the highest accuracy with an OA value of 91.9%, suggesting t advantages of integrating 2D USPs, 3D USPs and spatial pattern features of land cove In contrast, Exp. c produced the worst classification results (OA value = 67.7%), suggesti the disadvantages of using only spatial patterns of land covers for UFZ mapping. S ondly, using combinations of different categories was generally better than using the s gle category (e.g., Exps. a VS. d and Exps. b VS. e). However, it is found that the OA va of Exp. a was higher than that of Exp. f, suggesting that 2D USPs are indispensable for t UFZ classification. Moreover, Figure 11f shows an industrial zone with irregularly disturbed buildings and some large trucks. It is found that the RF classifier recognized the site well, but it was incorrectly classified as a residential zone using the KNN or LDA classifiers. Thus, based on the above analyses, the RF classifier was more suitable for UFZ classification. Table S13). As shown, firstly, Exp. g yielded the highest accuracy with an OA value of 91.9%, suggesting the advantages of integrating 2D USPs, 3D USPs and spatial pattern features of land covers. In contrast, Exp. c produced the worst classification results (OA value = 67.7%), suggesting the disadvantages of using only spatial patterns of land covers for UFZ mapping. Secondly, using combinations of different categories was generally better than using the single category (e.g., Exps. a VS. d and Exps. b VS. e). However, it is found that the OA value of Exp. a was higher than that of Exp. f, suggesting that 2D USPs are indispensable for the UFZ classification.
Results of UFZ Mapping
advantages of integrating 2D USPs, 3D USPs and spatial pattern features of land covers. In contrast, Exp. c produced the worst classification results (OA value = 67.7%), suggesting the disadvantages of using only spatial patterns of land covers for UFZ mapping. Secondly, using combinations of different categories was generally better than using the single category (e.g., Exps. a VS. d and Exps. b VS. e). However, it is found that the OA value of Exp. a was higher than that of Exp. f, suggesting that 2D USPs are indispensable for the UFZ classification. To explore the impacts of 3D USPs on UFZ mapping, we compared the two pairs of experiments (i.e., Exps. a vs. d and Exps. e vs. g) in Figure 13. As shown, the experiments with 2D USPs and 3D USPs produced more accurate classification results than those with only 2D USPs. For example, block a in Figure 13 is an industrial zone with low-rise and large buildings; however, it was incorrectly classified as a park in Exp. a. Similarly, residential zone b was incorrectly identified as a commercial zone in Exp. a. Yet, it is noteworthy that blocks a and b were correctly classified in Exp. d by using 3D USPs. The possible reason is that SAR and FAR can help with UFZ mapping. SARs in industrial zones are higher than those in parks (Figure 1c), and FARs in residential zones are higher than those in commercial zones (Figure 1d). In addition, blocks c and d were correctly identified as the industrial and commercial zones, respectively, in Exp. g, whereas they were wrongly labeled as residential zones in Exp. e that did not contain 3D USPs. Our results verified that the importance of 3D for UFZ classification. To explore the impacts of 3D USPs on UFZ mapping, we compared the two pairs of experiments (i.e., Exps. a vs. d and Exps. e vs. g) in Figure 13. As shown, the experiments with 2D USPs and 3D USPs produced more accurate classification results than those with only 2D USPs. For example, block a in Figure 13 is an industrial zone with low-rise and large buildings; however, it was incorrectly classified as a park in Exp. a. Similarly, residential zone b was incorrectly identified as a commercial zone in Exp. a. Yet, it is noteworthy that blocks a and b were correctly classified in Exp. d by using 3D USPs. The possible reason is that SAR and FAR can help with UFZ mapping. SARs in industrial zones are higher than those in parks (Figure 1c), and FARs in residential zones are higher than those in commercial zones (Figure 1d). In addition, blocks c and d were correctly identified as the industrial and commercial zones, respectively, in Exp. g, whereas they were wrongly labeled as residential zones in Exp. e that did not contain 3D USPs. Our results verified that the importance of 3D for UFZ classification.
Discussion
Previous studies use only 2D features, e.g., spectral, textural, and geometrical features to perform UFZ mapping [11][12][13], whereas our approach first considered the potentials of 3D USPs, i.e., BH, SVF, FAR, and SAR for the UFZ mapping. It is important to introduce 3D USPs for the UFZ mapping since different UFZs yield essentially 3D heterogeneity (Figure 1). Our results verified that 3D USPs could considerably improve the OA values of UFZ and land cover mapping by 6.4% and 3.0% (Table 8 and Figure 12).
To better illustrate the advantages of our approach, we further compare the proposed approach with relevant literature on the data source and evaluated results, i.e., OA value (Table 9). First, our approach obtains more refined evaluation results than most of the considered strategies (OA value = 91.9%). Second, the main idea of UFZ mapping is to fully use the differences in spectral features [12], textual features [64], the spatial pattern of objects [1,11], and 3D USPs among various UFZs. Note that different strategies and indicators may yield extra costs. Our approach needs VHR images and LiDAR point clouds producing additional expenses, i.e., 3D data. Yet, nowadays, accurate 3D data is more accessible, i.e., low-cost LiDAR and terrestrial LiDAR systems are becoming more affordable [14]. Based on the above discussion, our approach is suitable for accurate UFZ mapping due to better accuracy and affordable costs. As stated earlier, this study achieved an accurate method for UFZ classification by considering 3D USPs. However, several limitations need to be noted. First, this study highlights the distinctive role of 3D USPs in UFZ mapping. However, cities composed of complex and various landscapes require more 3D variables, i.e., 3D spatial patterns, to label the objects to obtain accurate UFZ classification results. Second, the UFZs relevant to socioeconomic events, and people usually conduct different activities in various UFZs. The open social data related to human activity (i.e., POI, public transport data, mobile phone positioning data) are valuable for UFZ mapping [11,12,64]). Therefore, studies that explore additional available features and integrate open social data for more accurate UFZ mapping are required in the future.
Conclusions
In this study, we proposed a new approach for UFZ mapping by integrating 2D USPs, 3D USPs and the spatial patterns of land covers. The approach was then verified in Brooklyn, New York City, USA. We evaluated the influence of 3D USPs on the classifications of land covers and UFZs. The conclusions can be drawn as follows.
Our results show that the approach yielded an excellent accuracy of UFZ mapping with an overall accuracy of 91.9%. The RF classifier produced the highest accuracies of both land cover and UFZ classifications. In addition, 3D USPs considerably improved the classification accuracy of land cover by increasing the average OA value of 3.0% and can help the UFZ recognition, which improved the accuracy of UFZ mapping (increasing the OA value by 6.4%). Moreover, we verified DSM was the most critical variable of 44 features in land cover mapping, which obtained the GI value as 0.057. In addition, SVF was the top importance variable for UFZ classification with a GI value of 0.138. Our research provides a new perspective for UFZ mapping and highlights that 3D USPs should be considered in future studies that perform UFZ mapping. | 10,493 | 2021-07-01T00:00:00.000 | [
"Environmental Science",
"Mathematics"
] |
A method for the construction of equalized directional cDNA libraries from hydrolyzed total RNA
Background The transcribed sequences of a cell, the transcriptome, represent the trans-acting fraction of the genetic information, yet eukaryotic cDNA libraries are typically made from only the poly-adenylated fraction. The non-coding or translated but non-polyadenylated RNAs are therefore not represented. The goal of this study was to develop a method that would more completely represent the transcriptome in a useful format, avoiding over-representation of some of the abundant, but low-complexity non-translated transcripts. Results We developed a combination of self-subtraction and directional cloning procedures for this purpose. Libraries were prepared from partially degraded (hydrolyzed) total RNA from three different species. A restriction endonuclease site was added to the 3' end during first-strand synthesis using a directional random-priming technique. The abundant non-polyadenylated rRNA and tRNA sequences were largely removed by using self-subtraction to equalize the representation of the various RNA species. Sequencing random clones from the libraries showed that 87% of clones were in the forward orientation with respect to known or predicted transcripts. 70% matched identified or predicted translated RNAs in the sequence databases. Abundant mRNAs were less frequent in the self-subtracted libraries compared to a non-subtracted mRNA library. 3% of the sequences were from known or hypothesized ncRNA loci, including five matches to miRNA loci. Conclusion We describe a simple method for making high-quality, directional, random-primed, cDNA libraries from small amounts of degraded total RNA. This technique is advantageous in situations where a cDNA library with complete but equalized representation of transcribed sequences, whether polyadenylated or not, is desired.
Background
Almost the entire trans-acting fraction of genetic information is represented by the transcriptome, the population of transcribed sequences in a cell. In terms of complexity, much of the functional transcriptome of eukaryotic cells has traditionally been considered poly-adenylated and translated. In terms of quantity, this poly-adenylated frac-tion constitutes only 3-6% of the total RNA population. For these reasons, experimental representation of eukaryotic transcriptomes was usually done by constructing cDNA libraries from the poly A + fraction of the RNA population. All such libraries do not, by design, represent the entire trans-acting genetic information. They lack representation of non-coding but functional RNAs (ncRNA), e.g. [1], including the abundant but low complexity tRNAs and rRNAs and the increasingly studied populations of various snRNAs, scRNAs, snoRNAs, telomeric RNAs, vRNAs, and microRNAs [2][3][4][5][6][7]. They lack representation of the mRNAs of organelles -mitochondria and chloroplasts -for which polyadenylation may be a signal for degradation [8,9]. They lack representation of mRNAs that are not poly-adenylated or lose their polyA tails, but are nevertheless translated [10][11][12][13]. The recent call for a more systematic examination of the entire transcriptome -RNomics [14] led to much greater interest in ncRNAs and a variety of wet and computational approaches to their identification [reviewed in [15,16]].
Our purpose here was to develop a library construction method that would result in a more complete representation, in useable form, of the transcriptome. We reasoned that self-subtraction [17,18], which equalizes the representation of different sequences through reassociation kinetics, should work as well as poly A + selection for reducing the frequency of the abundant but low-complexity rRNAs and tRNAs, without eliminating them, or any other polyA -RNAs, from cDNA libraries. We describe here the method and show that informative, random-primed, directional, and more completely representative cDNA libraries can be made from partially degraded total RNA.
RNA preparation
Since a method for the production of high-quality, more fully representative cDNA libraries from even difficult samples was sought, three very different RNA sources were chosen: 48 hour zebrafish (Danio rerio) embryos, fieldcollected 36 hour embryonic amphioxus (Branchiostoma floridae), and isolated 3rd instar fruitfly (Drosophila melanogaster) larval brain and eye discs. Total RNA was extracted from these samples with TRIzol (Invitrogen, manufacturer's protocol). Contaminating genomic DNA was completely removed from aliquots of the RNAs by digestion with 1 U RNase-free DNase (NewEngland Biolabs)/ug RNA in the manufacturer's buffer for 30 minutes at room temperature. 1 µg of RNase free glycogen (Roche) was then added and the sample re-extracted with Trizol (Methods-1). Total RNA was partially hydrolyzed in 100 mM (Na)CO 3 , pH 10.0, for 20 min. at 60°C. This resulted in a population of 100-1300 nt RNA fragments (Methods-2).
cDNA synthesis
Double-stranded (ds) cDNAs were synthesized from the partially-degraded total RNA by a standard Gubler-Hoffman replacement procedure. Primer and template were annealed by mixing 5 µg of the partially hydrolyzed, DNA-free RNA with 0.7 µg of the 5'-phosphorylated, directional, 1st-strand primer (DRP1) (Fig. 1A), heating to 70°C for 10 minutes, and quenching briefly on ice. The first strand was synthesized by incubating the annealed mix with 200 units of Superscript II reverse transcriptase (Invitrogen) for one hour at 45°C in the manufacturer's buffer containing 10 mM DTT and 0.5 mM dNTPs. The second strand was synthesized by adding 90 µl H 2 O, 32 µl 5× 2nd strand buffer [100 mM HEPES pH 6.9 (buffered with KOH), 50 mM KCl, 25 mM MgCl 2 , 50 mM (NH 4 ) 2 SO 4 ], 6 µl dNTPs (5 mM each), 1 µl ® NAD (10 mM), 2 µl E. coli DNA ligase (6 U/µg), 6 µl 0.1 M DTT, 4 µl E. coli DNA polymerase I (10 U/ul), and 1 µl E. coli RNase H (2 U/µl). The reaction mixture was incubated for 16 hours at 16°C. Double-stranded cDNAs were then blunt-ended by adding 10 units of T4 DNA polymerase (New England Biolabs) and continuing the incubation for 10 minutes. The blunt-ended, double-stranded cDNAs (yields were between 1.2 and 1.6 µg) were then extracted with phenol:chloroform (1:1), precipitated with ammo-Random directional cDNA synthesis Figure 1 Random directional cDNA synthesis. A. The first strand cDNA primer (DRP1) contains: a 5' phosphorylated buffer sequence devoid of A bases (1), an AscI site (2), a 5 nt random sequence (3), and a 3' T residue (4). By design, priming should begin only at an A in the RNA template. B. The nonphosphorylated lone-linker LL1 consists of the the two complimentary oligonucleotides LL1F and LL1R. It has one blunt end and one non-adhesive staggered end. LL1 can therefore ligate only to one strand of the cDNAs and in only one orientation. The remaining nick in the second strand is removed by preincubating the cDNAs before the first PCR reaction at 72°C for one minute to strip off the non-ligated strand of the linker and regenerate the sequence by extension from the 3' end of the cDNA (lower grey). nium acetate and ethanol and then resuspended in 10 µl of TE. Lone linker LL1 (Fig. 1B) was prepared by annealing equimolar amounts of LL1L and LL1R (Fig. 1B) for 15 minutes at room temperature in TE. 1.2 µg of LL1 was then added to the cDNA and ligated overnight at 16°C (Methods-3). Ligated cDNAs were extracted with phenol:chloroform (1:1) and precipitated with sodium acetate and ethanol and resuspended in 50 µl TE. Fragments smaller than approximately 100 bp were removed by Sepharose CL-6B (Roche) gel filtration. The 250 µl flowthrough peak was collected and the cDNAs precipitated with sodium acetate, 1 µl of glycogen carrier and ethanol and resuspended in 20 µl TE. This >100 bp cDNA population was amplified in a 10 cycle PCR reaction. Each 100 µl reaction contained 1 µg of the primer LL1P, 6 U and 0.6 U of Vent exo -, and exo + thermostable DNA polymerases (New England Biolabs) respectively, 0.5 mM dNTPs, and 1 µl of α 32 P dCTP. An initial incubation at 72°C for one minute generated full-length 3' ends ( Fig. 1B) (Methods-4). The first 7 cycles were 95°C, 55°C, and 72°C for 1 min each. In the final three cycles, the 72°C extension steps were increased to 2, 4, and 8 min. respectively. Amplified cDNAs were purified by phenol/Chloroform extraction and ethanol precipitation.
Self-subtraction
5 µg of amplified cDNA in 10 µl of annealing buffer (0.34 M NaCl, 0.1 M Na(PO 4 ) pH 6.8, 1 mM EDTA) containing 300 ng of LL1F was overlayed with mineral oil, denatured by boiling for 5 min and then annealed at 60°C for one hour (C o t ~ 3 × 10 -4 M·min). 100 µl of binding buffer (0.12 M Na(PO 4 ), pH 6.8) was added to the bottom aqueous phase which was then transferred to another tube. 100 µl of hydrated hydroxylapatite (Methods-5) suspended in 1.2 ml of binding buffer at 60°C was then added and the suspension incubated at 60°C for 10 min with frequent mixing. Bound dsDNA and hydroxylapatite were removed completely by discarding the pellets after two consecutive centrifugations (Methods-6). The 32 P counts of sample aliquots taken before and after subtraction, indicated that between 90 and 97% of the cDNA was bound to the hydroxylapatite and removed. The hydroxylapatite phosphate buffer was replaced with 10 mM Tris, 1 mM EDTA, pH8 (TE), by four consecutive exchanges in Centricon-100 filters (Amicon; Methods-7). The remaining single-stranded (ss) cDNAs were then subjected to a second round of amplification, and self-subtraction as described above, with the exception that the second reannealing time was 24 hours instead of one hour (C o t of ~ 7 × 10 -3 M·min). Approximately 80% of the cDNA amplified after the first subtraction was removed in the second self-subtraction. The phosphate buffer of the second selfsubtraction reaction was exchanged for TE as described. The final double-stranded cDNA population for cloning was generated using the same PCR protocol used for the previous amplifications.
Analysis
The cloning strategy was designed to preserve the orientation of the cDNAs. The first strand directional random primer, DRP1, contained, from 5' to 3', 12 nt of defined buffer sequence, the 8 nt AscI restriction endonuclease sequence, 5 nt of random sequence and a 3' T nucleotide (Fig. 1A). The 5' buffer sequence was included to prevent destruction of the AscI site by the 5'->3' exonuclease activity of E. coli PolI during second-strand synthesis. Using a primer which did not contain the buffer sequence resulted in the frequent appearance of cDNA clones that had lost the AscI site and therefore orientation (data not shown). T was added to the 3' end and As were excluded from the defined primer sequence at the cost of initiating cDNA synthesis at A (Fig. 1A-1, 2, 4) to eliminate an observed DRP1 self-priming artifact during first-strand synthesis.
We tested the procedure by sequencing random clones picked from the different cDNA libraries. In an initial test, 43 clones from the Amphioxus library, 54 from the zebrafish library, and 15 from the Drosophila library were sequenced. Of the 112 clones, 9 clones contained no cDNAs and 4 clones yielded unreadable sequence data. The remaining sequences were compared to the NCBI non-redundant combined protein, combined DNA, and combined EST sequence databases, using the TBlastX search procedure [20]. In the self-subtracted libraries, 59 sequences matched translated RNAs, ESTs, or gene exons (e < 10 -5 ) and 87% of these were in the forward reading frame. This fraction is within the 80-95% observed in directional cDNA libraries made by standard approaches using oligo dT primed cDNAs [21], indicating that the directional cloning strategy functions properly. that cloning was, as designed, directional. 16 matches to genomic DNA that were not part of any known functional or encoding RNA were identified. Since the first strand priming technique was not completely random and the directional primer included additional 5' GC-rich sequence (Fig. 1, parts 1,2), we were concerned that priming might be biased towards GC-rich regions. We examined this using clones which identified perfect matches in the sequence databases. From the 55 such clones we collected the 1100 nucleotides of sequence data lying within 20 nt of the 3' end of the random primer, i.e. under the non-random 5' portion of DRP1. These sequences were 49% G+C, indicating that the additional GC rich sequence in the 5' portion of the first strand cDNA primer did not seriously bias the priming process. 74% of the sequenced clones in the self-subtracted libraries contained an A at the sixth nucleotide following the AscI restriction site in the primer, indicating that cDNA synthesis mostly initiated, as designed, at an A (Fig. 1A).
Since these data provided an initial indication that the library construction technique was working as expected, we randomly picked and sequenced another 200 colonies from the Danio rerio library. All the Danio rerio and Drosophila sequences were then compared to the NCBI nonredundant nucleotide sequence database (Gen-Bank+EMBL+DDBJ+PDB sequences -but no EST, STS, GSS, environmental samples or phase 0, 1 or 2 HTGS sequences) using TBLASTX and to the Ensembl unmasked zebrafish genome (Zv6, Ensemble genebuild, Aug. 2006) using BLASTN set to oligo sensitivity. Of the 257 sequences, 10 sequences showed inconclusive or no matches. The remainder matched genomic DNA to better than 97%, indicating that the genomic source contexts of almost all cDNAs were identified (see Additional file 1). These were then categorized with respect to known or hypothetical transcription units (Fig. 2). 245 of the 247 matches are unique in sequenced population, coming from different loci. Although the source material for these libraries was partially degraded total RNA, 70% of the sequences matched known or hypothetical mRNAs. 17% of these were novel, being either first matches to hypothetical mRNA sequence or unknown splice variants of previously identified mRNA transcription loci. 16% of sequences matched hnRNA sequence likely removed during mRNA splicing -which does not necessarily mean that they are non-functional. Eight matches to known or hypothetical ncRNA loci were identified. These included one match to a large rRNA transcription unit and 5 matches to known or hypothetical ncRNA transcription units encoding miRNAs. A total of 17 matches were outside either known or predicted transcription units. One of these exhibited discontinuous matches to very closely linked same-strand sequence in the genome, indicative of splicing, and suggested a novel unpredicted transcription unit. A total of 10 clones, some in known or predicted introns and others outside of predicted transcription units exhibited significant (e < 10 -9 ) matches to a large number of loci (>50) dispersed throughout the genome. These may represent undescribed interspersed repeat elements (LINES and SINES) in the zebrafish.
In 227 of the matches, orientation of the cDNAs could be assigned relative to known or hypothetical loci. Of these, 80% were in the sense orientation. Of the antisense matches, only four could clearly be considered artifacts of cloning. 13 antisense clones were structurally normal, exhibiting the appropriate transitions from cDNA to vector sequences, but were spliced according to their sense strand match. Although this suggests that the antisense orientation is an artifact of cloning, such precisely matching non-canonical splicing has been reported for at least one genuine antisense transcript [22].
We turn finally to the question of the extent of subtraction. The library data clearly show that the cDNA popula-Characterization of equalized cDNA libraries Figure 2 Characterization of equalized cDNA libraries. 257 cDNA sequences from the Danio rerio and Drosophila melanogaster self-subtracted cDNA libraries were used as Gen-Bank and Ensembl database query sequences. Diagnostic properties of near identity (>97%) matches were collected and scored. The labeled columns represent the following library characteristics: mRNA -the fraction of clones matching known or hypothetical mRNA sequences. mRNA (novel) -the fraction of clones representing the first reported matches to hypothetical mRNA sequence or previously unknown splice variants of known mRNA transcription units. hnRNA -the fraction of clones matching sequence that is likely removed during mRNA processing, eg. intron. ncRNA -the fraction of clones matching known or hypothetical non-coding RNA transcription units. interspersed repetitive -the fraction of clones matching more than 50 dispersed loci in the genome with probabilities of e < 10 -9 . gDNA -the fraction of clones matching genomic DNA outside of any known or predicted transcription unit. no match -the fraction of clones that did not match any known cDNA and could not be assigned an origin in the genome. sensethe fraction of matches in the sense orientation of known or hypothetical transcription units. antisense -the fraction of matches in the antisense orientation of known or hypothetical transcription units.
tion was subtracted enough to remove most representation of the abundant rRNAs and greatly increase the representation of the different mRNA species. However, equalization was not complete. Since the transcriptome contains all intron sequences, which greatly exceed the mRNA population in complexity (at least in the zebrafish), a complete subtraction would be expected to contain far more intron than mRNA sequence. This is not the case in the libraries examined, where mRNA sequences outnumber intron sequences by a ratio of almost 7:1. We addressed the issue of equalization within the mRNA population in two different ways. We determined the frequency of β-actin sequence, a particularly abundant mRNA, by probing the 200-300 bp zebrafish library with antisense oligonucleotide. Only one β-actin clone was identified among 5 × 10 4 colonies. Since β-actin represents approximately 1% of the mRNAs of embryonic zebrafish heart [23] and is relatively constant across tissues under physiological conditions [24], the representation of this abundant mRNA has been reduced in the equalized libraries. In the second approach, we compared mRNA representation among our sequences with mRNA representation in an unsubtracted cDNA library constructed from 72 hour embryonic zebrafish heart [23]. In the heart library, there are 11 mRNAs that are represented at frequencies between 0.4% (the ADP/ATP carrier protein) and 4.3% (the ribosomal proteins). Of these abundant mRNAs, two are represented once in our collection of 240 zebrafish cDNA sequences and the rest were not represented at all. There were two other mRNA loci that were represented twice among the zebrafish self-subtracted sequences, one encoding Ankyrin 3 and the other encoding Ran-binding protein 2. Neither of these two loci were represented in the 5000 sequences from the heart library.
There are several points of caution in using the procedure. In self-subtraction, it is the complexity of contaminating sequence, rather than its abundance, that may determine the extent of contamination in the final library. Even a few picograms of genomic DNA has a greater complexity than the entire functional RNA population. Although the importance and extent of non-coding transcripts is under revision [25] we did not observe any advantage to increasing the extent of self subtraction, either by including additional rounds of self subtraction or by increasing the C o t value of the subtractions. Clones chosen from a fourth library that had gone through an additional round of selfsubtraction and amplification did not match any mRNAs (data not shown.) The goal of the described procedure was to have as much as possible of the entire RNA population represented in a library, not to have it represented in full length copies. Some functional non-coding RNAs are almost certainly excluded from the libraries. We have used the procedure successfully on cDNA fragments as small as 100-200 bp. Self subtracted libraries of smaller sequences may be possible if the phosphate concentration in the hydroxylapatite binding buffer is decreased and only small sequences are included in the reaction. The smallest functional processed RNAs, e.g. the 22 nt microRNAs, will not be represented, although their primary transcripts certainly are. Since the procedure functions via inter-molecular reassociation kinetics, sequences with extensive self-homology (snap-back) will be removed independent of their abundance in the RNA population. The procedure will not work well for most full length transcripts since amplifying sequences larger than even 500 base-pairs in size is more difficult and more sensitive to reaction conditions. Even in our target size range of 200-300 bp there are undoubtedly some sequences that do not amplify well by PCR and are therefore underrepresented. We believe that the more complete representation possible with short sequences offsets the disadvantage of not cloning full length copies, especially as the sequences are more than long enough to identify significant matches in the databases, encode many complete protein domains, and to serve as probes for cloning or assembling full-length cDNAs by the many other methods available.
Conclusion
The simple procedures described here permit the construction of high-quality, directional cDNA libraries using small amounts of degraded total RNA. Since the method does not distinguish between polyA + and polyAspecies, all RNAs above 100 nt may be represented, including polyA -mRNAs and many functional but non-translated RNAs. The procedure should prove valuable in situations where more complete representation of the transcriptome is desired.
Methods
The following procedural details are relevant to the successful use of the method. They are cited in the Results and Discussion text as (Methods-#).
(1) The removal of genomic DNA contamination and purity of sample at the RNA extraction step is crucial. The self-subtraction procedure is more sensitive to the complexity of a contaminant than its abundance. Tissues should be processed in either disposable plasticware or baked glassware.
(2) Partial hydrolysis of the RNA is an important step. The method is PCR-based and PCR efficiency becomes increasingly sequence and reaction-condition dependent as the size of the template increases. To avoid this source of bias, short cDNAs are used. Producing these by shortening the RNA template through random hydrolysis has the added advantage of reducing the formation of intramolecular secondary structure that can interfere with priming and reducing the expected bias for the 5' ends of full-length molecules.
(3) The quality of the cDNA syntheses and subsequent PCR amplification steps was evaluated by tracing the reactions with 32 P-dCTP and then determining the incorporated fraction to estimate the yield and agarose gel electrophoresis to gauge the quality of the reaction.
Oligonucleotide quality was found to be important. We used HPLC grade oligonucleotides for the library construction and self-subtraction. DRP1 is 5'P-GCTCGCCCT CGCGGCGCGCCNNNNNT. The lone-linker LL1 is the annealed product of LL1F and LL1R 5'CTGGCTCGCCCTCGCGGATCCG (LL1F) AGACCGAGCGGGAGCGCCTAGGC 5' (LL1R) (4) LL1P is 5'CTGGCTCGCCCTCGCGG. The correct amount of template and the number of cycles were determined empirically. Using the reaction conditions described, a yield of between 1 and 5 µg was typical. A yield greater than this may be stressing the reaction, resulting in partial reaction products and other artifacts.
(5) The hydroxylapatite was de-fined prior to use by resuspending the powder in a large volume of annealing buffer, allowing the matrix to settle by gravity, and removing the still slightly cloudy upper phase. This was repeated 3-5 times. 1 ml of hydrated hydroxylapatite binds approximately 100 µg of DNA.
(6) Failure to completely remove all hydroxylapatite results in binding of ssDNA as the phosphate concentration drops during the next buffer exchange step.
(7) Depending upon the extent of annealing, the concentration of remaining ssDNA may be low enough to result in a significant fraction binding to the filter membrane. To prevent this, the centricons should be passivated in 5% Tween-20 for one hour followed by four rinses with ddH 2 O. | 5,301.8 | 2007-10-09T00:00:00.000 | [
"Biology"
] |
Deep architectures for long-term stock price prediction with a heuristic-based strategy for trading simulations
Stock price prediction is a popular yet challenging task and deep learning provides the means to conduct the mining for the different patterns that trigger its dynamic movement. In this paper, the task is to predict the close price for 25 companies enlisted at the Bucharest Stock Exchange, from a novel data set introduced herein. Towards this scope, two traditional deep learning architectures are designed in comparison: a long short-memory network and a temporal convolutional neural model. Based on their predictions, a trading strategy, whose decision to buy or sell depends on two different thresholds, is proposed. A hill climbing approach selects the optimal values for these parameters. The prediction of the two deep learning representatives used in the subsequent trading strategy leads to distinct facets of gain.
Introduction
Stock price prediction has been an evergoing challenge for economists but also for machine learning scientists. Different approaches have been applied over the decades to model either long-term or short-term behavior, taking into account daily prices and other technical indicators from stock markets around the world. During the last years, deep learning has also entered the stock market realm, particularly through its specific technique to model long-term data dependencies, the long short-term memory network (LSTM).
The present paper puts forward a new data set holding long-term data from 25 companies enlisted at the Bucharest (Romania) stock market and appoints a LSTM architecture to predict the subsequent close price. A convolutional neural network (CNN) with 1D (temporal) convolutions is also employed to produce an estimation of the next day close price value. The predictions of the two models are then used in a trading scenario. The difference between the current close price and its estimated value for the following day is measured against two thresholdsdepending on which of them is higher-in order to pursue a BUY or SELL action. The values for the two thresholds are found via the heuristic approach of hill climbing (HC -showing different measures of gain from the simulated transactions of shares for the 25 companies-indicate that the two deep learning approaches achieve a different way of learning. Their predictive prices make one excel at the total amount of money gained in the transactions over all companies and the other at the largest number of companies with profit after transactions. The recent research in the area of stock price prediction with deep learning methodologies makes use mainly of LSTM architectures and takes into account several predictors. In this light, besides putting forward a new data set, the aim of the proposed framework is to show that also 1D CNN can equally and faster help predict the next move of the stock price, as already demonstrated in literature [1] for different sequential tasks. Also, this study shows that there is no need to further complicate the current learning problem with more predictive variables or further feature extraction from these, as demonstrated in comparison to the methodology of a different study in section 3.2. There is also a practical aim to this study, regarding the subsequent trading simulation on the base of the deep learning predictions. The state of the art either employs a deterministic scheme (the literature entries using deep learning) or very complex evolutionary algorithms for trading rule generation (the papers using other machine learning techniques for prediction). In opposition, the approach presented in this paper parametrizes the BUY and SELL rules and determines the optimal variables through a simple HC heuristic.
The paper is structured in the following manner. Section 2 begins with the description of the novel data collection and draws the conceptual premises for the subsequent modelling. The chosen deep architectures and the proposed heuristic-driven search strategy are outlined against the state of the art. The experimental part, found in section 3, is composed of the exploration of the best parameter settings, the results of the two deep models and the effect of their predictions within the HC-powered trading strategy on the hypothetically generated profit. The discussion is concluded in section 4, by also advancing directions for further improvement.
Materials and methods
The new data employed in this study is described in detail both as concerns its content and the means to access it. The methodology is outlined with respect to the state of the art in deep learning for stock price prediction. The trading approach driven by the estimations of the deep learners and the parametrization of HC is presented in comparison to the classical strategies of buy & hold and Bollinger bands.
Stock price data from the Bucharest Stock Exchange
The data used refers 25 companies listed under the Romanian stock market. The trading indicators are the number and value of transactions, the number of shares, the minimum, average and maximum prices, the open and close price. The data has been collected since October 16 1997 until March 13 2019. The period for which each business is listed is different, as triggered by the date of the enlisting of the company, as well as the cease of activity at the other end. Fig 1 shows the available history for each of them. The overall trend for the close price as well as an indication of the periods in which this was recorded are illustrated in Fig 2. The collection is publicly available at the following link: https://doi.org/10.6084/m9.figshare.7976144.v1.
The prediction in this study targets only the close price. Taking into account its values from day t − N, where N is a given number of days back, the task is to forecast its value at day t + 1. The training data set is therefore comprised of N independent variables c t−N , . . .., c t and a dependent variable c t+1 for every day t 2 [N, M], where M is the number of days for training.
The validation data set considers the samples for the days t 2 [M − N, M + P], where P is the number of days for validation; the last N days from training are also referred, as a means to predict the first day of validation. The test collection contains those records with t 2 [M + P − N, M + P + Q], where Q is the number of days taken for testing and the last N days from validation are taken for the same reason mention before.
According to the terminology in [2], the window length is thus equal to N, the rolling window is equal to 1 day and the predict length also to 1 day. While the last two parameters have implicitly set values, the optimal period for the window length N has yet to be established during experimentation.
Deep learning for stock prediction. State of the art
In what follows, we present the state of the art in literature as concerns only prediction from stock data and not from additional information (such as textual references). Also, we solely review those entries where a DL model was used.
The study [3] builds a DNN on the TAQ data set of the NYSE for the Apple 1 min high-frequency stock pseudo-log-returns. The prediction of the average price is constructed on the basis of additional features: current time, price standard deviations and trend indicators provided for the selected window size. Based on the test predictions, a trading strategy is proposed in order to practically assess the performance of the constructed model.
The paper [4] models high-frequency (at 5 min) intraday stock returns from the Korean market through a DL neural network, both on raw data and with the support of additional feature extraction techniques (principal component analysis (PCA), autoencoders (AE), restricted Boltzmann machines). The study also tests the complementarity between the DL and the classical autoregressive model.
The article [5] uses a LSTM layer regenerating each trading day to predict the stock price movement from information recorded at 15 min from the Brazilian exchange. The attributes consist of the open, close, high, low and volume data and other technical indicators: future price, trading volume, intensity of the current movement tendency. The problem is transformed into a classification formulation, where there is a class indicating the increase of price and another its decrease. A trading operation is also simulated on the base of the predictions for the close price and compared to baseline strategies.
In [6], a cascade of three methods is used to predict daily stock moving trends from several indexes, each taken from the markets in one of the following countries: China, India, Hong Kong, Japan and USA. A wavelet transform first eliminates the noise from the input time series. Stacked AE then produce more general features and these are given to a LSTM. The initial features consist again of those of the trading data (open, high, low and close prices), of the technical variables (moving averages, commodity channel index, momentum indexes) and of the macroeconomic attributes (US Dollar index, inter-bank offered rate). Following the predictions of the model, a trading strategy is implemented on the index future contracts and compared to a traditional buy & hold procedure. The paper [7] decomposes daily prices from Yahoo! Finance trading data into frequency components through a state frequency memory recurrent NN and combines the discovered patterns into the eventual stock price prediction.
In [8], 36 variables referring the daily open, high, low, close prices and technical attributes (momentum, moving average, psychological line among others) from the Google stock multimedia are modelled by a combination of a (2D)2PCA and a DNN.
From the S&P 500 ETF data, the study [2] takes into account the log returns for close price and trading volume as the stock time series variables, and stock chart images as additional information, all taken at 1 minute. A LSTM models the numerical attributes and a CNN mines the image data, while the features extracted from the two models are finally fused at the fully connected layer. A trading simulation is performed with the predictions on the adjusted close price of the proposed model against other strategies, including the classical buy & hold one.
Proposed LSTM and CNN architectures for stock data modelling
On the base of its conceptual formation and, as also seen before in literature, the first obvious choice for learning stock price series data is a LSTM. On the other hand CNN with 1D convolutions can also model temporal data. Moreover, it has been recently shown [1] for other sequential tasks, that CNN has many advantages over recurrent models: it creates a hierarchical representation over the inputs through its multiple layers, which identifies relationships within the data, and, computationally speaking, it allows a faster training.
The LSTM is a recurrent neural network that is able to implicitly learn long-term dependencies in the data [9]. This is possible through the structure of the repeating module that has several special components interacting with each other: a cell state and the three types of layers that control it-the forget gate that is in charge of knowledge that will be discarded from the cell state, the input gate that manages the information that will be kept, and the output gate that regulates what will be the output of the module.
The design of the LSTM selected for the current problem consists of the subsequent layer flow: several consequent LSTM layers, each followed by a Dropout layer, and a final Dense layer. The input data has the shape (M, N, 1), where M is the number of instances in the training collection, N is the window length and 1 is the number of indicators that are taken into account, i.e. in this case only the close price.
CNN can model data given sequentially over time through a special type of convolutional layer. The 1D layer of the CNN convolves the neural weights of each kernel with the input over a single temporal dimension. The input is thus taken with the shape (N, 1) for every of the M training samples given in this problem. The architecture of the CNN is disposed in the following manner: a sequence of 1D convolution layers of chosen kernel size and depth, each followed by a ReLU transfer layer for nonlinearity and a Max Pooling one, with Dropout in between convolutions. A Dense layer takes the flattened output of the last convolution, Dropout is inserted again and a final Dense one gives the output of the network.
Dropout layers were used in between the specific ones for regularizing both deep learning approaches [10], in order to prevent the overfitting generated by the available sample size.
A first step in evaluating the models is by fine tuning their parameters with the goal of minimizing the mean squared error (MSE) in a validation step. Subsequently, the architectures can be applied on a test set.
Post-learning trading strategy with hill climbing
A trading simulation is necessary in order to practically assess the performance of the learning models in identifying the trend patterns and leading to profit. Optimal rules for the trading strategy must thus be established. In the current work, we construct such rules by appointing two threshold parameters and determining their optimal value through a HC procedure.
There are other entries in the literature that make use of heuristic methods within trading strategies, but directed towards the popular evolutionary algorithms (EA). These are subsequently outlined. At the parameter level, the optimal values for variables of a filter rule are generated through EA in [11], namely the percent of price movement over which buy or sell decisions are taken, the number of hold days, number of delay days and the number of previous days taken into account. At the architectural level, the paper [12] comparatively studies two rule-based trading schemes involving several stock indicators: a neural network whose weights are found by EA and a genetic programming approach for generating the trading rule tree. The two trading strategy encodings are evaluated through different means incorporated into a single fitness function: profit in comparison to the buy & hold approach, penalty for those strategies that follow the buy & hold one, relation between profit and loss.
EA are indeed known for their optimization potential, flexibility and local optima circumvention. For the task at hand, however, there is no need for the complexity (in architecture and running time) of the EA. HC is a simpler, faster, yet efficient candidate for the rule parametrization desired herein.
Given P is the predicted close price for time t + 1 and C is the actual close price at time t, the pair of rules for the buying and selling operations are defined as in Eq (1).
where � 1 and � 2 are the threshold parameters that will be heuristically determined. In other words, the operation is BUY when the subsequent close price is predicted to be higher than the current close price above a certain threshold � 1 , and it is SELL when the former is below the latter under a possibly different threshold � 2 .
In our scenario, there is only one share per company involved, so SELL may occur only if the share is owned and BUY takes place only if it is not already bought.
The HC procedure starts with a random pair of values for � 1 and � 2 and, by means of mutation, it reaches a better configuration of parameter values. Mutation does not allow the values for the two variables to go lower than 0. The evaluation measures the gain on the validation set.
The gain is measured for a period [t 0 , t f ]. At time t 0 we implicitly have the initial investment in a share (BUY). A sequence of SELL-BUY-SELL-BUY-. . . follows, where the decision is taken according to the rules in Eq (1). The final operation is either a BUY or a SELL one. At day t f , the sum of gains and losses derived from the sequence of operations is computed as in Eq (2), where n is the number of operations, {s 1 , s 2 , . . ., s n } are the selling transactions, {b 1 , b 2 , . . ., b n−1 } are the buying operations, b 0 is the BUY at time t 0 , b n is the last potential BUY and C k the close price at operation k.
The gain is defined as in Eq (3), where S is the sum of gains and losses obtained from trading as in Eq (2). If the user still has the share (last operation was b n ), then the price of the share at final time t f is added to the sum of gains and losses during trading.
Finally, a gain in percents is measured as shown in Eq (4).
The same procedure goes for the computation of profit for the test period. Two step-bystep scenarios are illustrated in Fig 8 in subsection 3.3.
The considered heuristic-based rule trading system is compared to two baseline methods, i.e. buy & hold and the Bollinger bands approach. These are considered as benchmark due to their traditional tactical use in practical stock trading and to their frequent employment in comparisons to newer methods, as found in literature (see section 2.2 on state of the art).
The buy & hold approach simply performs a BUY operation at time t 0 and a SELL one at time t f ; the gain is directly the difference C t f À C t 0 .
As concerns Bollinger bands, it is the bounce strategy that is employed for comparison. If the close price reaches the upper band then it will bounce back to the middle area (lower price), so the action should be SELL. In reverse, if the close price touches the lower band, the operation should be BUY, since it will bounce to the middle area (higher price), as well.
Results and discussion
Experiments target three directions: 1. Parametrization. On the one hand, the window length is a parameter of the time series task that controls both accuracy and processing time, and its value must thus strike a balance between the two. On the other hand, deep methodologies are known for their sensitivity to parameter tuning for every problem.
2. Prediction power. The accuracy of estimation of the LSTM and CNN is measured in comparison.
3. Trading simulation. The effect of the deep learning prediction is illustrated in a trading scheme, based on thresholds that are additionally heuristically determined.
The validity and utility of the framework will be consequently measured from multiple facets: 1. Choice of the optimal window length. The performance of the model concerning this parameter will be measured by comparing the MSE results on the validation data when a minimal number of reference days back (30) is used with the best ones achieved when the length is taken from a manual search. A Wilcoxon-Mann-Whitney statistical test will be employed to compare if the difference in performance between the results of 30 runs is significant as to justify an increased running time.
2. Architecture of the deep learners. The architecture of the LSTM and CNN will be manually selected to handle all performance issues, i.e. overfitting, running time and accuracy. When the difference in mean accuracy from 10 repeated runs between two architectures is not significant, it will be opted for the least complex one.
3. Comparison of deep prediction results. The mean accuracy for MSE obtained by the LSTM and the one of the CNN will be compared. Standard deviations with the minimum and maximum will also be calculated. Should the mean results be close, then the Mann-Whitney U test will determine if the difference in the MSE results from 10 validation runs is significant. Besides these, the MSE of the method in [6] is considered for comparison. 4. Gain measurement from a trading scenario. The gains in percents obtained from a simulated trading on the test data, on the base of the deep learning prediction results and the heuristic trading scheme, will be illustrated as bar and box plots. They will be compared in mean, minimum, maximum and standard deviation with the ones of the traditional trading tactics of buy & hold and Bollinger bands. The gain amounts derived from the two deep learners will also be put against those of the classical schemes. Again, if the mean results are very close, the same statistical test will confirm if there is a significant difference or not.
In order to conduct an objective statistical analysis of the performance measures, the data was split into training, validation (for parameter tuning and HC application) and test sets and the Monte Carlo cross-validation with 10 repeats was performed when reporting the accuracy results. Training contains 60% of the records for each company, validation takes the following 20% and test holds the final 20% instances.
It is well-known that deep learning presents a big problem as concerns the large running time necessary for training. The running times for the CNN and LSTM favor the former by a large extent. In average over all 25 companies, the building of the CNN model, when an architecture with two convolutional layers is used, takes 35.22 seconds. Comparatively, the corresponding value for the LSTM, having also 2 specific layers, takes 114.29 seconds and 160.91 seconds respectively for one with 3 such layers. The reduced running time for CNN allows for a more detailed parameter tuning. As the evaluation of one parameter setting for a company takes around 35 seconds, considering all 25 would lead to a running time of almost 15 minutes and this would not allow for the evaluation of too many combinations. Consequently, one company is next selected as representative for parameter tuning, and the discovered settings will be further used for every company, although it is natural that the found values are not necessarily optimal overall.
Parameter tuning
One important parameter that directly affects the running time regardless of the employed model is the window length. The company randomly chosen is the one with the symbol 'BRD' and different window lengths are tried between 30 and 120. For each case, a CNN is used to train the model, then it is applied for the validation set and the MSE is computed. The first plot in Fig 3 illustrates the average validation MSE as obtained after 10 repeated runs of the CNN model. Naturally, smaller values are preferred. The second plot shows the training running time in seconds when the same window length is varied.
The best result in the first plot corresponds to the window length 57. Since there is a high running time gap between 30 and 57 in the second plot, the two settings are further more thoroughly investigated through statistical means. The question that arises is whether the MSE results derived from the two window lengths are significantly different. The CNN is repeated for 30 times for each of the two window lengths, the average validation MSE results are of 0.41 and 0.36 and the two MSE vectors are next compared via a Wilcoxon-Mann-Whitney statistical test. The calculated p-value of 0.26 indicates that the two settings do not lead to statistically different results and the smaller window length of 30 is further preferred in all experiments because of the lower running time.
The next step of parametrization concerns the deep approaches. There are two general architectures tried for the CNN: with two and with three convolutional layers. For each of them, the convolutional kernel parameters and the dropout are tuned. Firstly, the approach with two layers is considered and various values for parameter combinations are tried. In all the cases there are 10 repeated runs considered, where the averages are computed, and the data used for this purpose consists only of the stocks for symbol 'BRD'. Subsequently, the presented results are calculated by fixing the parameter combinations two by two and the outcome is obtained by averaging the results for all the other varying involved parameters. The numerical values that appear in the labels for Figs 4, 5 and 6 refer to the number of the layer where that specific parameter appears. Lighter colors correspond to better results. Fig 4 shows the combinations between the pairs of values for filters, kernel sizes and dropout rates for the CNN with two layers. A result that consistently remains better is achieved for the larger number of filters and the lower value of the dropout rate for the first layer. The numbers of filters for the second layer leads to better results also for the value 128, although the gap between the two options is not as high as it was for the first layer. The kernel sizes are taken 5. As mentioned above, the dropout rate for the first layer is taken 0.3, while the one for the second layer appears to conduct to better outputs for 0.6. Alternatively, a CNN with three layers is also tried. The parameters found to be appropriate for the version with two layers are adopted in this case, except for the dropout rates. The number of filters for the third layer is taken 256 and the kernel size is set to 3. Fig 6 shows the interaction between the 3 dropout rates considered, each with 3 possible values. As observed, the nuances indicate better results for small amounts in first two dropout rates and generally good for the last one in all cases, with only a preference for 0.4 in the middle plot, where the rates for the first and third layers are confronted.
By comparing the best results illustrated by means of lighter nuances (and observing the minimum values from the color bar on the right) in Figs 4 and 5 to the ones in Fig 6, it can be concluded that the CNN model having two layers performs similar or even better in terms of MSE values. Another argument for keeping the 2 layer model for the next experiments is given Deep learning for long-term stock price prediction with a heuristic-based trading strategy by the running times, as this takes 44.26 seconds to build it, while training the one with 3 layers lasts 106.05 seconds.
The parameters of the LSTM were chosen manually with 50 units in each specific layer and a dropout of 0.2. The high running time did not allow a similar thorough parameter tuning for this model. However, alternatives with 2 and 3 layers respectively were tried. Naturally, the higher the number of layers, the greater the running time is. When it comes to MSE obtained over all the 25 companies, the variant with 3 layers had an average of 0.92, while the simpler alternative led to a considerably higher average of 2.39. Accordingly, the 3 layers option is preferred next. Table 1 illustrates the MSE average results over all 25 companies on the validation set as obtained by the CNN and LSTM from 10 repeated runs. The running times (in seconds) that Deep learning for long-term stock price prediction with a heuristic-based trading strategy were necessary for training the two deep learning models are in the second row of the table and those for building the HC used in the next experiment are given in the last row. The HC uses for the evaluation of the potential solutions the previously saved models of CNN and LSTM. It can be seen that the CNN needs approximately 4.5 less time than the LSTM, however at a somewhat high MSE difference. Deep learning for long-term stock price prediction with a heuristic-based trading strategy Although the difference between the mean MSE for the 2 models appears to be rather substantial, the gap between them is not that large for all companies. Each bar in Fig 7 illustrates the ratio between the MSE for CNN and LSTM for each company in turn. Although for 'ALT' and 'TLV' the discrepancies appear to be massive, they are obtained in both cases from the digits after the decimal point. The differences in the mean values in Table 1 come from 'TGN', which gives the maximal values for both and also has the highest close price values in Fig 2, and from the next largest values in both cases, for 'EBS', where CNN reaches a MSE of 29.39 and LSTM one of 7.07, respectively.
Prediction results
In order to compare the proposed deep learning architectures to the state-of-the-art methodology in [6], the latter was implemented and applied in the current study. In short, there are 11 more technical indicators used, i.e. moving average convergence divergence, commodity channel index, average true range etc; then a stacked autoencoder is employed to generate deep high-level features which are fed to a LSTM to forecast the close price. The obtained results were better than the ones from CNN from Table 1, but weaker than the ones of the pure LSTM. The mean MSE was of 7.69, while the corresponding average running time 1 was of 263.63 seconds, considerably larger than the one of the LSTM, which was already relatively high.
Since the predictors in [6] led to promising results in the indicated research paper, the same features are additionally used for the LSTM with the aim of forecasting the close price. The mean MSE reached a value of 1.55, higher than the 0.92 reported in Table 1 and a running time for training of 298.62 seconds. Since both results are weaker than the current LSTM that uses the previous 30 values only for the close price, the multiple predictors are omitted next.
Example trading scenario
The predictions of the deep learning approaches are subsequently used in a trading simulation in order to effectively test the performance of the models.
Table 1. Comparison between CNN and LSTM architectures in terms of MSE on the validation set, running time for training them (time 1) and running time for the HC when using the previously saved CNN and LSTM models (time 2).
All results are reported in average over 10 repeated runs of the method. Deep learning for long-term stock price prediction with a heuristic-based trading strategy All trading scenarios start from the assumption that one share is bought at the beginning of the trading period and then, depending on the strategy, it can be sold at some point in time, bought again later and so on. If at the end of the test period the share is still owned, it is evaluated at this final time as if it was sold just then. For each company, the gain is computed by subtracting the initial value of the share. The sum, the average, as well as minimum and maximum over gains are reported for all companies in order to assess the overall efficiency. The sum over all the initial values for every one share of all companies is of 555 RON (Romanian currency), i.e. 117 euro at the time of writing, so that can be viewed as the initial investment.
Measure
The HC algorithm in charge of setting the optimal thresholds for buying and selling runs for 60 iterations. 5 restarts are used in order to have higher chances of escaping local optima. The candidate solution encodes the � 1 and � 2 in (1). The two values are bounded to the interval given by 0 and one quarter of the difference between the maximum and the minimum value for the close price over the validation period. Surely, the high value of this interval is different from one company to another. A mutation with normal distribution is used and the mutation strength is set as the quarter of the maximum value � i can have, i 2 {1, 2}. The two values found by the HC are the ones deemed as very suitable for the validation set, meaning that the gain on this period is maximized. There are 2 HC versions, one that uses as fitness evaluation the CNN approximation of the close price and one that employs the LSTM for the same purpose. They are further denoted as HC-CNN and HC-LSTM. Subsequently, the solutions reached for each company in turn are used for the test period and the gains are computed. For overcoming the stochastic nature of the HC results, the reported outcomes are averaged over 10 repeated runs. Fig 8 illustrates the close price and the predicted one for the test period for two companies that should be illustrative of the process. The top plot shows the behavior of the CNN model for 'TEL' with the thresholds found as optimal by the HC in validation, which are � 1 = 0.38 and � 2 = 2.31. The red x and the green + signs indicate when the stock share is sold or bought, respectively. Although, when comparing the input share price with the output, the price decreases from the beginning of the test period till the end, the proposed model reaches a gain in percentage of 14.46%. In fact, for this company the CNN-based model is the only one that has a gain, while the others register losses. The LSTM has a positive outcome for this company only in 1 out of 10 HC runs (when it was a gain of 7.67%), while the CNN registers gains reaching from 2.09% up to 15.85% over the 10 repeated runs.
The second plot uses the thresholds discovered by HC-LSTM for the company with the symbol 'RRC', which are very close to zero-� 1 = 3.3E-05 and � 2 = 4.4E-04. The predicted prices and the found thresholds determine a high number of transactions and lead to a gain of 131.17%. Nevertheless, the share price is very low and such a great gain in percents leads in fact to a smaller gain in actual money value as opposed to the real money amount obtained, corresponding though to a smaller gain in percents, of the CNN in the top plot. Table 2 shows the mean, median, sum, minimum, maximum and standard deviation for different types of results over the tried scenarios and for all the 25 companies. Two more financial quality measures are added to the results, i.e. annualized return (AR) and sharpe ratio (SR). The first one shows the gain earned by an investment over a given time period, but Deep learning for long-term stock price prediction with a heuristic-based trading strategy reported to one year. Eq (5) illustrates how AR is calculated; SP and BT stand for selling and buying price respectively, while days represent the number of days between BP and SP. Eq (6) shows how the SR is computed. DR denotes daily return, while avg and std represent average and standard deviation, respectively. AR shows what an investor would earn over a period of time with respect to the annual return, without any indication of the volatility for the investment. Its output can reach very high values when the stock is bought at a low price and sold at a significantly higher price in a very short period of time. This observation becomes obvious, if we see in Eq (5) that the exponent is obtained by dividing 365 to the number of days between the buying and the selling dates: the smaller the number of days between the two dates, the higher the power exponent becomes and, consequently, the higher the entire AR. This is the reason why the AR values for the buy & hold in Table 2 are always small (the two dates represent the first and the last days). For the other three options, various inspired speculations led to very high values for some of the companies. For this measure, the median is more relevant than the mean. For this measure, HC-LSTM appears to point to the most inspired times for buying and selling.
In contrast to the AR, SR takes into account the volatility of the investment returns. Since the volatility is generally high for the companies in the current data set, and it represents the denominator, the values for SR are generally low. The highest value is registered for HC-CNN both for mean and median, in line with the results holding the number of times where gains were positive. Deep learning for long-term stock price prediction with a heuristic-based trading strategy Deep learning for long-term stock price prediction with a heuristic-based trading strategy HC-LSTM one has the largest variability. The highest median is also obtained by the HC-CNN model, fact that can be better observed in the first plot from Fig 11. The box plots with the largest parts of the inter-quartile ranges on the positive side are the ones corresponding to the HC-CNN and HC-LSTM models. What is more, most of the box that defines the inter-quartile range for HC-CNN is on the positive side. This also corresponds to the fact that it has the highest number of times out of 25 when the gains on the test period are positive, as shown on the last row of Table 2, as well as in the third plot from Fig 11. It comes as a surprise however that the largest gain in value (i.e. in RON) is achieved by the HC-LSTM, as indicated in the middle plot from the same Fig 11, despite the smaller number of times in which the gains were positive. This can be explained by the largest mean value for the gains on the test data in percents in Table 2, but also by the fact that the better results are obtained for companies that have more valuable (i.e. higher priced) shares. The highest value for the AR measure also indicates that HC-LSTM speculated well the times for buying and selling. Naturally, the buy & hold and Bollinger scenario do not need a validation period, since they do not use any automatic learning or calibration of the models. In this respect, their gains by applying the scenarios on this period would only provide further tests of the models. An interesting difference between the validation and test periods is given by the fact that the buy & hold strategy reaches a positive outcome for 16 companies for the former, as compared to only 13 in the period used for test, despite a relatively similar percentage gain of 9.91% and 9.76%, respectively. This indicates that there are more companies that had their shares with values rising in that period and also puts the buy & hold strategy on the top position among the two deterministic options discussed for this period with a gain of 124.99 RON. The Bollinger model gets only 74.93 RON. The HC-CNN and HC-LSTM are certainly far better in this period, since the goal of the HC was to find proper values for the thresholds by having access to the validation data. Their gains are of 202.65 and 239.38 RON, respectively, with 22 and 24 companies for which the gain is positive.
Conclusions and future perspectives
The paper puts forward two deep learning models for stock price prediction in the Romanian market, alongside this new data collection. The estimations of the two architectures are used within a trading strategy. The optimal amount of difference between the close price of the current day and the predicted price for the next day towards deciding BUY and SELL operations Deep learning for long-term stock price prediction with a heuristic-based trading strategy is determined through a HC procedure. It is interesting to see that the two deep networks lead each to a distinct facet of the gain within the trading simulation: while the LSTM has a higher gain in terms of the sum of money earned, the CNN has a higher number of times gained than lost for the 25 companies watched. Also, while HC-LSTM reaches a better value for the annualized return, HC-CNN leads to a better sharpe ratio.
Future work aims to target enhancements both at the level of the learning and that of the optimization. Other types of recurrent architectures, such as echo state networks, can be tried. The deep models (LSTM, CNN) can be more elaborately parametrized as in [13] and ensembles similar to [14] can be constructed with traditional machine learning techniques, i.e. SVM [15] or random forest [16]. The landscape of multiple stock indicators can be examined by EA [17] in order to work with several selected predictors. Also, the parameters of the trading strategies can be appointed in an evolutionary fashion. More complicated rules can be evolved by multiple population EA [18] in comparison to the genetic programming and neuro-evolution approaches in the state of the art. Additional textual knowledge that may predict the rise or drop of the stock price of a company triggered by its appearance in the media [19] are planned to be also investigated as auxiliary input. Also, more sophisticated scenarios like allowing to buy more stocks or having a sum and investing in various companies (not necessary all of them) in order to increase the overall gain will be also tried within the future directions. | 9,566.2 | 2019-10-10T00:00:00.000 | [
"Computer Science",
"Business"
] |
Large current difference in Au-coated vertical silicon nanowire electrode array with functionalization of peptides
Au-coated vertical silicon nanowire electrode array (VSNEA) was fabricated using a combination of bottom-up and top-down approaches by chemical vapor deposition and complementary metal-oxide-semiconductor process for biomolecule sensing. To verify the feasibility for the detection of biomolecules, Au-coated VSNEA was functionalized using peptides having a fluorescent probe. Cyclic voltammograms of the peptide-functionalized Au-coated VSNEA show a steady-state electrochemical current behavior. Because of the critically small dimension and vertically aligned nature of VSNEA, the current density of Au-coated VSNEA was dramatically higher than that of Au film electrodes. Au-coated VSNEA further showed a large current difference with and without peptides that was nine times more than that of Au film electrodes. These results indicate that Au-coated VSENA is highly effective device to detect peptides compared to conventional thin-film electrodes. Au-coated VSNEA can also be used as a divergent biosensor platform in many applications.
Background
Nanoelectrodes have many advantages such as a high current density and a large active area for chemical and biological sensing; the higher sensitivity of nanoelectrodes compared to bulk electrodes for the detection of various biological species has already been demonstrated [1][2][3]. Because of their unique shape with nanoscale diameters and micrometer-scale lengths that allow for an easy fabrication of device architectures, one-dimensional nanostructures, including carbon nanotubes and nanowires, are promising materials for nanoelectrodes [4][5][6][7]. Moreover, their possibility in highly complementary metal-oxidesemiconductor (CMOS) devices and their biocompatibility afford silicon nanowires (Si NWs) further advantages as nanoelectrodes [8][9][10][11][12].
Si NW sensors have been studied thus far with respect to applications as field effect-type transistor. These sensors are fabricated by first growing Si NWs on a substrate, dispersing them into solution such as deionized water or ethanol, depositing the nanowires on a pre-patterned substrate, and finally making metal contacts for electric current signaling. The functionalization of NWs with biological ligand molecules follows subsequently. In this type sensor, the electric field generated by the binding of charged biomolecules at the gate electrode acts as a gate voltage and changes the electric current, which, in turn, enables the sensing of biomolecules [13][14][15]. However, such sensors show low sensitivity because of the weak current difference induced by the low field effect of charged target biomolecules. These problems could be resolved by the electrochemical type sensor, especially using the vertical nanowire electrode array. In the electrochemical type sensor, the current occurs by the ion molecules in medium. In this situation, high current difference can be expected from the vertical nanowire electrode array when the biomolecules are attached at the nanosized electrode tip because the current path of the charged ion in a medium will be completely blocked. To exploit the potential of this approach, such vertical-type nanoelectrode array needs to be fabricated and biologically functionalized. In particular, the fabrication of such electrode array using a combination of a bottom-up approach that can provide the best suited bio-nanomaterials serving as building blocks of a sensor and a top-down approach that can provide a reliable device-fabrication process could be very useful for the mass production of high-performance biosensors for many applications.
In this paper, we fabricated Au-coated vertical Si nanowire electrode array (VSNEA) by combining a vaporliquid-solid (VLS) process for the growth of nanowires and CMOS process for the fabrication of electrodes using these nanowires. The feasibility of biomolecule detection of such VSNEA was verified by functionalization with peptides having fluorescent probes and detecting the corresponding signals.
Growth of nanowires and fabrication of electrode array
Si (111) substrate was deposited by 0.1 vol.% 3-aminopropyl triethoxysilane (APTES) solution in absolute ethanol for Au colloid coating. Subsequently, the substrate was immersed in the Au colloid solution having colloids with a diameter of 250 nm. After washing with deionized water and drying, the substrates were placed in a low-pressure chemical vapor deposition (CVD) chamber. Si NWs were synthesized on the Si substrate by a VLS process with the assistance of the Au colloids serving as catalyst at 550°C under a high-vacuum condition with SiH 4 gas as a precursor and H 2 as a dilution gas. For the fabrication of the electrodes, Si NWs grown on the substrate were coated with an Au electrode and a SiO 2 passivation layer. Thereafter, the SiO 2 passivation layer was selectively etched out to expose the Au tips at the top of the nanowires by using CMOS process.
Peptide functionalization of VSNEA
Au-coated VSNEA was immersed in a tissue culture well (BD Falcon™ 24-well Multiwell Plate, BD Biosciences, San Jose, CA, USA) and a mixture of peptides (5 μL of 2.13 μM) and water (700 μL) was added. The reaction was continued for 12 h. Afterward, the substrate was washed multiple times with distilled water and dried in air.
Microscopy and CV measurements
Peptide-decorated Au-coated VSNEA was observed by bright field and fluorescence microscopy (Olympus BX51, U-HGLGPS, Olympus Corporation, Shinjuku, Tokyo, Japan), as well as by scanning and transmission electron microscopies (SEM and TEM, respectively). For the electrochemical characterization of VSNEA, cyclic voltammograms (CV) measurements were carried out using an IVIUM Com-pactStat system in standard three-electrode configuration. VSNEA, an Au film electrode, and Ag/AgCl were used as working, counter, and reference electrodes, respectively, of the three-electrode configuration. CVs were performed with a 100-mM K 3 Fe(CN) 6 solution as redox material at a scan rate of 20 mV/s.
Results and discussion
In this study, vertically grown Si NWs were used as building blocks for Au-coated VSNEA. It requires vertical growth, as well as control of the areal density of NWs achieved by applying CMOS processing for the electrodes, thereby additionally performing a detection optimization. To control the growth of Si NWs, we utilized a VLS process using Au colloidal nanoparticles as catalyst [16,17]. To thoroughly disperse the Au nanoparticles, Si substrate was coated with a thin 3-aminopropyl triethoxysilane (APTES) layer. It is well known that Au nanoparticles have a negatively charged surface in aqueous solution and APTES has a positively charged functional amino group [18]. Therefore, the charged surface of the Si substrate deposited with an APTES layer can be used to establish a charge interaction with the Au nanoparticles, thus enabling to achieve a fine and homogeneous dispersion of 250-nm Au nanoparticles on the Si (111) substrate (See Additional file 1: Figure S1 of supplementary data). Figure 1a,b shows an SEM image of the Si NWs, indicating that they are well dispersed and grown vertically on the Si (111) substrate. The length of the NWs is approximately 8 μm. Figure 1c is the TEM image of the Si NWs. As shown in this figure, the diameter of the Si NWs was approximately 300 nm, thus following the size of the Au nanoparticles used as catalyst in the VLS process (See Additional file 1: Figure S2 of supplementary data). The selected area electron diffraction (SAED) pattern presented in the inset of Figure 1c indicates that the Si NWs are single crystalline and grew along the [111] direction. Figure 1d, a high-resolution transmission electron microscopy (HRTEM) image, also shows the single-crystalline nature of the Si NWs with a thin native oxide layer with a thickness of less than 2 nm. The metal globule at the end of NWs clearly indicates that the Si NWs were synthesized by the VLS mechanism. These results indicate that vertical Si NWs are epitaxially grown on the Si (111) substrate with a single-crystalline structure, and their NWs density can be well controlled by controlling the concentration of Au nanoparticles. Figure 2 shows the scheme of the Au-coated VSNEA fabrication procedure with vertical Si NWs and the corresponding SEM images of the individual fabrication steps. A Si NW array was grown on the Si (111) substrate (shown in Figure 2a,f), and a first SiO 2 passivation layer was coated on the surface by chemical vapor deposition (CVD) process. The SiO 2 layer was then selectively etched to expose the Si NWs, as shown in Figure 2b,g. For this purpose, poly (methyl methacrylate) (PMMA) resist serving as a SiO 2 protecting layer was coated on the substrate and the side walls of the NWs were selectively etched by dipping them into buffered oxide etch (BOE). Next, an Au film was deposited on the surface of the NWs to attach the peptides that can induce biological functionalization, as well as to act as an electrode within a sensing device, as shown in Figure 2c,h. A 50-nm-thick Au film was deposited by a sputtering process using a patterned steel use stainless (SUS) mask at a slow rate of 5 nm per minute to avoid roughening of the surface or thermal damage under an argon atmosphere (shown in Figure 2c,h). Finally, the same type of passivation using a SiO 2 layer was performed once more (shown in Figure 2d) by CVD process. The secondary SiO 2 passivation layer was then selectively etched to expose the Au layer only at the tip of VSNEA, as shown in Figure 2d,i. Figure 2e shows a cross-sectional scheme of the finished Au-coated VSNEA.
To verify the feasibility of Au-coated VSNEA for biomolecules detection, peptides having a fluorescent probe, carboxyfluorescein, were attached the surface on NWs. Peptides are well suited as the bioreceptor component of various biosensors and biomedical applications because they can selectively and tightly bind to a large diversity of biomolecules, including DNA, RNA, and protein targets through modification of their multi-linking polychains [19,20]. It is also well known that sulfur makes a strong covalent bond with Au in a wide range of temperatures for a variety of solvents [21]. Therefore, a 13-mer peptide was synthesized using solid phase peptide synthesis (SPPS) with standard Fmoc protocols. The peptide was designed to incorporate multiple polar and charged residues (glutamic acids and lysines) in order to increase its water solubility and to prevent nonspecific adsorption of the peptide onto the VSNEA specimen.
A fluorescent probe, carboxyfluorescein, was attached to the N-terminus of the peptide to visualize the peptide binding to the NWs in a noninvasive manner (Figure 3a). Cysteine that contains a thiol group was placed at the C-terminal of the peptide for the formation of thiol-Au bond. To verify that the peptides bind selectively to the Au tips of VSNEA, they were immersed in an aqueous solution of the peptide, incubated for 12 h, and subsequently washed thoroughly with distilled water. The number of peptide molecules in the solution was in large excess relative to the calculated surface area of the Au tips.
Investigations by bright field and fluorescence microscopy were carried out to confirm that peptides were selectively attached to the active Au tip of VSNEA. Figure 3b shows the scheme for the fluorescence analysis of the peptidedecorated Au-coated VSNEA. As shown in Figure 3c, the vertical NWs are observed in the form of small dots in the bright field microscope image. Because this VSNEA specimen has not been treated with peptides, its fluorescence image did not reveal any sign of green fluorescence originating from the fluorescein molecules (Figure 3d). In stark contrast, the VSNEA specimen treated with peptides shows bright green fluorescence light of fluorescein (Figure 3e,f). Colocalization of dots in the upright microscopy images of VSNEA, as evidenced by a comparison between the bright field and the corresponding fluorescence images, strongly suggests a selective binding of the peptides to the Au tips of VSNEA without any nonspecific binding to other areas of the substrate (Figure 3a,b). Calculations considering the volume of the peptide molecule and the surface area of Au show that multiple peptide molecules can be attached to a single Au tip. Such multiple additions of peptides onto Au-coated VSNEA are expected to be useful in exploring biological multivalent interactions.
Au-coated VSNEA was further characterized by cyclic voltammogram (CV) measurements. Figure 4a,b shows an electrolyte bath attached VSNEA device for CV measurements and the schematic of the three-electrode configuration of the CV measurement system using Au-coated VSNEA as a working electrode. Figure 4b and Additional file 1: Figure S3 (See Additional file 1: Figure S3 of supplementary data) show the CV data of Au-coated VSNEA and an Au film device, respectively. The Au film device was prepared by coating of 50-nm-thick Au on a flat Si substrate and characterized under identical conditions for comparison. As in the case of many other electrochemical nanoelectrodes, the CV of VSNEA shows a steady-state electrochemical current behavior caused by an enhanced mass transport and fast electron-transfer kinetics owing to its nanosize and unique shape [4][5][6][7]22,23].
The detection difference with respect to biomolecules was analyzed by CV measurements of the VSNEA and the results are summarized in Tables 1 and 2. Here, Q bare is the coulomb of the electrode before peptide functionalization, Q peptide is the coulomb of the peptide-decorated electrode, A is the total electrode area, and J is the current density of the electrode. As shown in Table 1, the current density of Au-coated VSNEA was approximately 7,000 times higher than that of an Au film electrode prior to functionalization by peptides, whereas after peptides decoration, the current density of the VSNEA was still approximately 3,000 times higher than that of the peptide-decorated Au film electrode. The much higher current density of Au-coated VSNEA is because of the critically small dimension and the vertically aligned nature of the NWs affecting the mass transport and the electron transfer. The mass transport and electron concentration of Au-coated VSNEA electrode are higher than that in the case of an Au film electrode because of the cylindrical shape of the nanowire (See Additional file 1: Figure S4 of supplementary data). Au-coated VESNA showed large differences in the current with and without the functionalization of peptides. As shown in Table 2, the current of Au-coated VSNEA with peptides decreased by 57.8% compared to the VSNEA without peptides. This large current difference (δ) was almost nine times more than that of the Au film electrodes. The results indicate that the two electrochemical type sensors can detect the target molecules by a current difference that is ascribed to the flow variations of ion molecules in medium. Furthermore, the results represent that the current path through the Au nanoelectrodes in the electrochemical type VSNEA is effectively blocked by the attached peptides and thus it can effectively detects the peptides. The high current density of VSNEA also ascribe to the large difference in current with and without peptides. These outcomes indicate that VSNEA can detect peptides better than film-type electrodes. It should be noted that peptides can be used as ligands to detect various other biomolecules and thus VSNEA could be used as divergent biosensor platforms in many applications.
Conclusions
We vertically grew Si NWs and fabricated Au-coated VSNEA for the peptide detection. The VSNEA was selectively functionalized by multiple peptides to verify their application potential for biomolecules detection. We obtain a steady-state electrochemical current behavior and a high current density from peptides-functionalized Au-coated VSNEA because of the critically small dimension and the vertically aligned nature of this device. Furthermore, VSNEA showed a large current difference with and without peptides that was nine times more than that of Au film electrodes. These results indicate that VSNEA is highly effective to detect peptides, compared to conventional thin-film electrodes. Therefore, VSNEA could be used as divergent biosensor platforms in many applications.
Additional file
Additional file 1: Figure S1. SEM images of 250 nm Au nanoparticles dispersed on the Si substrates. (a, b) 1 (Au nanoparticles) : 3 (DI water) solution. (c, d) 1 (Au nanoparticles) : 6 (DI water) solution. Figure S2. Scanning TEM images of the Si NW. (a) Energy dispersive scanning point of the Si NW. (b) Energy dispersive spectrum for each scanning point. Figure S3. (a) Schematic image of CV measurements system using Au film as a working electrode. (b) CV measurements of the Au film electrode in 100 mM K 3 Fe(CN) 6 . All CV measurements are taken with a scan rate of 20 mV/s. Figure S4. Schematic images of ion and electron transfer mechanism in the CV measurements system using Au film (a, b) and VSNEA (c, d) as a working electrode. | 3,746.4 | 2013-11-26T00:00:00.000 | [
"Chemistry",
"Physics"
] |
Predicting the Individuals ’ job satisfaction and determining the factors affecting it using the CHAID Decision Tree Data Mining Algorithm
DOI: http://dx.doi.org/10.24018/ejers.2019.4.3.1169 6 Abstract As the general attitude of the individual about what he does, job satisfaction is the result of individual perceptions from the workplace and the factors and conditions in it; it is also influenced by his personality traits. Meanwhile, investigating the job satisfaction is of great importance in advanced societies. The present study aimed to assess the job satisfaction in the United States and evaluate the hypothesis of existence of job dissatisfaction and the factors affecting it in the studied sample. The various social data, related to job satisfaction and collected by the National Opinion Research Center of the United States, are used in this study. The sample is consisted of different people including male and female samples from nine different states in the United States. For the purpose of this study, the patterns of data were discovered, and factors affecting job satisfaction were identified using the CHAID decision tree data mining method. Finally, it was found that a small percentage of people are dissatisfied with their job.
I. INTRODUCTION
Job satisfaction is a type of attitude according to experts, and they are defined it as the individual's attitude toward the job and simply put, how someone feels about his/her job and its various aspects.
As one of the major issues in organizational literature and one of the most important and most common research subjects in the field of organizational behavior studies [1].more than 5,000 papers and theses have been prepared on job satisfaction [2].There are more than 12,400 studies until 1991 only on the subject of job satisfaction [3].Also, more than 6300 doctoral dissertations have been available in the International PhD dissertation abstract, and more than 3350 research papers have been published in this field [4].
According to the recent studies, personality is one of the components that affects job satisfaction; however, there are inadequate results about the nature, characteristics and traits leading to job satisfaction [5].
Three general categories can be speculated for the factors relating job satisfaction, each of which includes sub-factors [6].
A) Prerequisite factors that are essential for the formation of job satisfaction.Four categories of factors contribute to the formation of job satisfaction.In this case, job satisfaction is considered as a dependent variable that influences 4 factors.
Job characteristics include diversity, identity, tasks importance, autonomy, feedback, and job enhancement, rolerelated characteristics (role conflict and role ambiguity), group and organizational characteristics including group cohesion, community quality, commitment to participation, work pressures, Inequality in workplace, organizational structure, organizational justice, organizational climate and organizational support, the relations with the leader includes the structure of leadership intimacy, leadership considerations, leadership productivity, leader' punishment and encouragement behaviors, and exchange relations between members and leader.
B) Correlational factors of job satisfaction Some organizational concepts, correlated with job satisfaction include organizational commitment, life satisfaction, job pressure and stress, job engagement and and job attitudes.
C) the effects and consequences of job satisfaction These factors include three categories of motivation, civic behavior and latency behaviors such as absenteeism, turnover, and turnover intention; and performance is placed in the last category.
A combination of different categories of these factors, each of which constitute a fields in the database under study, has been used for the purpose of this study.At first, the data mining tool was used to prepare this data and then, using the CHAID decision tree in Clementine 12.0 software, factors affecting job satisfaction were investigated in the sample individuals.
The remainder of this research is structures as follows: a brief discussion of the data mining and decision tree is presented at first.The research field have been examined in the second part.The third part is assigned to the research method and tools.Finally, the extracted results are expressed and suggestions will be given in the fourth section.
A. Data-mining
The data mining process analyzes databases and macro data for the purpose of discovery and dissemination of knowledge using mechanized and semi-mechanized methods.Such studies and explorations can be considered as the same extension and continuation of ancient and comprehensive Predicting the Individuals' job satisfaction and determining the factors affecting it using the CHAID Decision Tree Data Mining Algorithm knowledge of statistics.The distinction is in scale, the breadth and variety of fields and applications, as well as the dimensions and size of today's data that requires machine learning, modeling, and training methods [7].
The term data mining refers to extract hidden information, or patterns and relationships in a large volume of data in one or more large databases.
Data mining also refers to the use of data analysis tools to discover patterns and valid relationships which have not been known until now.These tools may include statistical models, mathematical algorithms, and machine learning methods.Data mining is not exclusive to data collection and management and also includes information analysis and forecasting.Data-mining methods have been widely used in previous studies.In [8], researchers used a data-mining based method for protection of power systems.Many simulation results were conducted and the capability of the data-mining method was verified.In [9,10,11] data mining and operation research method such as Data Envelopment Analysis (DEA) can be addressed through diverse computational, and combinatorial models.
The exploratory applications that examine text or multimedia files to extract data consider various parameters including: Association rules: Patterns upon which an event is connected to another, such as connecting a pen purchase event to the paper purchase event Order: the pattern for analyzing the sequence of events and determining the event that leads to other events such as the birth of a baby and the purchase of milk powder.
Classification: Identifying new patterns such as the concurrency of glue and folder purchases.
Clustering: Discovering and documenting a set of unknown facts such as a geographic location of buying a branded product.
Forecasting: Discovering the patterns by which an acceptable prediction is presented of future events such as the membership in a sports club by attending sports classes.
Data-mining methods use a given dataset and try to find out the relationships between data to perform an accurate classification.Then, experts in the field can interpret the output.As an example, in [12] an intelligent method using the specialists' knowledge is used for generating electricity from waste energy in industry.This method not only shows a great example of intelligent techniques for engineering applications, but also provides the industry with a costeffective approach.
B. Decision Tree
A decision tree is used as a tool for illustrating and analyzing decisions in decision analysis where the expected values of the competition are alternately calculated.A decision tree has three types of nodes: 1) Decision node: that typically represented by a square 2) Random node: that specifies with the circle 3) End node: Determined by a triangle.
The Fig. 1. shows the schematic view of a decision tree.
Very compact and in the form of a diagram, a decision tree can draw attention to the problem and the relationship between events.The square represents the decision, the ellipse represents the activity and the diamond represents the result.Decision trees and decision diagrams have advantages over decision support tools: 1) Simple understanding: Every person can learn the way of working with the decision tree, with little study and training.2) Working with macro and complex data: The decision tree can easily work with complex data and decide on them.3) Easy reuse: If the decision tree is made for an issue, different instances of that problem can be calculated with that decision tree.4) Ability to combine with other methods: The decision tree result can be combined with other decisionmaking techniques and obtain better results.5) Fig. 1.A schematic view of decision tree The NORC clients include government agencies, educational institutions, a variety of charities and nonprofits, and private companies (firms).Most of the studies in the center are in national level, although some local-to-international projects are also conducted in this center.NORC creates a unique value for its customers by developing innovative and innovative solutions.In fact, NORC combines advanced technology with quality social science research with a generic benefit.The GSS project is one of the characterized research conducted by NORC since 1972, which the twenty-eighth time is conducted by 2010 in the United States.In the last third of the century, the GSS has had a profound impact on social change and the complex growth of the American community [13].
GSS is the largest project funded by the National Science Foundation's Sociology Program.In addition, the project is a source for the US Census Center, which is often the source of information analysis in the social sciences.
As said, to insert images in Word, position the cursor at the insertion point and either use Insert | Picture | From File or copy the image to the Windows clipboard and then Edit | Paste Special | Picture (with "Float over text" unchecked).
The authors of the accepted manuscripts will be given a copyright form and the form should accompany your final submission.
III. RESEARCH METHOD AND FINDINGS
The data used in this study are prepared from the GSS research database in 2010.The database includes data on 891 men and 1153 women in 9 different states in the United States, within a SPSS file with 2044 records.This database contains 790 attributes (columns), which used for 40 attributes defined hypothesis.As stated above, the research hypothesis was to find out the amount and factors affecting the individuals' dissatisfaction in the sample and a decision tree algorithm was used for this purpose.To find patterns in the job satisfaction of individuals in the sample, the Clementine 12.0 data mining software was used and the CHAID decision tree model was used to implement this technique.
The research aimed to investigate the effective patterns on the prediction of job individuals' dissatisfaction among in this database.Therefore, the target field in this database is "job satisfaction in general".Apart from the missing values described below, this field is categorized into four categories and coded with four numbers (considering that the English equivalent of each of the fields and their respective categories, are put in the text next to their Persian translation because they are exactly in the software and the database, so that their nature is different from the English equivalent of the keywords mentioned in the footnote).
First class: Complete satisfaction with "all satisfied", code 1.
Third floor: Relative dissatisfaction with "not too satisfied", code 3.
Fourth Floor: Dissatisfaction with "Not at all satisfied", code 4.
At first, the values of the "No Answer", "Do not Know" and "Unacceptable IAP" responses from the target field (job satisfaction) that are not effective in predicting the final goal based on the research hypothesis, were considered as "missing values" with data preparation methods.Then, the records required to explore the job satisfaction model were converted to 1165 records.After making the flow in the Clementine 12.0 software and implementing the CHAID decision tree model, the obtained tree was analyzed.A view of the tree can be seen in Figure 2, in which, there are nodes covering the highest percentage of people with relative job dissatisfaction or complete dissatisfaction (job dissatisfaction).
As shown in Fig. 2, the first node of the tree contains the information for the target field, labeled as Node 1.The information below the "Category" term are the categories related to the values of that specific field (as previously stated, the categories selected on the basis of the hypothesis are considered, and the rest of the categories are considered as missing.the percentages of each responses to each category is shown in general.As we see, the highest percentage of response was to the category of "complete satisfaction", in general.The column 'n' indicates the number of records (individuals) who responded to the corresponding classification.Obviously, this value has a direct relationship with the percentage amount.Finally, the aggregate values are specified in the last row.The cumulative amount of records is 1161.The reason of the reduction of this number is the deletion of records responded to the missing values.The first branch of the parent Node (Target Node) in this tree is related to the field of "Respondent proud to work for employer" which indicates that this field has a greater impact in predicting the target field in this model.As we see in Figure 2, the following fields affect job dissatisfaction: "How likely respondent make effort for new job next year", "How often does respondent find work stressful?""The relation between management and employees" And "respondent proud to work for employer".It is concluded, from the general schema of the tree, that a small percentage of people are dissatisfied with their jobs in general and in comparison with those who are satisfied with their job and this percentage is even much lower for a state of complete dissatisfaction.So that only one node shows the frequency of this issue.However, the same number has emerged due to the existence of various factors that the decision tree has discovered for its patterns.It can be seen, according to the decision model made by the CHAID decision tree, that among all the records (individuals) that are located in the Node, 25 people are strongly disagree with the question of proud to work for employer.Although very few (less than 5%), however 14 out of 25 people, namely 56% are completely dissatisfied with their jobs (lack of satisfaction).
84 respondents were disagree in response to the question of proud to work for employer, of which 33 people (40 percent) have "relative dissatisfaction" with their job (Not at all satisfied); Node 3.Among the 84 people in Node 3, who are disagree with the sense of proud to work for employer, 63 people, namely 75% are looking for a new job next year; Node 13, and among them, 63 people, 31, namely about 50% of them had "relative dissatisfaction" with their job and 30% are "dissatisfied" Among the 63 people looking for a new job, 45 people see their jobs as stressful in most cases.45 people, see their job stressful in most of the times.27 out of 45 people, namely 60% have 'relative dissatisfaction' and 31% are "dissatisfied"; Node 28.
On the other side of this tree, there are people who "agreed" in response to a sense of proud to work for employer; Node 2. These people make up about 53% of the total (613 records out of 1161 records), among which, 34 people considers the relationship between the management of the organization and the employees as "Quite Bad" and "Very Bad"; Node 12 and among these 34 people, 21 people knows their job "always" and" often" stressful.Out of these 21 people, about 50% (10 people) face a "relative dissatisfaction" with the job.
Compared to other tree fields and branches, the percentage of "relative dissatisfaction" or "dissatisfaction" in the nodes affected by other factors is almost zero on average (although in most of the nodes it is more than zero) and other factors have not had much effect on these dissatisfactions, and most people have enjoyed "full satisfaction" and "relative satisfaction" in these nodes.Therefore, these nodes are neglected.
The Gain table in Clementine software shows the various values in tree nodes with different titles based on the selected target category.This table is shown in Figure 3 for reviewing the target category of "dissatisfaction."Fig. 3. Gain table for the target category of Dissatisfaction" in the target field of job satisfaction IV.CONCLUSIONS AND SUGGESTIONS According to the results, it can be concluded that the lack of individuals' pride to work for their employer can have a great impact on the full job dissatisfaction.Similarly, searching for a new job can have a direct relationship with job dissatisfaction and on the other hand, the stressful work environment has had a huge impact on job dissatisfaction classifications.Also, it can be concluded from the last observations at the previous stage that the lack of a good relationship between organization management and employees may lead to job dissatisfaction.
In general, according to the research results, a small percent of people are dissatisfied with their job and this percentage is even much lower in case of people with complete dissatisfaction.However, this number can be reduced to zero in advanced societies, such as the United States by reducing the problems caused by the results obtained in the previous section.
Case Study: the National Opinion Research Center of the United StatesFarhad Sheybani
Fig. 1 .
Fig. 1.A schematic view of decision tree C. The Research field The National Opinion Research Center (NORC), founded in 1941, has headquarter located at the University of Chicago.It also has offices in Chicago, Washington, Bethesda, Maryland and Berkeley, California.Furthermore, the staff of this organization are located throughout the United States.The NORC clients include government agencies, educational institutions, a variety of charities and nonprofits, and private companies (firms).Most of the studies in the center are in national level, although some local-to-international projects are also conducted in this center.NORC creates a unique value for its customers by developing innovative and innovative solutions.In fact, NORC combines advanced technology with quality social science research with a generic benefit.The GSS project is one of the characterized research conducted by NORC since 1972, which the twenty-eighth time is conducted by 2010 in the United States.In the last third of the century, the GSS has had a profound impact on social change and the complex growth of the American community[13].GSS is the largest project funded by the National Science Foundation's Sociology Program.In addition, the project is a source for the US Census Center, which is often the source of information analysis in the social sciences.As said, to insert images in Word, position the cursor at the insertion point and either use Insert | Picture | From File or copy the image to the Windows clipboard and then Edit | Paste Special | Picture (with "Float over text" unchecked).The authors of the accepted manuscripts will be given a copyright form and the form should accompany your final
Fig. 2 .
Fig. 2. The view of the decision tree created by the CHAID algorithm | 4,187.4 | 2019-03-02T00:00:00.000 | [
"Psychology",
"Sociology",
"Computer Science"
] |
Molecularly Imprinted Polypyrrole-Modified Screen-Printed Electrode for Dopamine Determination
This paper introduces a quantitative method for dopamine determination. The method is based on a molecularly imprinted polypyrrole (e-MIP)-modified screen-printed electrode, with differential pulse voltammetry (DPV) as the chosen measurement technique. The dopamine molecules are efficiently entrapped in the polymeric film, creating recognition cavities. A comparison with bare and non-imprinted polypyrrole-modified electrodes clearly demonstrates the superior sensitivity, selectivity, and reproducibility of the e-MIP-based one; indeed, a sensitivity of 0.078 µA µM−1, a detection limit (LOD) of 0.8 µM, a linear range between 0.8 and 45 µM and a dynamic range of up to 350 µM are achieved. The method was successfully tested on fortified synthetic and human urine samples to underline its applicability as a screening method for biomedical tests.
Introduction
Dopamine is a crucial monoamine neurotransmitter in the mammalian central nervous system.Neurotransmitters can be classified into excitatory and inhibitory depending, respectively, on their ability to promote the creation of a nerve impulse in the receiving neuron or to inhibit the impulse; dopamine belongs to both classes and has crucial roles in different body areas [1][2][3].
It is primarily produced in the adrenal medulla and nervous system [4,5].Its biosynthesis involves the hydrolysis of tyrosine by the tyrosine hydroxylase to 3,4-dihydroxyphenylalanine (DA), which is further decarboxylated to dopamine by aromatic l-amino acid decarboxylase.Dopamine is also a precursor of two other catecholamines, i.e., noradrenalin and adrenaline; indeed, dopamine is hydrolyzed by dopamine β-hydroxylase to noradrenalin, while phenylethanolamine N-methyltransferase converts noradrenalin to adrenaline [6].
In the central nervous system, dopamine has a well-defined function in motor control, learning and cognition; on the other end, in peripheral organs, it regulates several functions, such as blood pressure, gastrointestinal motility, sodium levels, and hormone release [7][8][9][10].Over the past few decades, various studies have demonstrated the significant impact of dopamine on immune cell function in both the central nervous system and periphery; this has become very important since dopaminergic immunoregulation can be a possible target for drug discovery and disease monitoring [11].
Abnormal dopamine levels in the human body can lead to several diseases.For example, cardiotoxicity is a signal of a high level of dopamine, which can evolve into cardiac decompensation, tachycardia and hypertension [12].High levels of dopamine can also reflect an overactivity of the sympathoadrenal system, which can be due to different conditions, such as fatigue, anxiety, post-traumatic stress and chronic stress, or it can be correlated to conditions such as neuroblastoma, pheochromocytoma, neuroendocrine disorders and neurodegenerative diseases [13].
Due to dopamine's several physiological and pathophysiological implications, its detection is crucial for clinical diagnosis and monitoring of pharmacologic treatment.
Electrochemical sensing is a viable alternative, and several methods have been developed for dopamine determination.In particular, voltammetric techniques have been exploited, taking advantage of the electrochemical oxidation reaction of dopamine [1,33].However, the relatively high oxidation potential, its adsorption on the electrode surface, which leads to the formation of polymeric film onto the electrode surface and passivation of the electrode, the co-presence in some biological samples of interfering compounds, such as ascorbic acid and uric acid that have similar oxidation potentials, have led researchers to develop dopamine sensors by appropriately modifying the electrode surfaces to avoid all these drawbacks [33].
The functionalization of the electrode with receptors, thus obtaining sensors for the target analyte, is an effective strategy for improving the selectivity and sensitivity of electrochemical methods.
It is well known that bioreceptors are very specific and allow for high-sensitivity sensors; however, they have poor stability, are typically prone to denaturation over time, and can degrade at temperatures higher than 30 • C or in non-neutral pH solutions.
An effective alternative to bioreceptors is the employment of molecularly imprinted polymers (MIPs), which are described as synthetic analogs of biological antibodyantigen systems.
MIPs are obtained by template-assisted synthesis, which involves a functional monomer interacting in solution with the target analyte, which acts as a template to obtain a complex.A cross-linking agent and a polymerization initiator are added, and heating or UV radiation drives the polymerization.After the template molecules are extracted, binding cavities are created that are complementary in dimension and shape to the target analyte [34].
Several electrochemical sensors based on MIPs were developed for a wide range of applications [35][36][37][38].The crucial issue is integrating the polymer with the electrode; different strategies have been exploited.The simplest is the chemical polymerization of a mixture containing the template, monomer, and cross-linker, which is drop-coated or spin-coated on the electrode surface, followed by thermal or UV polymerization to form a thin film.The main drawbacks of this strategy are the scarce reproducibility of the film thickness and sometimes the need for adding a plasticizer (for example, oligourethane acrylates) to give a greater adhesion of the MIP film on the electrode surface [35,39].
The current trend of integrating the MIP with the transducer surface is the use of electropolymerization.In this case, a suitable constant potential or a potential scan is applied to a three-electrode cell in a solution containing the template and an electroactive functional monomer, leading to the formation of the polymeric film on the working electrode surface (e-MIP).This strategy has several advantages: it is a fast and in situ process, it requires small quantities of reagents, and above all, a tuning of the polymeric film thickness is achieved by changing the electrochemical parameters, such as the range of the applied potential, the scan speed or the number of scans if cyclic voltammetry is used [39][40][41][42][43].
Aniline and thiophene derivatives can be employed as electroactive monomers for e-MIP synthesis, but pyrrole is the most commonly used due to its water solubility, which allows for electropolymerization in aqueous solutions and its ease of oxidation [44].A helpful procedure for enhancing the features of polypyrrole-based e-MIP is overoxidation, with which polypyrrole loses its conductive properties.However, alcoholic, carbonylic and carboxylic groups are formed and can establish hydrogen bonds and other electrostatic interactions with the template molecule; so, after the template removal, higher affinity cavities are present inside the polymeric networks [42,45,46].Moreover, overoxidation allows for better control of the film thickness and low and stable background current signals [46][47][48][49].
The literature reports some e-MIP-based voltammetric sensors for dopamine detection; most of them used a conventional three-electrode cell with glassy carbon as the working electrode [50][51][52][53][54].
In the present research, screen-printed cells are employed instead because of their main advantages, such as the low cost, the rapidity of measurement and the unnecessary sample transportation and preparation to a centralized laboratory that allows for direct in-to-the-field analysis.Indeed, the aim is to develop cheap, disposable devices designed for point-of-care and in situ dopamine determination with comparable performances to previously proposed electrochemical sensors [50][51][52][53][54].Moreover, the present e-MIP-modified screen-printed sensor differs from similar devices already developed.For example, S. Chelly et al. [55] proposed a screen-printed voltammetric cell whose working electrode was modified with gold nanoparticles.Despite the pretty good characterization, a low-sensitive technique (linear sweep voltammetry, LSV) was used; the selectivity test is unconvincing and questionable, and samples at high dopamine content without interferent molecules were analyzed.The work of M. Pavličková et al. [56] presents a working electrode of molybdenum disulfide (MoS 2 ) ink screen printed onto conductive fluorinedoped tin oxide substrates for dopamine detection.The paper describes the fabrication procedure and the electrode characterization well; however, the analytical performances are limited to a calibration curve, and no selectivity test or application to real samples has been conducted.In addition, the measuring cell requires a classical reference electrode to Ag/AgCl/3M KCl and a Pt counter electrode with a potentiostat suitable only for laboratory measurements.Another interesting work [57] proposed a smartphone-based electrochemical device using a screen-printed cell whose working electrode was modified with poly (3,4-ethylenedioxythiophene), chitosan and graphene.Being a device wholly designed and developed by the authors, the work showed few analytical results, and no analysis of real samples has been reported.Although it has good potential, it remains a prototype, and the method is not immediately applicable.
Conversely, in the present work, a sensor of simple realization suitable for direct analysis of DA both in the laboratory and in situ is proposed.In detail, molecularly imprinted electrosynthesized polypyrrole is deposited on the graphite-ink working electrode of screen-printed cells by cyclic voltammetry (CV), using dopamine as the template molecule.The better experimental conditions for the e-MIP formation are defined by a design of experiments (DoE).The overoxidation is performed by chronoamperometry at +1.0 V vs. Ag/AgCl-ink pseudoreference electrode, and the template removal is carried out by CV.
Differential pulse voltammetry (DPV) in phosphate buffer at pH 7 is selected for the quantitative analysis.
Interference tests and analysis of synthetic and human urine samples are performed using both the unmodified (bare) electrode and the e-MIP-based one, demonstrating the modified electrode's higher sensitivity and selectivity, highlighting the potential of the developed sensor as a screening tool for in-to-the-field or point-of-care dopamine detection.
Reagents and Instruments
All reagents were from Merk Life Science S.r.l., Milan, Italy.Pyrrole (Py, 98%) was distilled by a Hickman distillation head and kept in darkness at 4 All solutions were prepared with ultrapure water.Screen-printed cells (Topflight Italia S.P.A., Vidigulfo, Pavia, Italy) with carbon-ink working and counter electrodes and an Ag/AgCl-ink pseudo-reference electrode were employed (see the picture in Figure S1, Supplementary Material).
Synthetic Urine Preparation
The artificial urine was prepared according to a proposed protocol [58].Table 1 summarizes the composition of this solution.The appropriate amount of each reagent was dissolved in 1 L of ultrapure water and stored in a fridge at 4 • C before use.
Urine Sample Collection and Pre-Treatment
A urine sample was collected over 8 h from a volunteer in a cleaned polyethylene buttle, acidified at pH 4 with HCl, stored at 4 • C and filtered before analysis (filter paper Whatman n. 42, 2.5 µm particle retention).
e-MIP and e-NIP Preparation: Optimization of the Experimental Condition by DoE
Before the e-MIP (or electropolymerized non-imprinted polypyrrole, e-NIP) electrodeposition, the screen-printed cell was rinsed with ethanol and dried with a small flow of N 2 .
The experimental conditions for the e-MIP preparation were optimized through a design of experiments (DoE); in particular, a face-centered composite design (FCCD) was adopted, considering two variables (number of scans in cyclic voltammetry and ratio template/monomer) at three levels as will be described below.For the data treatment, the software CAT (R version 3.1.2)was used [59].Here is the optimized procedure summarized.
The e-MIP-functionalized working electrode of the cleaned screen-printed cell was electropolymerized by cyclic voltammetry (CV), scanning the potential from −0.6 V to +0.8 V during three cycles (scan speed 0.1 V s −1 ) in aqueous solution of LiClO 4 0.1 M, pyrrole 15 mM and dopamine 3 mM.The overoxidation of polypyrrole imprinted film was obtained by applying a constant potential of +1.2 V for 2 min in phosphate buffer solution (PBS) 0.1 M/KCl 0.1 M at pH 7. The extraction of the template was performed in the same supporting electrolyte solution by 20 scans of CV in the potential range from −1 V to +1 V (scan speed 0.1 V s −1 ).
The electrodeposited, non-imprinted polymer film (e-NIP) was analogously prepared but without adding dopamine to the polymerization solution.
Figure S2 (Supplementary Material) reports a schematic of the procedure.
Bare, e-MIP and e-NIP Working Electrodes Surface Characterization
Before and after the working electrode modification, the active area and the doublelayer capacitance were determined.
The effective area was computed by applying the modified Randles-Sevick's equation to CV voltammograms obtained in K 4 Fe(CN) 6 5 mM/KCl 0.1 M at pH 7, scanning the potential from −0.2 V to +0.6 V and at different scan rates ranging from 0.02 V s −1 to 0.5 V s −1 [60,61].
The double-layer capacitance (C) [61][62][63] was determined by CV at different scan rates, scanning the potential from +0.5 V to −0.5 V in NaCl 0.1 M solution, i.e., in a potential range in which the lowest faradic current is expected.The difference between the anodic and cathodic current registered at +0.02 V was plotted vs. the scan rate; the slope of the straight line corresponds to twice the double-layer capacitance.
Additional characterization of the working electrode surface, before and after modification, was performed by electrochemical impedance spectroscopy (EIS) in K 3 Fe(CN) 6 5 mM/KCl 0.1 M at pH 7 as the probe/supporting electrolyte solution.Measurements were registered in the frequency range from 0.01 Hz to 10 5 Hz with a sinusoidal potential modulation of 0.05 V at a fixed potential of 0.2 V for the bare and 0.1 V for the modified electrodes (equilibrium potential of the redox probe).
Study of the Electrochemical Behavior of Dopamine at the Bare and e-MIP-Modified Electrodes
Dopamine electrochemical oxidation was characterized using cyclic voltammetry.CV voltammograms were registered in PBS 0.1 M/KCl 0.1 M solution at pH 7 with a 2.5 mM dopamine concentration, scanning the potential from −0.3 V to +0.6 V and at different scan rates (from 0.01 V•s −1 to 1 V•s −1 ).
The relations between the peak potential (E p ) vs. log of the scan rate (log v), the log of the peak current (log I p ) vs. log ν and the current function I p •ν −1/2 vs. ν [64,65] were studied.Moreover, the formal potential (E •′ ), the diffusion coefficient (D) and the reaction order were determined as already described [66,67].
Quantification of Dopamine by Differential Pulse Voltammetry (DPV)
Dopamine was quantified by DPV, preparing calibration curves in 10 mL of PBS 0.1 M/KCl 0.1 M solution at pH 7 and applying the following parameters: E start = −0.3V; E end = +0.6V; E step = 0.01 V; E pulse = 0.025 V; t pulse = 0.2 s; and scan speed = 0.02 V/s.Analytical figures of merit, such as the limit of detection (LOD), the limit of quantification (LOQ), the linear range and the dynamic range, were evaluated from the obtained calibration curves.In particular, from the slope of the calibration curve, LOD and LOQ were calculated according to the following equations: where s y/x is the standard deviation of y-residual obtained from the linear regression of the data; this value can be supposed not to be significantly different from the standard deviation of replicates of blank solutions [68,69].Instead, the dopamine quantification in the synthetic and human urine samples was performed using the standard addition method.
Modification and Characterization of the Working Electrode of the Screen-Printed Cell
The film of electropolymerized imprinted polypyrrole over the working electrode surface of the screen-printed cell was obtained via cyclic voltammetry (CV) in an aqueous solution of lithium perchlorate 0.1 M containing pyrrole and dopamine.Aiming to get the optimal performing polymer, two independent variables (factors) were selected: the number of scans in CV and the ratio dopamine (template)/pyrrole (functional monomer) and a face-centered composite design (FCCD), i.e., a response surface design, [70,71] was applied.It was necessary to switch to this technique since a simple factorial design previously adopted as a screening method showed a non-linearity of the model of responses.
Response surface methodology relates a response to the levels of several input variables that affect it.The form of this relationship is generally unknown but can be approximated by a low-order equation, such as a second-order model.Central composite designs are the most common and widely used to estimate second-order response surfaces.
A three-level layout for an FCCD (see Figure 1) is a cube with axial points on the face centers (star points); the center points provide information about the existence of curvature in the response surface, and the axial points allow for efficient estimation of the quadratic terms.The number of experiments required is computed by the formula: Table 2 summarizes the factors selected and the corresponding (minimum level), 0 (center point) and +1 (maximum level).The slope o ibration curve obtained by plotting the current peak (Ip, µA) of DPV v dopamine concentration (µM) was selected as the response.
Table 2. Factors and corresponding levels for the FCCD to optimize the working electrode of the screen-printed cell.The label used in the following g 3) is reported in parenthesis.Table 2 summarizes the factors selected and the corresponding levels coded as −1 (minimum level), 0 (center point) and +1 (maximum level).The slope of a three-point calibration curve obtained by plotting the current peak (I p , µA) of DPV voltammograms vs. dopamine concentration (µM) was selected as the response.
The model equation can be written as follows: Table 2. Factors and corresponding levels for the FCCD to optimize the e-MIP film over the working electrode of the screen-printed cell.The label used in the following graphs (Figures 2 and 3) is reported in parenthesis.As can be observed from the graph in Figure 2 and the values in Table 3, the significant parameters are the number of CV scans (n.CV), which must be set at the lowest value and the ratio DA/Py, which, conversely, has to be set at the highest value.The interactions and the quadratic coefficients are not significant.
The response surface plot of Figure 3 confirms the above sentence.Three replicates at the center point were performed to validate the model.Table 4 summarizes the results (average value, standard deviation, and confidence interval (CI) at a 95% confidence level).The predicted value, i.e., the coefficient b0, is between the two ends of the CI; accordingly, the model is considered validated.The model equation can be written as follows: Figure 2 reports the plot indicating the significance of the model's coefficients and Table 3 their values.
Table 3. FCCD for the e-MIP film preparation: coefficients values and their significance (the possible number of black stars and the relative significance are: * p ≤ 0.05, ** p ≤ 0.01).The number in parenthesis is the standard deviation on the last digit.
Coefficient
Value Significance As can be observed from the graph in Figure 2 and the values in Table 3, the significant parameters are the number of CV scans (n.CV), which must be set at the lowest value and the ratio DA/Py, which, conversely, has to be set at the highest value.The interactions and the quadratic coefficients are not significant.
The response surface plot of Figure 3 confirms the above sentence.Three replicates at the center point were performed to validate the model.Table 4 summarizes the results (average value, standard deviation, and confidence interval (CI) at a 95% confidence level).The predicted value, i.e., the coefficient b 0 , is between the two ends of the CI; accordingly, the model is considered validated.Consequently, the optimal conditions for the e-MIP film preparation are three scans in CV and a ratio DA/Py = 1/5.
Characterization of the Working Electrode Surface before and after Modification
The working electrode surface was characterized before and after modification with the e-MIP (or e-NIP) by determining the active area and the double-layer capacitance; moreover, the electron transfer kinetics was evaluated by electrochemical impedance spectroscopy.
Table 5 summarizes the active area and double-layer capacitance values for the three electrodes.
As can be observed, the active area decreases after modifying the working electrode surface with the polymer; moreover, the active area of the e-NIP-based electrode is lower than that of the e-MIP due to the absence of the recognition cavities, resulting in a lower active surface.This trend agrees with that previously observed for screen-printed or classical microelectrodes covered with molecularly imprinted polymers of different natures [46,61,62,64].
Table 5. Active area and double-layer capacitance values for the screen-printed cells with the working electrode unmodified (bare) and modified with e-MIP and e-NIP.The active area was calculated by Randles-Sevick's equation from CV data obtained in K 3 Fe(CN) 6 5 mM/KCl 0.1 M at pH 7.5 as the probe/electrolyte solution.Potential scan from −0.2 V to +0.6 V, scan rate from 0.02 V s −1 to 0.5 V s −1 .The double-layer capacitance was derived from CV in NaCl 0.1 M. Potential scan from −0.5 V to +0.5 V, scan rate from 0.02 V s −1 to 0.5 V s −1 .The number in parenthesis is the standard deviation on the last digit.Concerning the double-layer capacitance for the bare electrode, the obtained value is very low if compared, for example, to that of glassy carbon electrodes, and the difference can be imputable to the structure of the carbon ink of the screen-printed electrode, with a prevalence of basal planes compared to edge plane of glassy carbon electrodes (i.e., pyrolytic graphite electrodes) that allow for faster electrochemical kinetics, as previously reported [49,63].The double-layer capacitance increased passing from the bare to the overoxidated e-NIP and e-MIP modified electrodes, and this is due to the presence of a polymer layer, which implies an increased accumulation of electrical charges.
Electrode
Cyclic voltammetry was used in this work to evaluate the capacitance, as it showed better reproducibility than other electrochemical techniques, such as EIS, for the same aim.
Conversely, EIS is employed to investigate the processes occurring at the electrolyte/ electrolyte interface of the bare and modified electrodes.Figure 4 shows the Nyquist plots obtained.The bare electrode-solution interface is well modeled by a Randle's circuit composed of two resistors, a capacitor and a Warburg element.The first resistor, representing the solution resistance, R S , is in series with a parallel circuit, consisting of a capacitor, C, or a constant phase element, Q, representing the electrode's double-layer capacitance in parallel to a resistor, R CT , that represents the charge transfer resistance and a Warburg element, W, used to model the transfer of charge between the electrode and the redox probe species in solution and the depletion of the diffusion layer, assuming a semi-infinite linear diffusion.For the e-MIP electrode after template removal and after contact with 20 µM DA solution instead of the classical Warburg component, the equivalent circuit correlated well with the experimental data containing a Warburg short/open element accounting of the non-semi-infinite diffusion (W S ).The equivalent circuits and the component values are reported in Table S1 (Supplementary Material).
As can be seen from the Nyquist plots, the bare electrode shows the lowest charge transfer resistance (short diameter of the semi-circle), and the diffusion contribution is evident since a straight line at 45 • appears.In the modified electrodes, the charge transfer resistance increases due to the presence of a polymeric non-conducting film over the electrode surface (the polypyrrole has lost its conductive properties because it is overoxidized).Obviously, the highest R CT value is registered with the e-NIP-modified electrode since, in this case, the absence of the recognition cavities prevents the probe ions from reaching the electrode surface.For similar reasons, the R CT increases for the e-MIP-modified electrode switching between the free-analyte cavities of the electrode after template removal and the same electrode before extraction of DA from the recognition cavities.idized).Obviously, the highest RCT value is registered with the e-NIP-modified electrod since, in this case, the absence of the recognition cavities prevents the probe ions fro reaching the electrode surface.For similar reasons, the RCT increases for the e-MIP-mod fied electrode switching between the free-analyte cavities of the electrode after templa removal and the same electrode before extraction of DA from the recognition cavities.SEM images (Figure S3a,b, Supplementary Material) are acquired to better display the structure and morphology of the electropolymerized imprinted or non-imprinted polypyrrole film over the electrodes.In the image of the e-MIP-modified electrode, multilayer sheets of the polymer appear, and for both e-MIP and e-NIP-modified electrodes, a lamellar structure is evident; consequently, the differences in the detection ability among the two electrodes are only attributable to the presence of the recognition cavities in the e-MIP rather than the structure and morphology of the polymeric films, as already observed [73].
Electrochemical Behavior of DA at the Bare and e-MIP-Modified Electrode
Although several electrochemical methods for DA detection have been proposed in the literature, a univocal mechanism for the electrochemical oxidation of DA has not yet been well defined and clarified, depending strictly on the operative conditions used.Furthermore, no studies have been presented using screen-printed cells.Aiming to describe this aspect and how and if the presence of the e-MIP on the working electrode surface could interfere with the electrochemical dopamine oxidation, cyclic voltammetry experiments at different scan rates in PBS 0.1 M/DA 2.5 mM at pH 7 were performed, and the redox response was examined.In these conditions, DA is in its protonated form (DAH 3 + ), as evidenced by the fractions of species distribution graph reported in Figure 5.
Figure 6 shows the CV profiles obtained with both the bare and the e-MIP-modified electrodes.Tables S2 and S3 (Supplementary Material) report the peak potential and peak current values.
could interfere with the electrochemical dopamine oxidation, cyclic vo ments at different scan rates in PBS 0.1 M/DA 2.5 mM at pH 7 were p redox response was examined.In these conditions, DA is in its proton as evidenced by the fractions of species distribution graph reported in Figure 5. Fractions of species distribution of DA vs. pH in aqueous solution, ca values reported in [74,75].
Figure 6 shows the CV profiles obtained with both the bare and t electrodes.Tables S2 and S3 (Supplementary Material) report the peak current values.As can be observed, an oxidation peak (EpA) appears at ca. 0.3 V for the bare and 0.25 V for the e-MIP-based electrodes, and a reverse reduction peak (EpC) appears at ca. 0.05 V and 0.07 V, respectively, for the unmodified and functionalized electrodes (here and in the following all potentials are referred to the Ag/AgCl pseudoreference electrode of the screen-printed cells).
It is well known that two-electron oxidation of dopamine to give dopamine oquinone occurs with a reversibility degree dependent on the pH [76].Since, in the present study, the difference between the anodic and cathodic peak potential is larger than ca.30 mV and increases with the scan rate, a quasi-reversible process is suggested.
Moreover, according to previous studies [76,77], the linearity of the graph EpA vs. v 0.5 (square root of the scan rate) indicates an electrochemical reaction coupled with a chemical one, i.e., an EC mechanism (see Figures S4a and S5a, Supplementary Material).
No other peaks are present in both CV graphs, suggesting that the intramolecular cyclization of dopamine o-quinone to leucodopaminechrome (indoline-5,6-diol) does not occur.In fact, leucodopaminechrome is more easily oxidized than dopamine, producing dopaminechrome, and the reduction peak of this compound should have been observed at ca. −0.2 V [76].
A first-order chemical step following the electrochemical reaction is deduced by the slope of the plot log IpA vs. log [DA] (Figures S4b and S5b) [78].
From the graph log IpA vs. log v, a linear trend is verified with a slope near 0.5, suggesting a purely diffusive process and no adsorption phenomena occurring for both
Current/µA
Potential/V 0.01 V/s 0.02 V/s 0.03 V/s 0.04 V/s 0.05 V/s 0.06 V/s 0.08 V/s 0.1 V/s 0.2 V/s 0.3 V/s 0.4 V/s 0.5 V/s 0.6 V/s 0.7 V/s 0.8 V/s 0.9 V/s 1 V/s As can be observed, an oxidation peak (E pA ) appears at ca. 0.3 V for the bare and 0.25 V for the e-MIP-based electrodes, and a reverse reduction peak (E pC ) appears at ca. 0.05 V and 0.07 V, respectively, for the unmodified and functionalized electrodes (here and in the following all potentials are referred to the Ag/AgCl pseudoreference electrode of the screen-printed cells).
It is well known that two-electron oxidation of dopamine to give dopamine o-quinone occurs with a reversibility degree dependent on the pH [76].Since, in the present study, the difference between the anodic and cathodic peak potential is larger than ca.30 mV and increases with the scan rate, a quasi-reversible process is suggested.
Moreover, according to previous studies [76,77], the linearity of the graph E pA vs. v 0.5 (square root of the scan rate) indicates an electrochemical reaction coupled with a chemical one, i.e., an EC mechanism (see Figures S4a and S5a, Supplementary Material).
No other peaks are present in both CV graphs, suggesting that the intramolecular cyclization of dopamine o-quinone to leucodopaminechrome (indoline-5,6-diol) does not occur.In fact, leucodopaminechrome is more easily oxidized than dopamine, producing dopaminechrome, and the reduction peak of this compound should have been observed at ca. −0.2 V [76].
A first-order chemical step following the electrochemical reaction is deduced by the slope of the plot log I pA vs. log [DA] (Figures S4b and S5b) [78].
From the graph log I pA vs. log v, a linear trend is verified with a slope near 0.5, suggesting a purely diffusive process and no adsorption phenomena occurring for both the bare and e-MIP-modified electrodes (Figures S4c and S5c).
Table 6 summarizes the electrochemical parameters characterizing the process for both bare and e-MIP-modified electrodes (the graphs used to estimate these parameters are reported in the Supplementary Materials Figures S4d,e and S5d,e).Table 6.Electrochemical parameters obtained from CV at different scan rates for the bare and e-MIPmodified working electrodes of the screen-printed cell.The number between parentheses refers to the uncertainty on the last digit.
Bare Electrode e-MIP-Modified Electrode
The formal potential E 0 ' and diffusion coefficient D obtained with the bare and modified electrodes do not significantly differ from the values reported in the literature for carbon-based electrodes [77,79].
In both cases, the anodic transfer coefficient (α A ) is similar; summing up, according to our data and previously published data, we believe that the first electron transfer is the rate-determining step.The slope of the Tafel plot also confirms this (see, for example, Figures S4f and S5f).
Scheme 1 shows the hypothesized pathway for dopamine oxidation.
mers 2024, 16, x FOR PEER REVIEW 13 o Table 6.Electrochemical parameters obtained from CV at different scan rates for the bare an MIP-modified working electrodes of the screen-printed cell.The number between parentheses re to the uncertainty on the last digit.
Quantification of DA by Differential Pulse Voltammetry (DPV): Calibrations and Sample Analysis
DPV is the technique selected for dopamine determination in PBS 0.1 M at pH Calibration curves are carried out with bare, e-MIP, and e-NIP-modified electrodes define the method's figures of merit.Figure 7 shows the voltammograms registered solutions containing DA concentrations ranging from 0 (blank) to 180 µM.As can observed, the highest sensitivity is obtained with the e-MIP-functionalized electrode, a as expected, the performances of the e-NIP are poor due to the absence of recognit cavities so that the analyte can reach the electrode surface only through the few unspec pores of the polymer.
Figure 8a shows the comparison among the calibration curves obtained with different types of electrodes; each point corresponds to the mean of the current val registered with three electrodes, with the error bars representing the standard deviati Table 7 summarizes the method's figures of merit for each electrode type.
Table 7. Analytical figures of merit of the DA quantification method using the bare, e-MIP an NIP-modified electrodes.The number between parenthesis is the standard deviation on the digit.Calibration curves are carried out with bare, e-MIP, and e-NIP-modified electrodes to define the method's figures of merit.Figure 7 shows the voltammograms registered in solutions containing DA concentrations ranging from 0 (blank) to 180 µM.As can be observed, the highest sensitivity is obtained with the e-MIP-functionalized electrode, and as expected, the performances of the e-NIP are poor due to the absence of recognition cavities so that the analyte can reach the electrode surface only through the few unspecific pores of the polymer.
Figure 8a shows the comparison among the calibration curves obtained with the different types of electrodes; each point corresponds to the mean of the current values registered with three electrodes, with the error bars representing the standard deviation.Table 7 summarizes the method's figures of merit for each electrode type.The linear range for the e-MIP-modified electrode covers only one order of magnitude, but the dynamic range can also be exploited to quantify DA at higher concentrations.As previously reported, the trend of the dose-response curve for MIP-modified electrodes The linear range for the e-MIP-modified electrode covers only one order of magnitude, but the dynamic range can also be exploited to quantify DA at higher concentrations.As previously reported, the trend of the dose-response curve for MIP-modified electrodes can be well-modeled by the classical Langmuir equation [61,64]: where I p,max is the maximum value of the current at the plateau corresponding to the saturation of the MIP's recognition cavities, and K aff is the affinity constant of the sites for the analyte; it is a conditional parameter since it strictly depends on the method and the experimental conditions used.Both parameters are determined by a nonlinear fitting of the data (OriginPro software version 8.5.1-OriginLab.Corp., Northampton, MA, USA). Figure 8b shows the Langmuir fitting for the dose-response curves obtained with e-MIP-modified electrodes.
(c) The linear range for the e-MIP-modified electrode covers only one order of magnitude, but the dynamic range can also be exploited to quantify DA at higher concentrations.As previously reported, the trend of the dose-response curve for MIP-modified electrodes can be well-modeled by the classical Langmuir equation [61,64]: where Ip,max is the maximum value of the current at the plateau corresponding to the saturation of the MIP's recognition cavities, and Kaff is the affinity constant of the sites for the analyte; it is a conditional parameter since it strictly depends on the method and the experimental conditions used.Both parameters are determined by a nonlinear fitting of the data (OriginPro software version 8.5.1 -Origin Lab.Corp., Northampton, MA, USA).The selectivity of the e-MIP-based sensor is tested for two interferents: ascorbic acid (AA) and uric acid (UA), which are naturally present in biological fluids, such as the human urine here studied, and have an oxidation potential close to that of DA.
Obviously, it is impossible to quantify DA in samples also containing AA and UA with the bare electrode due to the overlap of the peaks (see the voltammograms reported in Figure S6); conversely, the DA determination can be possible when using the e-MIPmodified electrode.As an example, Figure 9a,b show the DPV plots obtained in a solution containing a high concentration of the interferent (350 µM) and subsequent addition of DA from 4 µM to 40 µM.The peak separation is, in both cases, enough to quantify DA correctly.Moreover, the calibration curve for AA and UA performed with the e-MIP-based electrode showed low sensitivity and affinity constant (see Figure S7 and Table S4).
The standard addition method is considered to further confirm the possible quantification of DA in solutions containing both interferents; the recovery percentages are summarized in Table 8. Figure 8b shows the Langmuir fitting for the dose-response curves obtained with e-MIPmodified electrodes.
The selectivity of the e-MIP-based sensor is tested for two interferents: ascorbic acid (AA) and uric acid (UA), which are naturally present in biological fluids, such as the human urine here studied, and have an oxidation potential close to that of DA.
Obviously, it is impossible to quantify DA in samples also containing AA and UA with the bare electrode due to the overlap of the peaks (see the voltammograms reported in Figure S6); conversely, the DA determination can be possible when using the e-MIPmodified electrode.As an example, Figure 9a,b show the DPV plots obtained in a solution containing a high concentration of the interferent (350 µM) and subsequent addition of DA from 4 µM to 40 µM.The peak separation is, in both cases, enough to quantify DA correctly.Moreover, the calibration curve for AA and UA performed with the e-MIPbased electrode showed low sensitivity and affinity constant (see Figure S7 and Table S4).The standard addition method is considered to further confirm the possible quantification of DA in solutions containing both interferents; the recovery percentages are summarized in Table 8.
DA determination in simulated and real human urine without and with a spike of the analyte is performed to prove the developed sensor's applicability in biological fluids analysis.Even in this case, the standard addition method is employed, and the results are also shown in Table 8.DA determination in simulated and real human urine without and with a spike of the analyte is performed to prove the developed sensor's applicability in biological fluids analysis.Even in this case, the standard addition method is employed, and the results are also shown in Table 8.
The recovery percentage ranges from 87% to 103% for all samples, confirming the method's suitability for DA quantification in biological fluids even in the presence of high content of interferents and without the need for sample pretreatment before analysis.
Long-time repeatability and stability tests are not performed since screen-printed electrochemical cells are disposable.They do not require particular maintenance procedures, and after modification with the e-MIP or e-NIP film, no more than two experiments (calibration or sample analysis) can be carried out with the same screen-printed cell without compromising the measurements.
Conclusions
A molecularly imprinted polypyrrole (e-MIP)-modified screen-printed sensor for dopamine (DA) determination is proposed.Conversely to previously proposed screenprinted devices, a fast, low-cost, selective method applicable in situ is here developed.
Figure 2
Figure 2 reports the plot indicating the significance of the model's coefficients andTable 3 their values.
Figure 2 .
Figure 2. FCCD for the e-MIP film preparation: coefficients plot.Higher values (regardless of the sign) and little black stars suggest a significant influence of the respective parameter or interaction (the possible number of black stars and the relative significance are: * p ≤ 0.05, ** p ≤ 0.01).
Figure 3 .
Figure 3. FCCD for the e-MIP film preparation: response surface plot.
Figure 3 .
Figure 3. FCCD for the e-MIP film preparation: response surface plot.
Figure 4 .
Figure 4. Nyquist plot of the bare electrode (blue dots), e-MIP-modified electrode before templa removal (green dots), e-MIP-modified electrode after template removal (grey dots), e-MIP-modifi electrode after rebinding with DA 0.02 mM (yellow dots), e-NIP-modified electrode (red dot Measurements were performed in 5 mM K4Fe(CN)6/0.1 M KCl solution.Frequency range 100 kH 10 mHz with a sinusoidal potential modulation of 0.05 V superimposed on a dc potential of 0.2 for the bare and 0.1 V for the modified electrodes (equilibrium potential of the redox probe).
Figure 4 .
Figure 4. Nyquist plot of the bare electrode (blue dots), e-MIP-modified electrode before template removal (green dots), e-MIP-modified electrode after template removal (grey dots), e-MIPmodified electrode after rebinding with DA 0.02 mM (yellow dots), e-NIP-modified electrode (red dots).Measurements were performed in 5 mM K 4 Fe(CN) 6 /0.1 M KCl solution.Frequency range 100 kHz-10 mHz with a sinusoidal potential modulation of 0.05 V superimposed on a dc potential of 0.2 V for the bare and 0.1 V for the modified electrodes (equilibrium potential of the redox probe).
Figure 6 .
Figure 6.CV profile for DA (2.5 mM) in PBS 0.1 M at pH 7 at different scan rates: (a) bare electrode; (b) e-MIP-modified electrode.The arrows indicate the potential scan direction.
Scheme 1 .
Scheme 1. Possible pathway for the electrochemical oxidation of DA in PBS 0.1 M at pH 7.
Scheme 1 .
Scheme 1. Possible pathway for the electrochemical oxidation of DA in PBS 0.1 M at pH 7.
3. 4 .
Quantification of DA by Differential Pulse Voltammetry (DPV): Calibrations and Sample Analysis DPV is the technique selected for dopamine determination in PBS 0.1 M at pH 7.
Figure 8 .
Figure 8.(a) DA calibration curves obtained with the bare, e-MIP and e-NIP-modified electrodes.Each point is the mean of the current values registered with three electrodes, and the error bars are the standard deviation.(b) Langmuir fitting for the calibration curve of DA with the e-MIP-modified electrode: I pmax = 10.8 (1) µA; K aff =8.4 (2) × 10 3 M −1 ; R 2 = 0.994.
Table 4 .
FCCD for the e-MIP film preparation: model validation by three replicates at the center point.
Table 8 .
Recovery test in simulated and real samples.Three replicates for each sample using three different e-MIP-modified electrodes.The number between parentheses is the standard deviation on the last digit.
Table 8 .
Recovery test in simulated and real samples.Three replicates for each sample using three different e-MIP-modified electrodes.The number between parentheses is the standard deviation on the last digit. | 9,251.2 | 2024-09-01T00:00:00.000 | [
"Chemistry",
"Materials Science"
] |
On the Continuous Cancellative Semigroups on a Real Interval and on a Circle and Some Symmetry Issues
: Let S denote the unit circle on the complex plane and (cid:63) : S 2 → S be a continuous binary, associative and cancellative operation. From some already known results, it can be deduced that the semigroup ( S , (cid:63) ) is isomorphic to the group ( S , · ) ; thus, it is a group, where · is the usual multiplication of complex numbers. However, an elementary construction of such isomorphism has not been published so far. We present an elementary construction of all such continuous isomorphisms F from ( S , · ) into ( S , (cid:63) ) and obtain, in this way, the following description of operation (cid:63) : x (cid:63) y = F ( F − 1 ( x ) · F − 1 ( y )) for x , y ∈ S . We also provide some applications of that result and underline some symmetry issues, which arise between the consequences of it and of the analogous outcome for the real interval and which concern functional equations. In particular, we show how to use the result in the descriptions of the continuous flows and minimal homeomorphisms on S .
Introduction
Let (X, •) and (X, * ) be groupoids (i.e., X is a nonempty set and •, * : X 2 → X are binary operations in X). If there exists a bijection h : X → X such that then we say that the pair (h, * ) induces •. It is easily seen that (1) is equivalent to the following property h(x • y) = h(x) * h(y) for x, y ∈ X, which means that h is an isomorphism of (X, •) into (X, * ).
Recall yet that a groupoid (X, •) is cancellative if operation • is cancellative, i.e., for all x, y, z ∈ X with y = z. A groupoid (X, •) is a semigroup if operation • is associative, i.e., x • (y • z) = (x • y) • z for every x, y, z ∈ X. Let S denote the unit circle of the complex plane C and · be the usual multiplication of complex numbers. The main result of this paper is an elementary construction of homeomorphisms h : S → S such that the pair (h, ·) induces a given binary, continuous, associative, and cancellative operation on S, where the topology in S is induced by the usual topology of the plane. As far as we know it is the only such proof.
This paper is partly of expository type because we also show several interesting applications of the main outcome and underline some symmetry issues concerning functional equations, which arise between those applications and some consequences of the following well known result of J. Aczél [1,2] (analogous result was obtained in Reference [3]; also see Reference [4] for a simpler proof). Theorem 1. Let I be a real nontrivial interval and • : I × I → I be a binary operation that is continuous, associative, and cancellative. Then, there exist an infinite real interval J and a homeomorphism H : I → J with x • y = H −1 (H(x) + H(y)) for x, y ∈ I.
Note that property (4) means, in particular, that H(x) + H(y) ∈ J for every x, y ∈ I, i.e., z + w ∈ J for every z, w ∈ J. Therefore, J must be either the set of reals R or of one of the following forms: (a, ∞), [a, ∞), (−∞, −a), (−∞, −a] with some real a ≥ 0. Next, it is clear that every operation of form (4) must be cancellative. So, without the cancellativity assumption, the result is not true, and, for instance, the natural operation x • y := max{x, y} for x, y ∈ I, which is associative and continuous, but not cancellative, cannot be represented in form (4). The same is true for x • y := min{x, y} for x, y ∈ I.
In the terms of functional equations, Theorem 1 is about continuous solutions A : I 2 → I (if we write A(x, y) := x • y) of the following associativity equation: A(x, A(y, z)) = A(A(x, y), z) for x, y, z ∈ I. (6) In this case, the cancellativity of • is equivalent to the injectivity of A with respect to either variable, which means the strict monotonicity with respect to either variable (because of the assumed continuity of A). Similar problem without the assumption of strict monotonicity (i.e., the cancellativity of the corresponding binary operation), was considered in Reference [5] (also see Reference [6]), and, under some additional assumptions, the following representation was obtained A(x, y) = g(H(x) + H(y)) for x, y ∈ I, with some continuous injection H : I → R, where g : [0, ∞] → I is a pseudoinverse of H (see Reference [5] for more details). Clearly, representation (4) is actually (7) with g = H −1 . For further, more general investigations of that subject, we refer to References [7][8][9]. Let us add that solutions A : [0, 1] 2 → [0, 1] to the associativity Equation (6) are important in statistical metric spaces and are called triangular norms or shortly t-norms (cf., e.g., References [6,10,11]); we also refer to Reference [12] for the notion of copulas.
A result analogous to Theorem 1, for the unit circle, has the subsequent form.
Theorem 2. Let a binary operation : S × S → S be continuous, associative, and cancellative. Then, there exist exactly two homeomorphisms F 0 , G 0 : S → S such that In particular, F 0 (x) = G 0 (x) for x ∈ S, wherex is the complex conjugate of x.
Theorem 2 can be derived (with some additional reasoning) from Reference [13] (Theorems 1.10 and 1.13) (continuous, compact and cancellative semigroup is a topological group) and Reference [14] (Theorem 2) (connected topological group, which contains a neighbourhood of the neutral element that is homeomorphic to an open real interval, is isomorphic either to the additive group of reals (R, +) or to the factor group (R/Z, +), where Z denotes the set of integers).
However, as we will see in the section with applications, it is useful to know how to construct the function F 0 (and thus also G 0 ). In the case of Theorem 1, the form of H can be deduced from the proofs in References [2,4]. We show in Section 5 that a symmetric reasoning, with a reasonably elementary and simple construction of the functions F 0 and G 0 , works for Theorem 2.
Finally, let us mention that the form of A in (7) reminds about a well-known problem of representing functions with several variables by functions in one variable, whether under some regularity assumptions or not (see References [5,[15][16][17] for more details).
For further information on topological semigroups (also on historical background) we refer to References [13,18,19].
The paper is divided into 6 sections. The next section contains some observations concerning symmetry issue arising between Theorems 1 and 2. In Section 3, we present several applications of both theorems. Section 4, titled Auxiliary Results, contains a series of definitions, lemmas, remarks, and corollaries, leading to the final reasoning contained in Section 5, titled The Proof of Theorems 2, and being the proper proof of the theorem with a description of function F 0 . The sixth section (the last one) presents some concluding remarks.
Some Remarks on Symmetry Issues
Let us observe that the statements of Theorems 1 and 2 imply that both operations, • and , must be commutative (symmetric). Moreover, in the next section we demonstrate that some applications of those theorems yield several somewhat symmetric results concerning functional equations. But, first, let us note some symmetry deficiencies between Theorems 1 and 2. Remark 1. S is compact, while the interval J in Theorem 1 must be infinite and therefore not compact, which means that I cannot be compact. So, every continuous semigroup on a compact interval is not cancellative.
Remark 2.
In Theorem 2 there exist exactly two functions, F 0 and G 0 , satisfying (8). The situation in the case of Theorem 1 (for a real interval) is somewhat different. Namely, let F : I → R also be a continuous injection such that x • y = F −1 (F(x) + F(y)) for x, y ∈ I (which means, in particular, that F(x) + F(y) ∈ F(I) for x, y ∈ I). Then which means that the function h = F • H −1 : J → J is a homeomorphism satisfying Consequently, h(z) ≡ cz with some real c = 0 (see, e.g., Reference [20]), which means (with z = H(x)) that Remark 3. Note that, even if I = R in Theorem 1, (I, •) does not need to be a group, contrary to the situation on the circle. This happens in the case when H(R) = (0, ∞).
Applications
Now, we show some simple applications of Theorems 1 and 2 in functional equations. They show some symmetries and some lack of symmetry between those two cases of the continuous cancellative semigroups on a real interval and on the unit circle. As they are clearly visible, we do not discuss them in detail.
Let us begin with the following auxiliary useful result in Reference [21] (p. 155).
Lemma 1.
Let K ∈ {R, C}. Then, a continuous function g : K → C satisfies the functional equation if and only if (I) in the case K = R, there is a ∈ C with g(s) = e as for s ∈ R; (II) in the case K = C, there are b, c ∈ C with g(s) = e cs+bs for s ∈ C.
The next lemma seems to be well known, but, for the convenience of readers, we present a short proof of it. Lemma 2. Let A : S → S be a continuous solution of the equation Then, there is a real constant d such that A(e 2πit ) = e 2πidt for t ∈ R.
Proof. Define a function B : R → S by B(t) = A(e 2πit ) for t ∈ R. Then, B is continuous and, by (14), B(t + s) = A(e 2πi(t+s) ) = A(e 2πit ) · A(e 2πis ) = B(t) · B(s) for s, t ∈ R. Consequently, by Lemma 1, there is a ∈ C such that B(t) = e at for t ∈ R. Since B(R) ⊂ S, the real part of a must be equal 0. So d := a 2πi ∈ R and consequently B(t) = e 2πidt for t ∈ R. Hence, A(e 2πit ) = B(t) = e 2πidt for t ∈ R. Now, we are in a position to present the mentioned before applications. As far as we know, they are new results, never published so far. Proposition 1. Let a binary operation : S × S → S be associative, continuous and cancellative. Let K ∈ {R, C} and F 0 : S → S be a homeomorphism with Then, a continuous function f : K → S fulfills the equation if and only if there is d ∈ K such that, (a) in the case K = R, f (s) = F 0 (e ids ) for s ∈ R; (b) in the case K = C, f (s) = F 0 (e i(ds+d s ) ) for s ∈ C.
Proof. Let f : K → S be a continuous function that fulfills Equation (16). Then, by (15), whence the function h : K → S, h(t) = F −1 0 ( f (t)) for t ∈ K, fulfills Equation (13). Hence, by Lemma 1, conditions (I) and (II) are valid. Clearly, e as ∈ S for s ∈ R if and only if ai ∈ R. Moreover, it is easy to check that e cs+bs ∈ S for s ∈ C if and only if c = b. This implies statements (a) and (b).
Now, suppose that f : R → S has the form depicted by (a). Then, in view of (15), The case of (b) is analogous.
Proposition 2.
Let I be a nontrivial real interval and a binary operation • : I × I → I be associative, continuous, and cancellative. Let K ∈ {R, C} and H : I → R be a continuous injection satisfying (4). Then, the functional equation Proof. Let f : K → I be a continuous solution of Equation (19). Then, the function h : K → I, h(t) = H( f (t)) for t ∈ K, fulfills the equation Moreover, h is continuous.
In the case K = R there is d ∈ R such that h(s) = ds for s ∈ R, which implies that either d = 0 or H is surjective and f (s) = H −1 (ds) for s ∈ R.
In the case K = C there are d 1 , d 2 ∈ K such that h(s) = d 1 s + d 2 s for s ∈ C, where s and s denote the real and imaginary parts of a complex number s. Write d = 1 2 (d 1 − id 2 ). Then, it is easily seen that h(s) = ds + d s for s ∈ C, which implies that either d = 0 or H is surjective and Now, we see that, in the case where H is bijective, we obtain statements (i) and (ii). Moreover, it is easy to check functions depicted in statements (i) and (ii) are solutions to (19).
Next, suppose that Equation (19) has a non-constant continuous solution f : K → I. Then, as we have already observed, in the case K = R there is d ∈ R such that H( f (s)) = ds for s ∈ R. Since f is not constant and H is injective, this means that d = 0 and consequently H is surjective. If K = C, then we show the surjectivity of H in a similar way.
Finally, note that, if H is bijective, then functions depicted by statements (i) and (ii), with d = 0, are non-constant continuous solutions to (19).
Proposition 3.
Let I be a nontrivial real interval and binary operations • : I × I → I and : S × S → S be associative, continuous and cancellative. Let H : I → R and F 0 : S → S be such that (4) and (15) hold. Then, the following two statements are valid.
(A) A continuous function f : I → S fulfills the functional equation if and only if there is d ∈ R such that f (t) = F 0 (e idH(t) ) for t ∈ R.
(B) Every continuous function f : S → I fulfilling the functional equation is constant.
Proof. First we prove (A). So, let f : I → S be a continuous function that fulfills functional Equation (21). Then, in view of (4) and (15), which means that the function h : Let h 0 : R → S be the solution of The converse is easy to check. Now, we prove (B). Let f : S → I be a continuous solution to (22). Then, in view of (4) and (15), whence the function h : S → R, h(u) = H( f (F 0 (u))) for u ∈ F −1 0 (S) = S, is continuous and satisfies the equation Thus, the set h(S) is compact (because S is compact and h is continuous) and it is a subgroup of (R, +). The only such subgroup is the trivial one {0}, which means that h(S) = {0}. Since F 0 is bijective and H is injective, f must be constant.
Using Theorem 2 and some results from Reference [22] we also get the following proposition on the minimal homeomorphisms on S. Let us recall (see Reference [22] S does not contain any non-empty, proper, closed f -invariant subset (see References [23,24] for some related results).
Proposition 4.
A function T : S → S is a minimal homeomorphism if and only if there exist a ∈ S and a continuous, associative, and cancellative binary operation : S × S → S such that a = a n for n ∈ N, n > 1, where a 1 = a and a n+1 = a n a for n ∈ N (N denotes the set of positive integers).
Proof. Assume that T : S → S is a minimal homeomorphism. Then, there exist an irrational real number c and a homeomorphism ψ : S → S with ψ(T(x)) = e 2πic · ψ(x) for x ∈ S (see Reference [22] (Ch.3 §3 Th.1,3)). Define a binary operation : S × S → S by and put a := ψ −1 (e 2πic ). Then, which means that (30) holds. Now, suppose that T : S → S is of form (29), where : S × S → S is a continuous, associative, and cancellative binary operation. In view of Theorem 2, there exists a homeomorphism F 0 : S → S such that (15) is valid. Consequently, T is a homeomorphism and F −1 Now, we show that c is irrational. So, for the proof by contradiction suppose that there are n, k ∈ N with (n − 1)c = k. Then, a n = F 0 (e 2πinc ) = F 0 (e 2πik · e 2πic ) = F 0 (e 2πic ) = a. This is a contradiction to (30).
Thus, we have proved that c is irrational. Hence, it follows that the set {e 2πinc : n ∈ Z} is dense in S; thus, by (33), the set {T n (x) : n ∈ Z} is dense in S for every x ∈ S, which means that T is a minimal homeomorphism. This ends the proof.
Before our last proposition in this section, let us remind the notion of continuous flow on a topological space X. Namely (cf. References [13,25]), if f t : X → X for t ∈ R is a family of maps such that f t+s = f t • f s for all t, s ∈ R, f 0 is the identity map on X and the mapping Φ : X × R → X, given by Φ(x, t) = f t (x), is continuous, then we say that the family f t : X → X for t ∈ R is a continuous flow on X.
Observe that, if f t : X → X for t ∈ R is a continuous flow on a topological space X, then Φ : , is a continuous solution of the translation functional equation The last proposition in this section shows that the continuous, associative, and cancellative binary operations on S can be used in a description of the continuous flows on S (cf. References [26,27]) and therefore also in a description of the continuous solutions Φ : S × R → S to the translation functional equation.
Proposition 5.
A family of continuous functions {T t : S → S : t ∈ R} is a continuous flow such that either T 1 = id (the identity function on S) or T 1 (x) = x for x ∈ S if and only if there exist a continuous, associative, and cancellative operation : S × S → S and a continuous solution f : R → S of the functional equation Proof. Let {T t : S → S : t ∈ R} be a continuous flow such that either T 1 = id or T 1 (x) = x for x ∈ S. Then, by [13] (Theorem 2), there exist d ∈ R and an orientation preserving homeomorphism ψ : S → S with T t (x) = ψ −1 (ψ(x) · e idt ) for x ∈ S, t ∈ R. Let : S × S → S be the operation given by (31). Then, is continuous and (S, ) is a group. Next, the function f : R → S, f (t) = ψ −1 (e idt ) for t ∈ R, fulfills Equation (35) (because ψ( f (t)) = e idt for t ∈ R) and Conversely, suppose that (36) holds. Then, it is easily seen that {T t : t ∈ R} is a continuous flow. Further, by Theorem 2, (S, ) is a group. So, if there is x 0 ∈ S such that T 1 (x 0 ) = x 0 , then, by (36), x 0 f (1) = x 0 , whence f (1) is the neutral element of (S, ). Thus, (36) implies that T 1 = id. This completes the proof.
Finally, let us mention that somewhat related issues in particle physics can be found in Reference [28]. Moreover, it seems that Theorem 2 can be applied in finding continuous solutions f : S → S of some suitable functional equations analogously as Theorem 1 has been used in References [29][30][31] for real functions.
Auxiliary Results
In this section, we provide information and several observations necessary in the proof of Theorem 2 for the construction of function F 0 . They are presented in a series of lemmas, remarks, and corollaries. We start with some notations and definitions.
We define an order in S as follows: for u, v ∈ S, where Arg u ∈ [0, 2π) stands for the argument of the complex number u. For every It is known (cf., e.g., Reference [22] (Chap. 2)) that, for every homeomorphism T : S → S, there exists a unique homeomorphism f : R → R satisfying T(e 2πix ) = e 2πi f (x) for x ∈ R. If f is increasing (decreasing), then we say that homeomorphism T preserves (reverses) orientation. Note that every homeomorphism T : S → S preserves or reverses orientation. Now, we will prove several auxiliary lemmas, corollaries and remarks. So, let (S, ) be as in Theorem 2. Then, (S, ) is a topological group (see Reference [13] (Theorems 1.10 and 1.13)). Denote by e the neutral element of (S, ) and define • : S × S → S by Remark 4. It is easily seen that the mapping γ : S → S, γ(u) = e · u for u ∈ S, is an isomorphism from (S, •) onto (S, ). So, (S, •) is a group with the neutral element equal 1. Next, γ is a homeomorphism and whence (S, •) is a topological group. For every u ∈ S, by u −1 , we always denote the inverse element to u in the group (S, •) and J : S → S is defined by J In what follows, u 1 := u and u n+1 := u • u n for u ∈ S and n ∈ N. Clearly, by the associativity of the operation, we have u n+m = u n • u m for u ∈ S, n, m ∈ N.
Next, given u ∈ S, we define functions L u , R u : S → S by the formulas: Remark 5. Let u ∈ S \ {1}. Since (S, •) is a topological group, L u and R u are homeomorphisms without fixed points and therefore preserve orientation (see Reference [13] (Remark 3)).
Proof. First, suppose that u = u −1 for every u ∈ S, that is u 2 = 1 for every u ∈ S. Let u, v ∈ S and 1 ≺ u ≺ v. Then, by and Remark 5, On the other hand, c(1, u) • u = a(1, u), and, consequently, v • u ≺ u, a contradiction because
Proof. It follows from Lemma 3 that there exists
= J(u 0 ) and J(1) = 1, J reverses orientation. Thus, the assertion follows from the fact that 1 ∈ J(a(u, v)).
From Lemma 5 (for u = 1), we obtain the following Corollary 1. If s, v ∈ S \ {1} and s ≺ v −1 , then s ≺ v • s and s ≺ s • v.
Lemma 6.
Let v ∈ S, v ≺ v −1 . Then: Proof. Assertion (a) results from Corollary 1 (with v = s). For the proof of (b), fix e ≺ t ≺ s ≺ v. Then, by Lemma This ends the proof.
Lemma 7.
Assume that u ∈ S and u ≺ u −1 . Then, for every x ∈ S, x ≺ u 2 , there exists exactly one y ∈ S, y ≺ u, such that y 2 = x. Moreover, y ≺ x.
Proof. The proof is by induction with respect to n. The case n = 1 results from Lemma 7. Fix n ∈ N and assume that A n (u) = ∅ and there exists the smallest element p n (u) of A n (u) with p n (u) ≺ u. Then, according to Lemma 7, there is exactly one element y ≺ p n (u) such that y 2 = p n (u). Note that y 2 n+1 = (y 2 ) 2 n = [p n (u)] 2 n = u. Thus, y ∈ A n+1 (u).
Proof. In view of Remark 6, for every x ∈ D 0 , there exist unique k(x) ∈ N, m(x) ∈ N 0 , and x 1 , . . . , The proof is by induction with respect to p(x, y) := max{k(x), k(y)}. The case p(x, y) = 0 is trivial because, then, So, fix n ∈ N 0 and assume that the assertion holds for every x, y ∈ D with p(x, y) ≤ n. Take x, y ∈ D with p(x, y) = n + 1. Since, by (47), d(x) • d(y) = d(y) • d(x), it suffices to consider the following two cases. Case I: k(x) = k(y) = n + 1. Then, with x n+1 = y n+1 = 1. Write Clearly max{k(x 0 ), k(y 0 ), k(x 0 + y 0 )} ≤ n. So, in view of Remark 6, Lemma 8, and the induction hypothesis, we have (53) Hence, by Remark 6 and the induction hypothesis, we obtain Lemma 10. Let D and d : D → S be the same way as in Lemma 9. Then, d(x) ≺ u 0 for every x ∈ D, x < 1.
The proof is by induction with respect to k(x). The case k(x) = 1 results from Lemma 8. Fix n ∈ N and assume that the assertion holds for every x ∈ D 0 , x < 1, with k(x) ≤ n. Let y ∈ D 0 , k(y) = n + 1, and z = 2 −n−1 . From the induction hypothesis and Lemma 4, we obtain because Lemma 8 yields p n (u 0 ) ≺ u 0 . By Lemma 8, we also get p n+1 (u 0 ) ≺ p n (u 0 ) ≺ u 0 ≺ u −1 0 ≺ p n (u 0 ) −1 . Hence, in view of the induction hypothesis, Lemma 5 and Lemma 9, we have This completes the proof.
Lemma 11. Let D and d : D → S be the same way as in Lemma 9, x, y ∈ D and x < y < 1. Then, d(x) ≺ d(y). Proof. From Lemma 11, we get immediately M ≥ 1. We prove that M is a finite number. Suppose that u n 0 ≺ u −1 0 for every n ∈ N. Then, on account of Corollary 1, u n 0 ≺ u n+1 0 for n ∈ N and consequently there exists s = lim n→∞ u n 0 u −1 0 . If s = u −1 0 , then, in view of the continuity of the operation, we have
Lemma 13. Let D and d : D → S be the same way as in Lemma 9 and M be given by (57). Then, the following two statements are valid.
Proof. (a) Put r := inf{p n (u 0 ) : n ∈ N} (with respect to the order ≺) and suppose that 1 ≺ r. From Lemma 8 we get r = lim n→∞ p n (u 0 ) and, by Lemma 7 with u = r, p 1 (r) ≺ r ≺ u 0 ≺ u −1 0 ≺ r −1 . Thus, there exists t ∈ S with p 1 (r) ≺ t ≺ r. Hence, on account of Lemma 6b with v = r, r ≺ t 2 . By the definition of r and Lemma 8, there exists m ∈ N such that r ≺ p m (u 0 ) ≺ t 2 . Next, in view of Lemma 7, with u = t, there exists q ∈ a(1, t) with q 2 = p m (u 0 ), whence, by Lemma 8, p m+1 (u 0 ) q ≺ t ≺ r, which contradicts the definition of r.
(b) For the proof by contradiction, suppose that there exist v 1 , v 2 ∈ S such that 1 ≺ v 1 ≺ v 2 and U ∩ a(v 1 , v 2 ) = ∅. First, consider the case where t ≺ v 1 for every t ∈ U. Put s := sup U (with respect to the order ≺) and s 0 := min{s, s −1 }. By a) and Lemma 11, there exists m ∈ N such that d(2 −m ) ≺ s 0 .
Let D := [0, M + 2 −m ) ∩ D and fix x, y ∈ D such that x < y, 2 −m < y. Put x 0 = max{2 −m , x}. Then, by Lemma 12, there exist x , y ∈ [0, M) ∩ D and z, w ∈ D such that w ≤ z < 2 −m ≤ x < y and x 0 = x + w, y = y + z, which, in virtue of the definition of M and Lemma 4, means that d(x ) ≺ d(y ), and d(x ) ≺ d(z) −1 . Thus, on account of Lemmas 5 and 9, we obtain Since, according to the definition of M, d(x) ≺ d(y) for x, y ∈ D, x < y ≤ 2 −m , in this way, we have proved that d(x) ≺ d(y) for every x, y ∈ D with x < y. This contradicts the definition of M.
Now, consider the case where there is t ∈ U such that v 2 ≺ t. Put It results from a) and Lemma 11 that there exists q ∈ N with d( Moreover, Lemma 4 and Corollary 1 give Clearly, t 0 = d(r) for some r ∈ (0, L] ∩ D. Since 2 −q + r < M and, by Lemma 9, where D and d : D → S are the same way as in Lemma 9 and M is given by (57).
As an immediate consequence of the definition and Lemma 13, we have the following. Proof. Using Lemma 13a and Corollary 2, it is easily seen that d is continuous at 0. Next, we show that d is continuous at M. So, fix an increasing sequence {r n } n∈N in D such that lim n→∞ r n = M. Then, according to the definitions of M and d and Lemma 13b, for every t ∈ S, there is n ∈ N with t ≺ d(r n ), which means that lim n→∞ d(r n ) = 1. Hence, by Corollary 2, d is continuous at M.
To complete the proof suppose that d is discontinuous at a point x 0 ∈ (0, M). Then, by Corollary 2 and Lemma 13b, there exist z 1 , z 2 ∈ (0, M) with z 1 < z 2 and x 0 ∈ [z 1 , z 2 ], and a sequence {r n } n∈N in D ∩ ((0, M) \ {x 0 }) such that lim n→∞ r n = x 0 and Hence, r n < z 1 < z 2 < r n + 2 −n for n ∈ N. Since we obtain a contradiction. This ends the proof.
Proof. (a) Let {x n } n∈N , {y n } n∈N be sequences in D \ {x, y} such that x n + y n < M for n ∈ N and lim n→∞ x n = x, lim n→∞ y n = y. By the continuity of • and Lemmas 9 and 14, we have (b) Let {x n } n∈N , {y n } n∈N be increasing sequences in D \ {x, y} with lim n→∞ x n = x, lim n→∞ y n = y. Take a sequence {m n } n∈N in D ∩ (0, M) with lim n→∞ m n = M and write z n := x n + y n − m n for n ∈ N. Then, z n ∈ D for n ∈ N and, by Lemmas 9 and 14, we may write This ends the proof. Define a function F : S → S as follows:
Proof of
Then, by Corollary 2, F is strictly increasing (with respect to the order ≺ in S) and, on account of Lemma 14, it is easily seen that F is continuous. Moreover, it results from Lemma 15 that, for every v, w ∈ S, Consequently, F is a homeomorphism fulfilling Write F 0 := γ • F, where γ : S → S is defined as in Remark 4. Then, F 0 : S → S is a homeomorphism and, by (68), To complete the proof of Theorem 2 suppose that G 0 : S → S is also a homeomorphism such that x y = G 0 (G −1 0 (x) · G −1 0 (y)) for x, y ∈ S. Then, for every v, w ∈ S, whence G −1 0 (F 0 (v · w)) = G −1 0 (F 0 (v)) · G −1 0 (F 0 (w)). Putting A(u) = G −1 0 (F 0 (u)) for u ∈ S we get Moreover, A is a homeomorphism. Hence, in view of Lemma 2, A(u) ≡ u or A(u) ≡ u. Since it is easily seen that (69) yields this completes the proof.
Conclusions
Given a binary, continuous, associative, and cancellative operation : S 2 → S, we presented an elementary construction of all continuous isomorphisms from the group (S, ·) onto the semigroup (S, ), where S is the unit circle on the complex plane and · is the usual multiplication of complex numbers. There are exactly two such isomorphisms F, G : S → S, and they uniquely determine the form of operation in the following way: x y = F(F −1 (x)F −1 (y)) = G(G −1 (x)G −1 (y)) for x, y ∈ S.
Moreover, F(x) = G(x) for all x ∈ S. Using this result, we have easily determined all continuous solutions f : K → S of the functional equation f (x + y) = f (x) f (y) for x, y ∈ K, where K is either the set of reals R or the set of complex numbers C. We also provided some further applications of that result in functional equations and showed how to use it in the descriptions of the continuous flows and minimal homeomorphisms on S. In particular, we underlined some symmetry issues, which arise between the consequences of the result and of the analogous outcome for the real interval.
It would be interesting to investigate in the future to what extent the statements of Theorems 1 and 2 remain valid if the cancellativity is replaced by the one side cancellativity; for instance, by the left-cancellativity (a groupoid (X, •) is left-cancellative if x • y = x • z for all x, y, z ∈ X with y = z).
The next step might be to investigate how much the associativity assumption can be weakened; for instance, to what extent the assumption can be replaced by the square symmetry defined by the formula: (x • x) • (y • y) = (x • y) • (x • y). A natural example of square symmetric operation in R, which is not associative, is given by: x y = ax + by + c for x, y, z ∈ R, where a, b, c ∈ R are fixed and ac = bc, or a 2 = a, Such new results would have interesting applications in functional equations in a similar way as Theorem 1 in References [29][30][31]. | 8,675.8 | 2020-11-29T00:00:00.000 | [
"Mathematics"
] |
Low Frequency Waves Detected in a Large Wave Flume under Irregular Waves with Different Grouping Factor and Combination of Regular Waves
This paper describes a set of experiments undertaken at Universitat Politècnica de Catalunya in the large wave flume of the Maritime Engineering Laboratory. The purpose of this study is to highlight the effects of wave grouping and long-wave short-wave combinations regimes on low frequency generations. An eigen-value decomposition has been performed to discriminate low frequencies. In particular, measured eigen modes, determined through the spectral analysis, have been compared with calculated modes by means of eigen analysis. The low frequencies detection appears to confirm the dependence on groupiness of the modal amplitudes generated in the wave flume. Some evidence of the influence of low frequency waves on runup and transport patterns are shown. In particular, the generation and evolution of secondary bedforms are consistent with energy transferred between the standing wave modes.
Introduction
Extreme storms may significantly affect the coastal environment, especially in terms of erosion and sediment transport.They can provoke disastrous consequences such as sediment transport beyond the surf zone to unusual depths [1].Waves reaching a coastline release the majority of their energy and momentum within the surf zone as intense turbulence generated at the front face of the breaker.However, a portion of that energy is transferred to low frequency modes [2], like Low Frequency Waves, longshore currents, rip-currents and shear waves, that are oscillations generated by a shear instability of the mean longshore current profile, especially on barred beaches [3,4].Low-frequency waves can be generated from intense interaction between short waves and between short waves and long waves at the surf-swash boundary [5,6].
Cross-shore standing long wave swash oscillations are usually forced by infragravity frequency (f < 0.05 Hz) waves [7][8][9][10][11][12][13][14] as are swash oscillations due to waves traveling or oscillating along the water edge, parallel to the mean-water line (edge waves) [15][16][17][18][19].Although wind-waves or short-waves (typical frequency of about 0.1 Hz) are the major force behind the swash zone (SZ) dynamics, the importance of the SZ for the generation/transformation of low frequency motions has been recognized [6,[20][21][22][23].In the SZ, in fact, while the final dissipation of short-wave (wind and swell) energy occurs, the Low Frequency Wave (LFW hereinafter) energy (typical wave frequencies between 0.03 and 0.003 Hz) is, generally, reflected back seaward.In addition, intense interaction between short waves and between short waves and long waves at the surf-swash boundary can lead to the generation and reflection of further LFW [5,6].Superficial SZ hydrodynamics, subsurface SZ hydrodynamics, sediment dynamics and co-related beachface morphodynamics determine and are strongly determined by the frequency of swash motion in no-tidal seas [24][25][26][27][28][29][30][31].Nonlinearity is a major mechanism responsible for LFW.However, amplitude modulation of waves surface elevation (also termed wave grouping or groupiness) represents another predominant feature in short wave processes affecting long wave generation.
The Groupiness Factor (GF) is a measure of the degree of grouping or rather than of the amplitude modulation of the incident wave field.It is defined as the standard deviation of Smoothed Instantaneous Wave Energy History (SIWEH) normalized with respect to its mean value.A technique which involves no low-pass filtering to compute the low frequency part of the square of the water surface elevation is the Hilbert transform technique.Groupiness Factor is a global measure of the variance contained, in the entire low frequency, part of the square of the water surface elevation [32,33].Some questions still remain as to which mechanisms dominate under different surf zone conditions.For example, while it is well known that more regular waves (swell waves) tend to promote recovery but irregular waves (sea waves) tend to promote erosion, it is not clear if this is related to changes in nonlinearity or groupiness or to changes in energy [34][35][36].
In this context, the project SUSCO (Swash zone response under grouping Storm Conditions) used the Hydralab III facility at the large wave flume of the Maritime Engineering Laboratory (LIM), Catalonia University of Technology (UPC) [37].The objectives of the project were to compare the shoreline response and the SZ hydrodynamics of various wave regimes.In particular, the beach response between monochromatic conditions and wave groups were investigated as a direct result of the wave groupiness.The effects of forced and free long waves induced by the groupiness were also examined.
The data provided by the SUSCO campaign represent a comprehensive and controlled series of tests for evaluating in detail many complex phenomena affecting the hydro-morphodynamic in the surf and SZ.Starting from that dataset, the main aim of this study is to compare the effect of various wave regimes on very low frequencies generations.In particular, wave flume seiching is investigated as a potential effect of low-frequency energy during the experiments.As known, wave generation in an enclosed flume could cause seiching owing to wave reflections or wave grouping effects that can transfer wave energy to low-frequencies [38].
Often wave flume experiments involve wave reflection generated by structure/beaches.Nowadays this problem is solved by controlling the paddle through an active reflection system, which is not applicable to long wave absorption.When active absorption is applied, the reflected waves approaching the generator are predicted in real time and paddle control signals are modified to absorb the waves approaching the generator.The result is that the control of the incident waves is maintained throughout the test.The wave resonance in the wave flume is generated by the reflection phenomena occurring when the wave frequency begins to be equal to the fundamental or harmonic resonant frequency of the flume.Lengthwise oscillations of long waves in a flume can be troublesome as they take a long time to dissipate, owing to their high reflectivity [39].
The wave generator does not have the capability to absorb long wave energy, it is first necessary to see if any of the long wave activity is due to resonance of certain frequencies with the wave flume (seiche).The natural frequencies of the wave flume have to be determined by eigenvalue analysis.
Bellotti et al. [40], on the basis of data from tests performed at the same large wave flume of LIM, found a non-negligible low-frequency component that can be addressed to seiches of the flume.Resonant seiche response in a wave flume (or wave tank) represents an unfortunate drawback, especially for mobile bed experiments.In a resonant condition, the excitation of low-frequency (resonant) modes of the wave flume results in components whose amplitudes can increase to be of the same order as the primary waves [41].Some authors are concerned about resonant cross-tank seiching that would occur in a wave flume under different wave conditions [42,43].The variation in seiching has been found to modify the final equilibrium profile and the bar-trough morphology [35,36].
For these reasons, it is important to quantify any influence of the low-frequency for erosive and accretive wave conditions especially concerning the interpretation of different hydro-morphodynamic effects.Furthermore, among the generated waves of the experimental tests described in the Section 2, the results for random and combination tests are presented here.
Low frequencies are studied through the Spectral and Eigen analysis as presented in Section 3, estimating the frequency of the dominant seiches in the wave flume.Furthermore, the influence of the low frequencies on the wave energy spectra is investigated using a numerical solution, which adopts as input the measured data of the beach profiles of the case studies.Finally, combined analysis between modes for surface displacement and net sediment transport has been used to clarify the influence of LFW on morphodynamics.The results of these analyses are shown in Sections 4 and 5; then the main findings are summarized in Section 6.
Experimental Setup
In this paper, wave interaction with a rouble mound breakwater has been modeled using numerical simulations.The CIEM (Canal d'Investigació i Experimentació Marítima) large-scale wave flume is 100 m long, 3 m wide and up to 4.5 m deep.The beach consisted of commercial well-sorted sand with a medium sediment size (d 50 ) of 0.25 mm, with a narrow grain size distribution (d 10 = 0.154 mm and d 90 = 0.372 mm) and a measured settling velocity (w s ) of 0.034 m/s.The experimental profile and equipment distribution is presented in Figure 1.The x-coordinate origin is at the wave paddle at rest condition before starting the waves and positive toward the shoreline.The movable bed profile started after 31 m of concrete with a section 1:20 slope from x = 31 to 37 m prior to a plane bed, from x = 37 to 42 m, followed by a 1:15 slope plane beach (Figure 1).Prior to running each wave condition, the water depth was performed by manual reshaping and then compacted by running 10 minutes of 'smoothing' wave conditions, in order to return almost the same initial profile.The water depth at the toe of the wedge-type wave paddle was 2.5 m [44].
Water 2018, 10, x FOR PEER REVIEW 3 of 20 seiching that would occur in a wave flume under different wave conditions [42,43].The variation in seiching has been found to modify the final equilibrium profile and the bar-trough morphology [35,36].
For these reasons, it is important to quantify any influence of the low-frequency for erosive and accretive wave conditions especially concerning the interpretation of different hydromorphodynamic effects.Furthermore, among the generated waves of the experimental tests described in the Section 2, the results for random and combination tests are presented here.
Low frequencies are studied through the Spectral and Eigen analysis as presented in Section 3, estimating the frequency of the dominant seiches in the wave flume.Furthermore, the influence of the low frequencies on the wave energy spectra is investigated using a numerical solution, which adopts as input the measured data of the beach profiles of the case studies.Finally, combined analysis between modes for surface displacement and net sediment transport has been used to clarify the influence of LFW on morphodynamics.The results of these analyses are shown in Sections 4 and 5; then the main findings are summarized in Section 6.
Experimental Setup
In this paper, wave interaction with a rouble mound breakwater has been modeled using numerical simulations.The CIEM (Canal d'Investigació i Experimentació Marítima) large-scale wave flume is 100 m long, 3 m wide and up to 4.5 m deep.The beach consisted of commercial well-sorted sand with a medium sediment size (d50) of 0.25 mm, with a narrow grain size distribution (d10 = 0.154 mm and d90 = 0.372 mm) and a measured settling velocity (ws) of 0.034 m/s.The experimental profile and equipment distribution is presented in Figure 1.The x-coordinate origin is at the wave paddle at rest condition before starting the waves and positive toward the shoreline.The movable bed profile started after 31 m of concrete with a section 1:20 slope from x = 31 to 37 m prior to a plane bed, from x = 37 to 42 m, followed by a 1:15 slope plane beach (Figure 1).Prior to running each wave condition, the water depth was performed by manual reshaping and then compacted by running 10 minutes of 'smoothing' wave conditions, in order to return almost the same initial profile.The water depth at the toe of the wedge-type wave paddle was 2.5 m [44].
Among all the instruments installed and used in the controlled area [45], 6 Acoustic Wave Gauges (AWGs), 10 Resistant Wave Gauges (WGs), and 1 beach profiler were acquired.These instruments were sampled at 20 Hz according to the characteristics of the acquisition system (Figure 1).The bottom profile information was acquired by means of a mechanical bed profiler that measures the emerged and submerged profile along a central line of the flume.The mechanical profiler consists of a wheel at the end of a pivoting arm which is mounted on a moving platform Among all the instruments installed and used in the controlled area [45], 6 Acoustic Wave Gauges (AWGs), 10 Resistant Wave Gauges (WGs), and 1 beach profiler were acquired.These instruments were sampled at 20 Hz according to the characteristics of the acquisition system (Figure 1).
The bottom profile information was acquired by means of a mechanical bed profiler that measures the emerged and submerged profile along a central line of the flume.The mechanical profiler consists of a wheel at the end of a pivoting arm which is mounted on a moving platform along the flume.A computer records the velocity of the platform and the angle rotation of the arm.Those data are used to extract the X and Z flume bathymetry.The overall vertical profile accuracy is therefore estimated to be ±10 mm.Such accuracy is comparable with more recent experimental study on beach profile evolution [46], where the standard deviation of the vertical profile was found to be equal to 0.1 cm and 0.2 cm for the emerged and submerged beach profile, respectively.The vertical datum for the profile data is the sea water level (SWL in Figure 1).The horizontal datum is that of the reference point in the flume, which is 7.4 m from the wavemaker and approximately 36 m seaward of the beach toe.
As previously mentioned, the tests were carried out for two different levels of energy flux, defined as "erosive" and "accretive", covering a broad range of wave amplitudes, short and long wave frequencies, modulation rates and group frequencies.The behavior of the two wave regimes was forecasted by mean of the dimensionless sediment fall velocity number (Dean number) [47] and on the basis of previous experiments in the CIEM flume using similar wave conditions.The final profiles and net sediment transport are consistent with these initial estimates.Therefore, hereinafter "erosive test" and "accretive test" refer to wave conditions able to produce morphological patterns (over the whole beach profile) in which erosive or accretive conditions dominate, respectively.The following waves were appointed: • random waves with different Grouping Factors (GF); • combination of free partial standing long waves plus monochromatic short waves (hereinafter, combined waves); • regular monochromatic; • bichromatic waves (including bound long waves).
Among all the tests generated during the test campaign, random and combination waves in both cases erosive and accretive are here examined.In the Tables 1 and 2 the wave characteristics such as wave height H (m), related to the different wave components, and wave period T (s) for those tests in erosive and accretive conditions are listed.Considering the measured sediment fall velocity of 0.034 m/s, the beach was morphologically characterized by intermediate and reflective conditions for erosive and accretive waves regimes respectively [48].According to Wright and Short [49], the beach state is a function of breaker height, period and of the sediment size.Four random wave trains, RE_1, RE_2, RA_1, RA_2, were generated with the same variance based on wave height and the same peak frequency as their corresponding monochromatic pair.On the other hand, to generate long-wave short-wave combinations, the monochromatic conditions have been perturbed by small amplitude long waves, added to the control signal.Hence, cases CE_1, CE_2, CA_1, and CA_2 represent the addition of free long waves to otherwise monochromatic wave conditions (Tables 1 and 2).Controlled wave generation was achieved by a wedge-type wave paddle, particularly suited for intermediate-depth waves.The wave generation software used for controlling the wave paddles was AWASYS5 [50].The groupiness was slightly varied as well as the phases, obtaining random waves of identical energy spectrum (conforming to JONSWAP spectrum with γ = 3.3) [48].The Grouping Factor, which measures the degree of grouping, or rather than of the amplitude modulation of the incident wave field, is computed according with Hald [51] (Equation ( 1)) as the standard deviation σ of half the squared wave surface elevation envelope curve E(t) relative to the squared variance σ 2 of the surface elevation η(t) as follows: In particular, the half squared envelope signal is computed by the Hilbert transformation.The standard approach of wave generation is to use random uncorrelated phases which in the average leads to GF = 1.0 along with σ 2 ≈ 0.13 for 500 waves.[48][49][50][51].Greater values of GF suggest that small waves tend to be succeeded by small waves, and large waves by other large waves.
All tests started from a similar initial 1/15 handmade slope.For the erosive conditions the reshaping took place along the active profile, while the reshaping occurred from the landward edge of the bar trough to the run-up limit on the accretive conditions.The tests were composed of four steps which included five bottom profiles.Each test lasted 24 min and was repeated 6 times.Consequently, the final profile (P4) was generated after a total active wave time equal to 144 min.
Different bottom profiles where acquired during each wave testing condition.One at the beginning of the experiments (P0), and consecutively at the end of the 1st (P1), 2nd (P2), 4th (P3) and 6th (P4) runs for each of the 13 wave conditions.
Tests were a compromise between the desire to reach an equilibrium profile and the available experimental time.
Methods
Data treated in this work concern the first step of 24 min duration for each test.The wave conditions started with a similar initial beach profile for both wave conditions, accretive and erosive, respectively, as shown in Figures 2 and 3.For this reason, the results discussed below can be considered without significant influence from morphodynamic feedback during the measurements.Such conditions afford to obtain reliable observations of hydrodynamic behaviors of LFW also in movable bed experiments.
Furthermore, before carrying out the computation of net sediment volume variation ∆V SZ , bed elevation data have been corrected.In fact, it is worth mentioning that the total beach volume of each profile was not the same.Due to profiler measurement errors, in particular the inability to accurately measure ripple volumes and some non-uniformity of the profile, calculation of ∆V along the whole profile never returns identically zero.Therefore, errors in the calculated ∆V were corrected by distributing the mismatch in sediment volume along the whole profile, leading to a zero value of ∆V.Generally, the error distributed is of a few millimeters, hence the correction does not significantly affect the volume computation.This approach derives from the method proposed by and Baldock et al. [18] for calculation of the net time-averaged sediment transport.The analysis assumed a depth of closure for the sediment transport calculations at x = 60 m (at a water depth of approximately 1 m), and applied the sediment continuity correction over the active profile, that is, landward of x = 60 m or shallower than 1 m.This enables greater resolution and improved accuracy in the ∆V calculations.Closure errors corresponded to a mean error in vertical elevation across the profile that ranged from 3 mm to 15 mm, with an average of 9 mm over all tests, which is consistent with the estimated accuracy of the bed profiler.However, other methodologies could be applied to reconstruct transverse profiles and to study coastal evolution, in particular performing a comparison of aerial photographs according to Muñoz-Pérez et al. [52].
In the present paper, two kinds of analysis have been carried out, as described in the following two sub-sections.
Water 2018, 10, x FOR PEER REVIEW 6 of 20 3 mm to 15 mm, with an average of 9 mm over all tests, which is consistent with the estimated accuracy of the bed profiler.However, other methodologies could be applied to reconstruct transverse profiles and to study coastal evolution, in particular performing a comparison of aerial photographs according to Muñoz-Pérez et al. [52].
In the present paper, two kinds of analysis have been carried out, as described in the following two sub-sections.
Spectral Analysis
On the basis of the time series of surface elevations the spectral analysis has been performed to evaluate the energy characteristics induced by wave motion.The response of low frequency variations in the large wave flume for random and combination tests has been analyzed.In addition, the spectra response is observed taking into account the time series of the water surface elevation in accretive and erosive wave conditions.
According to Molloy [53], the spectral density represents the momentum corresponding to the specific frequencies.A higher momentum or spectral density corresponds to a larger amplitude oscillation.
Operatively, a Fast Fourier Transform (FFT) is applied to execute spectral analysis for each test.Once the peaks in the spectra corresponding to low-frequencies and harmonics are identified, they are later compared with wave modes determined by the Eigen analysis, as described in the following.3 mm to 15 mm, with an average of 9 mm over all tests, which is consistent with the estimated accuracy of the bed profiler.However, other methodologies could be applied to reconstruct transverse profiles and to study coastal evolution, in particular performing a comparison of aerial photographs according to Muñoz-Pérez et al. [52].
In the present paper, two kinds of analysis have been carried out, as described in the following two sub-sections.
Spectral Analysis
On the basis of the time series of surface elevations the spectral analysis has been performed to evaluate the energy characteristics induced by wave motion.The response of low frequency variations in the large wave flume for random and combination tests has been analyzed.In addition, the spectra response is observed taking into account the time series of the water surface elevation in accretive and erosive wave conditions.
According to Molloy [53], the spectral density represents the momentum corresponding to the specific frequencies.A higher momentum or spectral density corresponds to a larger amplitude oscillation.
Operatively, a Fast Fourier Transform (FFT) is applied to execute spectral analysis for each test.Once the peaks in the spectra corresponding to low-frequencies and harmonics are identified, they are later compared with wave modes determined by the Eigen analysis, as described in the following.
Spectral Analysis
On the basis of the time series of surface elevations the spectral analysis has been performed to evaluate the energy characteristics induced by wave motion.The response of low frequency variations in the large wave flume for random and combination tests has been analyzed.In addition, the spectra response is observed taking into account the time series of the water surface elevation in accretive and erosive wave conditions.
According to Molloy [53], the spectral density represents the momentum corresponding to the specific frequencies.A higher momentum or spectral density corresponds to a larger amplitude oscillation.
Operatively, a Fast Fourier Transform (FFT) is applied to execute spectral analysis for each test.Once the peaks in the spectra corresponding to low-frequencies and harmonics are identified, they are later compared with wave modes determined by the Eigen analysis, as described in the following.
Eigen Analysis
It is first necessary to study if any of the long wave activity is due to resonance of specific frequencies with the wave flume (hereinafter "seiche").According to Rabinovich [54] the resonance occurs when the dominant frequencies of the external forcing match the Eigen frequencies of the flume.
In particular, seiche is long-period standing oscillation in an enclosed basin.The resonant (eigen) frequency of seiche is determined by basin geometry and water depth.The set of eigen frequencies and associated modes are a fundamental property of a specific basin [54].The mode of a seiche is the number of nodes it has within the system.The period of a seiche with "n" nodes is given by Merian's formula [55].This assumes that the basin is rectangular, with a uniform depth.The related period can be computed as: where: Tn is the period of an nth mode seiche; L is the wavelength of the seiche (length of the basin); n is the number of nodes/modes of the seiche; g is the acceleration due to gravity 9.81 m/s 2 , and h is the average water depth.The period derived from Equation ( 2) is the time it takes for the waveform to oscillate from one end of the basin to the other and back.That is, to travel a distance of twice the basin length.Obviously, the different seiche modes are not mutually exclusive.Seiches with various different modes can occur together in a system.However, the fundamental oscillation is usually dominant, as first shown by Wilson [56].
In the next section the results of the calculated Eigenvalues and Eigenmodes through a numerical approach are shown.This method was presented by Kirby et al. [57] to determine the family of Eigenmodes for measured wave flume geometry.The resulting matrix eigenvalue problem, derived from the next equation (Equation ( 3)), is solved using the EIG routine in MATLAB™.
where q = zu is the volume flux, u the horizontal velocity, z the water depth, λ = ω 2 /g represents the Eigenvalue for the problem and ω the angular frequency.Equation ( 3) is finite differenced using centered second-order derivatives.The corresponding expansion, orthogonality condition, and dispersion relation are given by: in which F n and F m represent the family of eigenmodes of order n and m respectively.In this study, the four families of eigenmodes, F 1 , F 2 , F 3 and F 4 , have been determined by solving numerically Equations ( 3)-(6).
Results and Discussion
Spectral power density (expressed in m 2 /Hz) and water surface elevation signal plots (related to the wage gauge WG5 at 21.58 m in the wave flume) are presented in Figure 4.It is possible to note the great variability of the oscillation modes even in the region of low frequencies, according to Cáceres and Alsina [58].In fact, the authors found that random waves need longer time periods to reach a stationary position.These are wave conditions with wider breaking areas.Furthermore, they found that the wave group and short wave period ratio plays a significant role in the suspended sediment fluxes through the generation of harmonics with longer periods than the wave group.This is due to the fact that the time evolution of morphological features is affected by the length of the breaking area.
Focusing on frequencies under 0.1 Hz, the power spectral density peak values and the corresponding frequencies for random tests in both the erosive and accretive case are reported in Tables 3 and 4. Those values are related to the first, second and third harmonic.Water surface elevation time series are between −0.2 and 0.2 m meanly for all the studied tests.
Furthermore, it is noted that increasing the grouping factor the spectral density associated to the 1st harmonic strongly decrease in random waves.In fact, for cases RA_2 and RE_2 the lowest frequency peak is relatively less pronounced (Tables 3 and 4).That energy density is moved in the higher part of the spectra, as further demonstrated by Table 5, where power levels associated with various sections of the spectra are reported.In particular, for erosive tests, no relevant difference in the 2nd and 3rd harmonic energy density is observed.Moreover, beyond the first harmonic, a monotone trend in wave spectra is found, while for accretive conditions the spectral density slightly increases moving toward lower frequencies.It is notable as for tests RE_1 and RE_2 the third It is possible to note the great variability of the oscillation modes even in the region of low frequencies, according to Cáceres and Alsina [58].In fact, the authors found that random waves need longer time periods to reach a stationary position.These are wave conditions with wider breaking areas.Furthermore, they found that the wave group and short wave period ratio plays a significant role in the suspended sediment fluxes through the generation of harmonics with longer periods than the wave group.This is due to the fact that the time evolution of morphological features is affected by the length of the breaking area.
Focusing on frequencies under 0.1 Hz, the power spectral density peak values and the corresponding frequencies for random tests in both the erosive and accretive case are reported in Tables 3 and 4. Those values are related to the first, second and third harmonic.Water surface elevation time series are between −0.2 and 0.2 m meanly for all the studied tests.
Furthermore, it is noted that increasing the grouping factor the spectral density associated to the 1st harmonic strongly decrease in random waves.In fact, for cases RA_2 and RE_2 the lowest frequency peak is relatively less pronounced (Tables 3 and 4).That energy density is moved in the higher part of the spectra, as further demonstrated by Table 5, where power levels associated with various sections of the spectra are reported.In particular, for erosive tests, no relevant difference in the 2nd and 3rd harmonic energy density is observed.Moreover, beyond the first harmonic, a monotone trend in wave spectra is found, while for accretive conditions the spectral density slightly increases moving toward lower frequencies.It is notable as for tests RE_1 and RE_2 the third harmonic represents the highest peak before the carrier wave (frequency of about 0.25 Hz).For accretive conditions a slight peak appears at about 0.11 Hz.In the case of combination waves the 1st harmonic does not decrease strongly, as for the combination cases, but at about 0.8 Hz.Moreover, a power spectral response corresponding to twice the wave frequency is noted, especially for the erosive wave regimes (Figure 4e-h).In Tables 6-8 the peak frequencies, periods and power spectral density are summarized, also related to various sections of the spectra.In particular, the difference for the combination waves in accretive and erosive conditions is higher when is considered the power spectral density at frequencies less than 0.03 Hz.For tests RA_1 and RA_2 a sort of spreading around the carrier wave should be noted (frequency of about 0.158 Hz and 0.167 Hz respectively).In particular, this spreading takes the form of a second downshifted peak, more evident for RA_2.The shift from the main wave is of about ±0.045 Hz, that is, an amount approximately equal to the second harmonic.It is worth noting that the presence of 2nd harmonic disturbance on the carrier wave is also perceptible for RE_2.The latter turns out to be important looking at Figure 5, where spectral density ratio between third and second mode and between second and first harmonic are summarized.A heuristic explanation for the singular behavior of RE_2 and the influence of the second harmonic could be addressed to the enhanced non-linear interactions due to wave groupiness forcing a long wave at the frequency of second harmonic.In this vein, it is worth considering that the largest wave height (both maximum and significant) compared to others wave conditions with the same energy flux is formed precisely during RE_2.And just like the undertow largely follows the instantaneous wave height [59], the larger waves in the groups could dominate the whole hydrodynamics.For tests RA_1 and RA_2 a sort of spreading around the carrier wave should be noted (frequency of about 0.158 Hz and 0.167 Hz respectively).In particular, this spreading takes the form of a second downshifted peak, more evident for RA_2.The shift from the main wave is of about ±0.045 Hz, that is, an amount approximately equal to the second harmonic.It is worth noting that the presence of 2nd harmonic disturbance on the carrier wave is also perceptible for RE_2.The latter turns out to be important looking at Figure 5, where spectral density ratio between third and second mode and between second and first harmonic are summarized.A heuristic explanation for the singular behavior of RE_2 and the influence of the second harmonic could be addressed to the enhanced non-linear interactions due to wave groupiness forcing a long wave at the frequency of second harmonic.In this vein, it is worth considering that the largest wave height (both maximum and significant) compared to others wave conditions with the same energy flux is formed precisely during RE_2.And just like the undertow largely follows the instantaneous wave height [59], the larger waves in the groups could dominate the whole hydrodynamics.The ratios between spectral density level at different harmonics of the combination tests show a similar behavior in the cases of erosive and accretive regime for the monochromatic waves perturbed with larger (CE_1 and CA_1) and smaller (CE_2 and CA_2) long waves, respectively (Figure 6).
In order to clarify the origin of these waves, the frequencies of the longest standing waves in the flume have been evaluated and compared with the measured lowest frequencies.The natural frequencies of the wave flume are determined by Eigenvalue analysis.The calculated periods and frequencies using formulas found in the literature were published in recent research conducted by Riefolo et al. [60].Measured and numerically predicted mode periods are compared in Figures 7 and 8, where a linear deviation from the bisector line is highlighted (see black dashed line).The frequencies calculated with the formula proposed by Merian [55] are shown in Table 9. Appreciably, correlation between the measured and calculated frequencies has been found.These tests, in fact, show a good correspondence between the calculated and measured frequencies especially for the first and second harmonic, for which the influence of wave flume-generated seiching can be definitively highlighted in the case of random waves (Figure 7).On the other hand, the The ratios between spectral density level at different harmonics of the combination tests show a similar behavior in the cases of erosive and accretive regime for the monochromatic waves perturbed with larger (CE_1 and CA_1) and smaller (CE_2 and CA_2) long waves, respectively (Figure 6).
In order to clarify the origin of these waves, the frequencies of the longest standing waves in the flume have been evaluated and compared with the measured lowest frequencies.The natural frequencies of the wave flume are determined by Eigenvalue analysis.The calculated periods and frequencies using formulas found in the literature were published in recent research conducted by Riefolo et al. [60].Measured and numerically predicted mode periods are compared in Figures 7 and 8, where a linear deviation from the bisector line is highlighted (see black dashed line).The frequencies calculated with the formula proposed by Merian [55] are shown in Table 9. Appreciably, correlation between the measured and calculated frequencies has been found.These tests, in fact, show a good correspondence between the calculated and measured frequencies especially for the first and second harmonic, for which the influence of wave flume-generated seiching can be definitively highlighted in the case of random waves (Figure 7).On the other hand, the monochromatic wave perturbed with the larger long waves for the erosive condition (CE_1) gives different variation, in the measured and calculated Eigenmodes, of the case in the accretive conditions (CA_1).Instead, a similar variation of the measured and calculated Eigenmodes for the tests CA_2 and CE_2 is highlighted where the monochromatic wave was perturbed with smaller long waves, for both analyzed conditions, accretive and erosive, respectively (Figure 8).This similarity has been confirmed by the ratios between spectral density level at different harmonics of the combination tests, as previously described in Figure 6.
Water 2018, 10, x FOR PEER REVIEW 12 of 20 different variation, in the measured and calculated Eigenmodes, of the case in the accretive conditions (CA_1).Instead, a similar variation of the measured and calculated Eigenmodes for the tests CA_2 and CE_2 is highlighted where the monochromatic wave was perturbed with smaller long waves, for both analyzed conditions, accretive and erosive, respectively (Figure 8).This similarity has been confirmed by the ratios between spectral density level at different harmonics of the combination tests, as previously described in Figure 6.
Table 9. Eigen values calculated through the formula proposed by Merian [55].different variation, in the measured and calculated Eigenmodes, of the case in the accretive conditions (CA_1).Instead, a similar variation of the measured and calculated Eigenmodes for the tests CA_2 and CE_2 is highlighted where the monochromatic wave was perturbed with smaller long waves, for both analyzed conditions, accretive and erosive, respectively (Figure 8).This similarity has been confirmed by the ratios between spectral density level at different harmonics of the combination tests, as previously described in Figure 6.
Table 9. Eigen values calculated through the formula proposed by Merian [55].
Additional Considerations
The effects of the LFWs have been assessed by a set of the four lowest computed modes (F1-F4) for volume flux in the wave flume, shown in Figures 9 and 10.Mode periods are computed by the eigen analysis based on water depth z.Very interesting results can be identified: Frequencies plot of measured versus calculated Eigenmodes with formula for rectangular basin for combination tests in the case of accretive and erosive condition.
Table 9. Eigen values calculated through the formula proposed by Merian [55].
Additional Considerations
The effects of the LFWs have been assessed by a set of the four lowest computed modes (F 1 -F 4 ) for volume flux in the wave flume, shown in Figures 9 and 10. are computed by the eigen analysis based on water depth z.Very interesting results can be identified:
Additional Considerations
The effects of the LFWs have been assessed by a set of the four lowest computed modes (F1-F4) for volume flux in the wave flume, shown in Figures 9 and 10.Mode periods are computed by the eigen analysis based on water depth z.Very interesting results can be identified: Moreover, the spreading around the carrier wave seen for RE_2, and probably due to the 2nd harmonic disturbance, leads to more homogeneous erosion phenomena compared to RE_1, where the formation of a well visible bar is recognizable.Final profiles and net sediment transport were consistent with these initial estimates [48].Despite the fact that RE_1, RE_2, CE_1 and CE_2 have identical spectral energy, their local effect on beach profile can be significantly different due the presence of LFWs effectively influencing the spectra.The greater concentration of power at certain frequencies due to larger wave grouping could promote a stronger influence of LF motions in the SZ, in particular for two reasons: 1. specific eigenmode of the wave flume (generated seiches) induces spreading or downshift of carrier wave frequency, as foreseen; 2. grouping of short waves in the inner surf zone could directly induce low-frequency oscillations of the shoreline.Moreover, the spreading around the carrier wave seen for RE_2, and probably due to the 2nd harmonic disturbance, leads to more homogeneous erosion phenomena compared to RE_1, where the formation of a well visible bar is recognizable.Final profiles and net sediment transport were consistent with these initial estimates [48].Despite the fact that RE_1, RE_2, CE_1 and CE_2 have identical spectral energy, their local effect on beach profile can be significantly different due the presence of LFWs effectively influencing the spectra.The greater concentration of power at certain frequencies due to larger wave grouping could promote a stronger influence of LF motions in the SZ, in particular for two reasons: 1. specific eigenmode of the wave flume (generated seiches) induces spreading or downshift of carrier wave frequency, as foreseen; 2. grouping of short waves in the inner surf zone could directly induce low-frequency oscillations of the shoreline.Moreover, the spreading around the carrier wave seen for RE_2, and probably due to the 2nd harmonic disturbance, leads to more homogeneous erosion phenomena compared to RE_1, where the formation of a well visible bar is recognizable.Final transport were consistent with these initial estimates [48].Despite the fact that RE_1, RE_2, CE_1 and CE_2 have identical spectral energy, their local effect on beach profile can be significantly different due the presence of LFWs effectively influencing the spectra.The greater concentration of power at certain frequencies due to larger wave grouping could promote a stronger influence of LF motions in the SZ, in particular for two reasons: 1.
specific eigenmode of the wave flume (generated seiches) induces spreading or downshift of carrier wave frequency, as foreseen; 2.
grouping of short waves in the inner surf zone could directly induce low-frequency oscillations of the shoreline.
As termed by Russell [61] and Smith and Mocke [62], LFWs are powerful agents of sediment transport as they remove large amounts of the sediment which is put into suspensions by the short (wind) waves.
Influence on Morphodynamic
Figures 11 and 12 summarize the net sediment volume variation, ∆V SZ , in the SZ (here approximated as the emerged beach) between the test start and end.These were obtained using the changes in bed elevation between profiles, above z = 0. Focus on the emerged SZ, during the analyzed first step of experiments, all wave conditions lead to a landward net sediment transport, except cases RE_2 and CE_1.For these wave conditions, the energy density associated with the second harmonic has been found about ten times the power level of the first mode, increasing the disturbance effect of the 2nd harmonic itself.Although erosive random waves have the same wave height and mean period, their morphological effect is quite different.It is noted that the grouping factor could promote the presence of multiple low-frequency motions responsible of nonlinear interactions.For this reason, there is a significant contribution that determines the evolution of surface changes.Unfortunately, the case CE_1 formed part of the tests that developed lateral cross-flume asymmetry; this does not significantly alter the wave height on WG 5, but the profile correction may not be sufficient for SZ sediment volume here computed.Hence, results for that case are not reported in Figure 12.
It is important to note as case CE_2 shows a positive net sand volume variation in the emerged SZ greater than accretive cases.Such conditions, however, are not in contrast with the typical erosion pattern along the whole profile (see [20]) of case CE_2.The reason for such strong "apparent" accretive behaviour in the SZ should be addressed to the initial beach profiles that, in such experiments, were usually the same or very similar for all wave conditions.Hence, in erosive conditions the beach is moving more rapidly toward an approximate equilibrium profile, and the mean beachface slope change induced by erosion of sediment provide some local deposition/positive slope change.
As termed by Russell [61] and Smith and Mocke [62], LFWs are powerful agents of sediment transport as they remove large amounts of the sediment which is put into suspensions by the short (wind) waves.
Influence on Morphodynamic
Figures 11 and 12 summarize the net sediment volume variation, ΔVSZ, in the SZ (here approximated as the emerged beach) between the test start and end.These were obtained using the changes in bed elevation between profiles, above z = 0. Focus on the emerged SZ, during the analyzed step of experiments, all wave conditions lead to a landward net sediment transport, except cases RE_2 and CE_1.For these wave conditions, the energy density associated with the second harmonic has been found about ten times the power level of the first mode, increasing the disturbance effect of the 2nd harmonic itself.Although erosive random waves have the same wave morphological is quite different.It is noted the grouping factor could promote the presence of multiple low-frequency motions responsible of nonlinear interactions.For this reason, there is a significant contribution that determines the evolution of surface changes.Unfortunately, the case CE_1 formed part of the tests that developed lateral cross-flume asymmetry; this does not significantly alter the wave height on WG 5, but the may not be sufficient for sediment Hence, case are not reported in 12.
It is important to note as case CE_2 shows a positive net sand volume variation in the emerged SZ greater than accretive cases.Such conditions, however, are not in contrast with the typical erosion pattern along the whole profile (see [20]) of case CE_2.The reason for such strong "apparent" accretive behaviour in the SZ should be addressed to the initial beach profiles that, in such experiments, were usually the same or similar for all wave conditions.Hence, in erosive conditions the beach is moving more rapidly toward an approximate equilibrium profile, and the mean beachface slope change induced by erosion of sediment provide some local deposition/positive slope change.
Influence on Swash Hydrodynamics
A tentative correlation between detected LFWs and runup is reported in the following.The measurements of runup have been conducted by a PC-based acquisition of digital camera records.Data processing has been carried out by using Image Processing Toolbox™ in MATLAB™ (version 2017, Natickk, MA, USA).The camera was mounted on the profiler's carriage at rest, close to the board of the beach and focused on the wave flume.The camera was calibrated every time that it was removed from the waterproof housing (i.e., when profiler was operating).The calibration procedure consists of determining the relative position of a minimum of four points ("targets") in order to find the initial location and orientation of the camera relative to the calibration frame, as described by [63][64][65][66][67]. Since the position of instruments (captured in the video records) was known, eight "target" points were used, enhancing the quality of analysis.Then, the process was automatically completed by the definition of all remaining pixel position on the frame.In particular, the relative position of runup (as separation between dry and wet zones) was identified.
Runup measurements took place within the first three minutes of each step in order to identify the role of spectral wave components and wave grouping on runup, starting from the same underlying beach conditions.
The good agreements between runup values form video analysis and the ones derived by the extrapolation of measurements from micro-acoustic wave gauges (which only provide the height of the swash lens) on some uprush/backwach cycle give confidence in the technique.
The maximum runup measured are reported in Table 10.Looking at the table, some preliminary considerations can be drawn: • "accretive" conditions do not necessarily involve smaller runup; • despite comparable energy levels, random waves give a runup twice higher than combination cases; • the higher the grouping factor the higher the maximum runup.
Hence, the data demonstrate that increasing differences in the spectral wave components through perturbations of monochromatic waves with long waves or increasing the grouping factor in random waves promotes the enhancement of wave run up, except for case R_E2.
Influence on Swash Hydrodynamics
A tentative correlation between detected LFWs and runup is reported in the following.The measurements of runup have been conducted by a PC-based acquisition of digital camera records.Data processing has been carried out by using Image Processing Toolbox™ in MATLAB™ (version 2017, Natickk, MA, USA).The camera was mounted on the profiler's carriage at rest, close to the board of the beach and focused on the wave flume.The camera was calibrated every time that it was removed from the waterproof housing (i.e., when profiler was operating).The calibration procedure consists of determining the relative position of a minimum of four points ("targets") in order to find the initial location and orientation of the camera relative to the calibration frame, as described by [63][64][65][66][67]. Since the position of instruments (captured in the video records) was known, eight "target" points were used, enhancing the quality of analysis.Then, the process was automatically completed by the definition of all remaining pixel position on the frame.In particular, the relative position of runup (as separation between dry and wet zones) was identified.
Runup measurements took place within the first three minutes of each step in order to identify the role of spectral wave components and wave grouping on runup, starting from the same underlying beach conditions.
The good agreements between runup values form video analysis and the ones derived by the extrapolation of measurements from micro-acoustic wave gauges (which only provide the height of the swash lens) on some uprush/backwach cycle give confidence in the technique.
The maximum runup measured are reported in Table 10.Looking at the table, some preliminary considerations can be drawn: • "accretive" conditions do not necessarily involve smaller runup; • despite comparable energy levels, random waves give a runup twice higher than combination cases; • the higher the grouping factor the higher the maximum runup.
Hence, the data demonstrate that increasing differences in the spectral wave components through perturbations of monochromatic waves with long waves or increasing the grouping factor in random waves promotes the enhancement of wave run up, except for case R_E2.
Conclusions
Experimental data from CIEM large-scale wave flume were specifically used to investigate low frequencies generated by random and combination waves in accretive and erosive conditions.
By examining the spectral distribution of wave power, a secondary downshifted peak around the carrier wave has been noted for conditions (both erosive and accretive) with higher grouping factor.This spreading takes the form of a second downshifted peak of about ±0.045 Hz, an amount approximately equal to the second harmonic.Through the Eigen analysis it has been possible to estimate the periods of the dominant seiches in the wave flume and detect the low frequencies for combination and random waves.Then, comparing measured and calculated Eigenvalues for a rectangular wave flume, a good correspondence for the first and second harmonic has been found.Consequently, the wave flume-generated nature of first and second harmonic has been definitively clarified.
During the analyzed first step of experiment, all wave conditions lead to a landward net sediment transport in the swash zone, except cases RE_2 and CE_1 (Figures 11 and 12).For these wave conditions, the energy density associated to the second harmonic has been found to be about ten times the power level of the first mode, increasing the disturbance effect of the 2nd harmonic itself.However, case CE_1 contained errors due to cross-flume asymmetry of the evolving beach profile.Direct comparison of this test with the other erosive conditions is, therefore, not possible.
In the case of the erosive random waves, the presence of multiple low-frequency motions is responsible for nonlinear interactions, promoted by the grouping factor.Therefore, LFWs contribute significantly to the evolution of surface displacements.
The hydro-morphodynamic effects of the LFWs have been assessed by a volume flux Eigenmode analysis.
A clear opposite pattern of volume flux Eigenmode depending on accretive or erosive behaviour of wave conditions has been detected, except for the test CE_2.Despite the fact that the final profile and net sediment transport are consistent with the erosive trend, this test has shown an unexpected response on swash zone hydro-morphodynamics; the behavior is closer to a typical accretion wave condition.
A pattern of energy exchange between modes has been identified for all the tests in proximity of the breaking zone.
Results observed for swash zone volume variations measured during the first step (i.e., in comparable morphodynamic initial condition between tests) could be directly addressed to resonance phenomena in the wave flume.They may be also representative of real resonant conditions which could be generated landward of a submerged breakwater or in a natural enclosed basin (e.g.pocket beach with large secondary bar).
Clearly, the magnitude of net sediment volume in the swash zone suggests that low frequency motions could have a significant influence only on generation/development of secondary bedforms.In fact, morphodynamic effects of the seiches seem recognizable and significant only at a certain scale of observation.On the other hand, hydrodynamic influence on runup is more evident, but further analysis is required.Future works will consider the last steps of each test, in order to investigate the low frequencies when affected by morphodynamic effects and vice versa (e.g.bedforms migration).
Figure 1 .
Figure 1.Longitudinal cross-section of the Catalonia University of Technology (UPC) flume and detail of acoustic wave gauges and wave gauges location (length in m).
Figure 1 .
Figure 1.Longitudinal cross-section of the Catalonia University of Technology (UPC) flume and detail of acoustic wave gauges and wave gauges location (length in m).
Figure 2 .
Figure 2. Initial beach profiles of the accretive tests, random and combination, respectively.
Figure 3 .
Figure 3. Initial beach profiles of the erosive tests, random and combination, respectively.
Figure 2 .
Figure 2. Initial beach profiles of the accretive tests, random and combination, respectively.
Figure 2 .
Figure 2. Initial beach profiles of the accretive tests, random and combination, respectively.
Figure 3 .
Figure 3. Initial beach profiles of the erosive tests, random and combination, respectively.
Figure 3 .
Figure 3. Initial beach profiles of the erosive tests, random and combination, respectively.
Figure 5 .
Figure 5. Ratio between spectral density level at different harmonics of the random tests.
Figure 5 .
Figure 5. Ratio between spectral density level at different harmonics of the random tests.
Figure 6 .
Figure 6.Ratio between spectral density level at different harmonics of the combination tests.
Figure 7 .
Figure 7. Frequencies plot of measured versus calculated Eigenmodes with formula for rectangular basin for random tests in the case of accretive and erosive condition.
Figure 6 .
Figure 6.Ratio between spectral density level at different harmonics of the combination tests.
Figure 6 .
Figure 6.Ratio between spectral density level at different harmonics of the combination tests.
Figure 7 .
Figure 7. Frequencies plot of measured versus calculated Eigenmodes with formula for rectangular basin for random tests in the case of accretive and erosive condition.
Figure 7 .
Figure 7. Frequencies plot of measured versus calculated Eigenmodes with formula for rectangular basin for random tests in the case of accretive and erosive condition.
Water 2018 , 20 Figure 8 .
Figure 8. Frequencies plot of measured versus calculated Eigenmodes with formula for rectangular basin for combination tests in the case of accretive and erosive condition.
•
A strong non-linear pattern of F1-F4 is identified for all the tests in proximity of breaking zone; • Clear opposite behaviours of volume flux eigenmodes are shown for accretive and erosive wave conditions in the case of random and combination waves, except for the test CE_2; • A different variation of the Eigenmodes for the combination tests in the erosive, clearly, due to the non-linearity effects; •The monochromatic wave perturbed with the larger long waves for the erosive condition (CE_1) has an opposite variation of the Eigenmodes, than the monochromatic wave perturbed with smaller long waves (CE_2).
• 20 Figure 8 .
Figure 8. Frequencies plot of measured versus calculated Eigenmodes with formula for rectangular basin for combination tests in the case of accretive and erosive condition.
•
A strong non-linear pattern of F1-F4 is identified for all the tests in proximity of breaking zone; • Clear opposite behaviours of volume flux eigenmodes are shown for accretive and erosive wave conditions in the case of random and combination waves, except for the test CE_2; • A different variation of the Eigenmodes for the combination tests in the erosive, clearly, due to the non-linearity effects; •The monochromatic wave perturbed with the larger long waves for the erosive condition (CE_1) has an opposite variation of the Eigenmodes, than the monochromatic wave perturbed with smaller long waves (CE_2).
Figure 11 .
Figure 11.Net sand volume variation in the emerged swash zone (water depth > 0) after 24 min of wave generation: positive values represent accretion or landward transport; negative values represent erosion or seaward transport, for random tests.
Figure 11 .
Figure 11.Net sand volume variation in the emerged swash zone (water depth > 0) after 24 min of wave positive values represent accretion or landward negative values represent erosion or for random tests.
Figure 12 .
Figure 12.Net sand volume variation in the emerged swash zone (water depth > 0) after 24 min of wave generation: positive values represent accretion or landward transport for combination tests.
Figure 12 .
Figure 12.Net sand volume variation in the emerged swash zone (water depth > 0) after 24 min of wave generation: positive values represent accretion or landward transport for combination tests.
Table 2 .
Wave characteristics for erosive conditions.
Table 3 .
Peak frequencies, periods and power spectral density for RA_1 and RA_2 tests.
Table 4 .
Peak frequencies, periods and power spectral density for RE_1 and RE_2 tests.
Table 5 .
Power spectral density for various sections of the spectra for the random tests.
Table 6 .
Peak frequencies, periods and power spectral density for CA_1 and CA_2 tests.
Table 7 .
Peak frequencies and power spectral density for CE_1 and CE_2 tests.
Table 8 .
Power spectral density for various sections of the spectra for the combination tests.
Table 8 .
Power spectral density for various sections of the spectra for the combination tests.
Table 10 .
Maximum runup measured within the first three min of each step. | 12,867.4 | 2018-02-23T00:00:00.000 | [
"Physics"
] |
Goldratt ’ s Theory Applied to the Problems Associated with the Mode of Transportation , Storage and Sale of Fresh Fruits & Vegetables in Nigeria
Current practices used in the cultivation, harvesting, handling, transportation of fresh fruits and vegetables in Nigeria have resulted in vast losses especially due to the seasonality and perishability of these produce, with the resulting situation being that there are times when there are excess produce, which are wasted in most cases due to the fact that the excess are not being processed or stored properly, and times of scarcity when people hardly have enough produce. It is obvious that there are a lot of inherent problems involved in the way and manner in which these produce are being handled. The purpose of this paper is to use Goldratt’s Theory of Constraints and Thinking Process, which has been used in many cases, to improve performance and resolve problems in many business processes, to identify the core problem responsible for these losses and wastage with a view to generate useful information necessary to seek ways in which to improve the current situation and minimize the current losses being encountered which would be beneficial to the grower, traders, consumers and ultimately the Nigerian economy.
Introduction
The Federal Republic of Nigeria is located in West Africa and is estimated to be the most populous country in Africa with an estimated population of 155,215,573 inhabitants (July 2011CIA World fact book.)Geographically Nigeria is located between longitude 4° and 14°N and latitude 2° and 14°E, making it a country with a tropical climate type where seasons are damp and very humid.The country has two broad vegetation types namely: forest and savannah.The relatively long rainy season, resulting in high annual rainfall above 2000mm in the southern part of the country ensures adequate supply of water which makes the land fertile and luxuriant for vegetation.Indeed before the country's independence in 1960, Nigeria was self-sufficient in terms of food; however, with the discovery of oil, rural urban migration and emphasis on industry coupled with drastic population growth, Nigeria now imports most of its food.
With the rich soil and favorable climatic conditions, the country is able to produce diverse crops and produce.Indeed enormous quantities of fruits and vegetables are produced and staggering figures are sometimes given as estimated annual production.For example, figures like 3.8 million tons of onions, 6 million tons of tomatoes, 15 million tons of plantain and 35 million tons of citrus have been quoted as annual production levels for some fruits and vegetables, which are really large quantities of food crops (Oyeniran, 1988;Erinle, 1989).The production of bulk of the fresh tomato and pepper fruits in Nigeria especially Roma variety is in the Northern part of the country whereas the consumption and utilization is done all over the country.They are very important in the preparation of many of Nigerian staple and are either used fresh or processed into paste, puree, ketchup etc. (Idah, Ajisegiri & Yisa, 2007).
Unfortunately, the current situation is that a large quantity of the produce are often lost in the post-harvest stages due to the mishandling, spoilage, pest infestation, use of obsolete farming practices, physiological breakdown due to natural ripening processes; environmental conditions such as heat, drought, improper postharvest sanitation, poor storage and packaging practices and mechanical damage during harvesting, handling and transportation.( (Jones, Holt &Schoorl, 1991;Idah et. al., 2007).
Review of Literature
Fruits and vegetables are living things and carry out their physiological function of respiration by absorbing and releasing gases and other materials from and to their environment.(Idah et al., 2007) Most of the produce often has to be transported from where they are grown to the market over hundreds of kilometers of bad and poorly maintained road under unsatisfactory conditions such as high temperature, high humidity, improper storage etc.This tend to hasten the deterioration of the produce, a large amount is spoilt during post-harvest handling, transportation and storage.Also because most of the farmers are small scale subsistent farmers they often lack the knowledge of the proper handling techniques or are unable to afford the expense of proper handling, storage, packaging, and transportation.The traders, retailers and wholesalers are also equally handicapped.
Transporting the produce from the village and farms where the produce is cultivated to the towns is the first hurdle.Rail transportation system is at best epileptic and therefore the produce have to be transported by road over hundreds of kilometers of often poorly maintained roads over a period of days in open trucks.A significant portion of the produce is therefore lost during the transportation process.
Often produce has to sit for days while waiting for trucks, which in most cases are open and sometimes dilapidated; often owned and/or operated by third party transporters.These trucks are sometimes not available resulting in harvested produce being kept in un-conducive conditions or the handlers resorting to the use of unconventional mode of transportation such as passenger buses in order to push the produce to market.Their attempt to get the produce to market as soon as possible can result in the produce being closely stacked together in order to "optimize the available space." This unfortunately results in compression, bruising and poor ventilation leading to mechanical damage and the onset of product rot.
An assessment carried out by Idah et al., (2007) at the Ipata market in Ilorin for example showed an average of 1041.67 kg or 13.89% of a 7,500 kg capacity lorry load of fresh tomato fruits were damaged during transportation which is quite significant, which in monetary value was reported to be about twenty thousand Naira (an equivalent of about 127.11 dollars) per trip.
Although losses are naturally expected and according to Food and Agriculture Organization of the United Nations (FAO) Assistant Director-General Agriculture Department, C.H Bonte -Friedheim " Both quantitative and qualitative food losses of extremely variable magnitude occur at all stages in the post-harvest system from harvesting, through handling, storage, processing and marketing to final delivery to the consumer" (FAO,1989).Postharvest losses of fruits and vegetables being experienced in Nigeria is of a great magnitude.It has been estimated that losses as high as about 40 -50% of tomatoes and about 20 -30% of bell and hot pepper are lost at post-harvest stage every year (Okhuoya, 1995).This ultimately translates to loss of income which is not good either for the economy nor for the people in the industry who after all the effort being put into growing the produce see a large percentage being wasted due to improper and obsolete practices.
Further due to the fact that modern technology and agricultural practices are not being implemented and due to the seasonality of the produce there are times where there is excess produce, which are in most cases lost due to lack of preservation techniques and lack of proper storage and transportation facilities.There are also times when the produce are scarce, however studies have shown that if the right practices are used, the length of availability can be stretched such that produce are available for a longer period leading to consistent availability of produce.For instance, use of particular type of seedlings, delaying ripening, harvesting, preservation of excess produce for future use, use of proper storage facilities etc.
Olayemi, Adegbola, Bamishaiye & Daura (2010) observed and confirmed, based on their study, that the current farming system of tomatoes, bell pepper and hot pepper are inadequate.According to them, the farmers lack some fundamental knowledge and facts about post-harvest handling practices and young people are no longer interested in the farming industry, probably because they consider farming and trading of farm produce as a low level job and because of the lure of better paying white collar and blue collar jobs.Some other problems identified also included: lack of suitable packaging containers, farm structure and so on.They therefore made the following recommendations, some of which have also been incorporated as injections for the core problem: • Provision of extension services on postharvest handling to the farmers by extension agent of Agriculture Development Programme (ADP) with relevant research inputs.
• Adoption of technologies of some research institutes that will benefit them.
• Provision of farm structures and materials relevant to post-harvest handling and their adoption.
• Encourage youths to farm by making resources available.
It is therefore essential to employ the best practices possible in order to combat these problems, look for ways to extend the produce life and to employ modern farming and preservation techniques which can be used to extend the life of the produce and prevent avoidable deterioration and wastage.(Hall,1968;Adeniyi, 1977;Agboola,1980).
The Thinking Process and the Theory of Constraints
During the 1980's, Goldratt [1992-b] wrote a book entitled The Goal.In this book, he conveys the story of a plant manager struggling to keep his plant afloat while searching for a way to improve the plant's performance.With the help of an old college professor, the manager learns how to improve the performance of his plant while also learning a method for resolving problems to the point of a win-win situation.
Goldratt's Theory of Constraints (TOC) focuses on the efficiency of all the processes as a whole rather than the efficiency of any one single process.While the TOC was developed for manufacturing through Goldratt's Thinking Process, the Thinking Process system can be used to work through many other business processes and problems.
In Goldratt's TOC, a given group of processes will have a weakest link and the weakest link controls the entire systems production rate.
In order to maximize the system production, the weakest link must be improved and all other links in the processes regulated to the speed of the weakest link.The weakest link is the constraint and all steps must be examined together to determine the constraint; the core problem for termination.
Since the constraint is not always obvious, Goldratt [1992-b] developed the Thinking Process.This is a series of steps used to locate the constraint (What to Change?), determine the solution (What to change to?) and how to implement the solution (How to make the change?).These steps are actually referred to as the Thinking Process.Goldratt's next book It's Not Luck [1994] describes the Thinking Process in much more detail.
What to Change?
If the symptoms of a core problem are undesirable effects (UDEs), then the undesirable effects are merely symptoms brought on by the core problem itself.This core problem needs to be determined and eliminated.The methodology employed in the search for a core problem is based on a cause and effect relationships.These cause and effect relationship are used to uncover the core problem associated with the UDEs.
The core problem is also the weak link in the operation when it concerns obtaining the goal of the company.
Undesirable Effects
According to Goldratt [1994] the first step in the Thinking Process is to develop a list of at least 10 -12 undesirable effects that currently apply to the problem at hand.A total of 18 undesirable effects (UDEs) were identified from the current mode of Transportation, Storage and Sale of Fresh Fruits & Vegetables in Nigeria as listed below: There can be a scarcity of produce
The Current Reality Tree
After organizing the Undesirable Effects in an effect-cause-effect relationship analysis, a tree took shape that identified UDE #16, "Outdated/inadequate propagation, transportation, processing and preservation techniques are used."as the core problem.The core problem will be located at the bottom of the tree with all other UDEs leading from the core problem.The Current Reality Tree is read from the bottom starting with the core problem and progressing upward through the tree using if . . .then statements in a logical order.
The tree reads as follows (See Figure 1 -Current Reality Tree): processing and preservation techniques are used then the availability of produce is inconsistent from season to season.If the availability of produce is inconsistent from season to season and there are seasons of abundant produce then Period of excess produce exist.If period of excess produce exist and there is lack of exposure to modern farming technology by producers, wholesaler and retailers in the industry and outdated/inadequate propagation, transportation, processing and preservation techniques are used, then there can be significant post-harvest losses.If there can be significant post-harvest losses and wastage of produce can exist and the produce could have been turned into income, then a loss of income can occur.If a loss of income can occur, then there will be an adverse impact on economy.
• If
Outdated/inadequate propagation, transportation, processing and preservation techniques are used, then the availability of produce is inconsistent from season to season.If the availability of produce is inconsistent from season to season and there are season of produce Journal of African Research in Business & Technology 6 shortage then, there can be a scarcity of produce.If there can be a scarcity of produce, then consumers have to settle for different varieties or specie they are not used to which in most cases are substandard.If consumers have to settle for different varieties or species they are not used, to which in most cases are substandard then, consumers are dissatisfied.
• If outdated/inadequate propagation, transportation, processing and preservation techniques are used, then the availability of produce is inconsistent from season to season.If the availability of produce is inconsistent from season to season and there are season of produce shortage then there can be a scarcity of produce.If there can be a scarcity of produce, and scarcity of produce exists, then traders do not have the produce to sell.If traders do not have the produce to sell and Produce is readily available from importation, then Importation of produce, which otherwise is readily available locally can be used in its place.If Importation of produce occurs and importation of the produce causes the price to increase, then the price of the imported produce can increase.If the price of the imported produce can increase, then consumers have to buy produce at exorbitant prices.If consumers have to buy produce at exorbitant prices then Consumers are dissatisfied.
• If Graduates of Agricultural Sciences and relevant courses prefer to take more lucrative white collar jobs and People who study agricultural Sciences and relevant courses hardly have enough financial capacity to start their own business, then, experienced / educated people are not involved in the marketing system of the industry.If experienced / educated people are not involved in the marketing system of the industry and most traders in developing countries are uneducated and small scale then, traders have limited financial capacity to purchase the produce.
If traders have limited financial capacity to purchase the produce and the price of the imported produce can increase, then traders can only afford limited quantity to sell.If traders can only afford limited quantity to sell, then the income and profits to the traders will be reduced.If the income and profits to the traders will be reduced, then there will be an adverse impact on economy.harvest losses, and the produce could have been turned into income and wastage of produce can exist, then a loss of income can occur.If a loss of income can occur, then there will be an adverse impact on economy.
What to Change to?
Once the Current Reality Tree is formed a conflict emerges and pulls the situation in two directions.The most common way of managing conflict is to compromise in some way.However, if compromise were a true solution for the problem, the conflict would have been eliminated a long time ago.Therefore the tendency to look for a compromise to handle the situation should be overcome and the true core problem should be eliminated.
Goldratt [1992-a] stated that since a vacuum does not exist, eliminating the core problem means creating a new reality, in which the opposite of the core problem exists.To eliminate the core problem, a tool known as the Evaporating Cloud (EC) should be used.An EC, according to Goldratt [1993] lets a person precisely present the conflict facilitating the core problem and then helps find a solution by challenging the assumptions causing the conflict.The EC starts with an objective that is the opposite of the core problem.From the objective, the requirements (minimum of two) are listed.Each requirement will have at least one prerequisite.It is the prerequisite that depicts the conflict.All of the requirements and prerequisites are based on assumptions that have been ingrained into our minds over time.It is these assumptions that keep us in the conflicted environment.This is the first step in freeing ourselves from the binding controversy.
Evaporating Cloud
Goldratt contends that compromising does not solve the core problem though shortterm success may be realized.He suggests using the Evaporating Cloud to search for real solutions that will break the conflict that bring about a win-win solution for everyone.
The core problem is "Outdated/inadequate propagation, transportation, processing and preservation techniques are used."so the objective of the EC will be "Achieve proper propagation, transportation, and processing and preservation techniques in the industry."Next, we must list a minimum of two requirements.Each requirement will have at least one prerequisite.It is the prerequisites that depict the conflict.The zigzag arrow between the two prerequisites represents the conflict.To read the EC one would use "in order to …we (they) must" syntax.
The Evaporating Cloud reads as follows (See Figure 2 -Evaporating Cloud): • In order to achieve proper propagation, transportation, processing and preservation techniques in the industry, People in the industry must be willing to become educated to adopt and implement new techniques and at the same time, the people in agricultural extension agencies must be willing to educate the people in the industry to adopt and implement new techniques.
• In order for People in the industry to be willing to become educated to adopt and implement new techniques, the people in industry must be in charge and the people in the industry must put their funds into the act of actual production and selling of their products.
• In order for the people in agricultural extension agencies to be willing to educate the people in the industry to adopt and implement new techniques the people in education must be in charge and the people in the industry must put their funds into education to improve the actual production and selling of their products.
Figure 2 -Evaporating Cloud (EC)
The injections in this instance are: 1. We make the industry more attractive to educated individuals, through provision of adequate training, financial aid or loans from Government or financial institutions.
2. Make the proper propagation, transportation, processing and preservation techniques and equipment readily available through Government loans or grants.
Figure 3 -Evaporating Cloud with Injections (EC w/injections)
This tool will logically show that once the injections are implemented, the desirable effects can be accomplished.When the EC is broken, the FRT is built using the injections from the EC.The injections are connected with the cause-and-effect logic and "clarities" Achieve proper propagation, transportation, processing and preservation techniques in the industry.
The people in agricultural extension agencies must be willing to educate the people in the industry to adopt and implement new techniques.
People in the industry must be willing to become educated to adopt and implement new techniques.
The people in education must be in charge / the people in the industry must put their funds into education to improve the actual production and selling of their products.
. The people in industry must be in charge / the people in the industry must put their funds into the act of actual production and selling of their products.The people in agricultural extension agencies must be willing to educate the people in the industry to adopt and implement new techniques.
People in the industry must be willing to become educated to adopt and implement new techniques.
The people in education must be in charge / the people in the industry must put their funds into education to improve the actual production and selling of their products.
. The people in industry must be in charge / the people in the industry must put their funds into the act of actual production and selling of their products.
and "insufficiencies" may be used where additional information is required.This process tests the solution and is enhanced by criticism and negative comments.If criticisms, negative comments and UDEs can be overcome by the proposed solution then this provides proof of the solution and leads to the next step in the process.This process taps into the natural tendencies of criticism and negativity.
How to Cause the Change
Next consider whether the injections will direct desirable effects.An injection allows for an acceptable resolution to one side of the conflict.With the injections and the logical based common sense cause and effect relationships, the desired effects can be connected and the future outcome developed.This technique is called building the Future Reality Tree (FRT).The FRT according to Goldratt [1993] is the thinking process that enables a person to construct a solution that, when implemented, replaces the existing undesirable effects by desirable effects without creating devastating new ones.Goldratt [1992-b] goes on to add, the analytical method of the FRT is used to construct and scrutinize such a solution.This tool will logically show that once the injections are implemented, the desirable effects can be accomplished.When the EC is broken, the FRT is built using the injections from the EC.The injections are connected with the Effect-Cause-Effect logic and clarities and insufficiencies are used where additional information is required.This process tests the solution and is enhanced by criticism and negative comments.If criticisms, negative comments and UDEs can be overcome by the proposed solution then this provides proof of the solution and leads to the next step in the process.This process taps into the natural tendencies of criticism and negativity.
Future Reality Tree
A FRT was then constructed in an effort to ensure that all of the UDEs would be eliminated using the resolution identified in the EC.The FRT is essentially the same as the CRT; however the injection(s) identified in the EC are placed into the tree to create a vision of the "future reality."The FRT is read from the bottom up using if…then statements in a logical format just as the CRT.
The tree reads as follows (See Figure 4 -Future Reality Tree): • If ewe make the industry more attractive to educated individuals through provision of adequat training, financial aid or loans from Government or financial institutions, then Graduates of agricultural Sciences and relevant courses would be interested in working in the industry and people who study Agricultural Sciences and relevant courses would have the financial capacity to start their own business.If Graduates of agricultural Sciences and relevant courses are interested in working in the industry and people who study Agricultural Sciences and relevant courses have the financial capacity to start their own business, then experienced and educated people would get involved in the marketing system of the industry.If experienced and educated people would get involved in the marketing system of the industry, then there would be adequate knowledge on the proper handling of produce.If there would be adequate knowledge on the proper handling of produce, then there would be less damage to the produce.If there would be less damage to the produce then, wastage of produce would be vastly reduced.If the wastage of produce would be vastly reduced and post-harvest losses are significantly reduced and more produce would be available for sale then, more income would be generated.If more income would be generated, then there would be a positive impact on economy.
• If we make the industry more attractive to educated individuals, through provision of adequate training, financial aid or loans from Government or financial institutions, then Graduates of Agricultural Sciences and relevant courses are would be interested in working in the industry and people who study Agricultural Sciences and relevant courses would have the financial capacity to start their own business.If Graduates of Agricultural Sciences and relevant courses would be interested in working in the industry and people who study.
• Agricultural Sciences and relevant courses would have the financial capacity to start their own business, then experienced and educated people would get involved in the marketing system of the industry.If experienced and educated people would get involved in the marketing system of the industry and modern techniques are being used, then Producers, wholesaler, retailers in the industry would be exposed to modern technology.If Producers, wholesaler, retailers in the industry would be exposed to modern technology and Period of consistent produce would exist and modern propagation, transportation, processing and preservation techniques would be used, then post-harvest losses would be significantly reduced.If postharvest losses are significantly reduced and more produce would be available for sale and turned into income and the wastage of produce would be vastly reduced, then more Income would be generated.If more Income would be generated then, there would be a Positive impact on the economy.
• If we make the proper propagation, transportation, processing and preservation techniques and equipment readily available through Government loans or grants then, modern propagation, transportation, processing and preservation techniques would be used.If modern propagation, transportation, processing and preservation techniques would be used, then, the availability of local produce would be more consistent from season to season.If the availability of local produce would be more consistent from season to season then Period of consistent produce would exist.If Period of consistent produce would exist and producers, wholesaler, retailers in the industry would be exposed to modern technology and modern propagation, transportation, processing and preservation techniques would be used, then Post-harvest losses would be significantly reduced.If Post-harvest losses would be significantly reduced and more produce would be available for sale and turned into income and the wastage of produce.would be vastly reduced, then more income would be generated.If more income would be generated, then there would be positive impact on economy.
• If we make the proper propagation, transportation, processing and preservation techniques and equipment readily available through Government loans or grants then, modern propagation, transportation, processing and preservation techniques would be used.If modern propagation, transportation, processing and preservation techniques would be used, then, the availability of local produce would be more consistent from season to season.If the availability of local produce would be more consistent from season to season then, scarcity of local produce would be significantly reduced.If scarcity of local produce would be significantly reduced then consumers would be able to get the variety of produce they want.If consumers would be able to get the variety of produce they want, then consumers would be satisfied.
• If we make the proper propagation, transportation, processing and preservation techniques and equipment readily available through Government loans or grants then, modern propagation, transportation, processing and preservation techniques would be used.If modern propagation, transportation, processing and preservation techniques would be used, then, the availability of local produce would be more consistent from season to season.If the availability of local produce would be more consistent from season to season then, scarcity of local produce would be significantly reduced.If scarcity of local produce would be significantly reduced then traders would have local produce to sell.If traders would have local produce to sell, then there would be little or no need to import produce because they will be readily available locally most of the time.If there would be little or no need to import produce because they will be readily available locally most of the time and the steady supply of produce locally would cause the price to be steady and affordable then, the price of the produce would be affordable.If the price of the produce is affordable and consumers are able to get the variety of produce they want, then consumers are satisfied.
• If we make the proper propagation, transportation, processing and preservation techniques and equipment readily available through Government loans or grants then, modern propagation, transportation, processing and preservation techniques would be used.If modern propagation, transportation, processing and preservation techniques would be used, then, the availability of local produce would be more consistent from season to season.If the availability of local produce would be more consistent from season to season then, scarcity of local produce would be significantly reduced.If scarcity of local produce would be significantly reduced then traders would have local produce to sell.If traders would have local produce to sell, then there would be little or no need to import produce because they will be readily available locally most of the time.If there would be little or no need to import produce because they will be readily available locally most of the time and the steady supply of produce locally would cause the price to be steady and affordable then, the price of the produce would be affordable.If the price of the produce is affordable.If the price of the produce would be affordable and retailers would have the financial capacity to purchase enough produce then, retailers can afford to buy a larger quantity to sell.If retailers can afford to buy a larger quantity to sell, then retailers would be able to generate a higher income with reasonable profits.If retailers would be able to generate a higher income with reasonable profits then there is positive impact on economy.
• If we make the industry more attractive to educated individuals through provision of adequate training, financial aid or loans from Government or financial institutions, then Graduates of agricultural Sciences and relevant courses would be interested in working in the industry and people who study Agricultural Sciences and relevant courses would have the financial capacity to start their own business.If Graduates of agricultural Sciences and relevant courses are interested in working in the industry and people who study Agricultural Sciences and relevant courses have the financial capacity to start their own business, then experienced and educated people would get involved in the marketing system of the industry.If experienced and educated people would get involved in the marketing system of the industry, then then traders would be better educated and are able to grow their business.If traders would be better educated and able to grow their business, then traders would have the financial capacity to purchase enough produce.If traders would have the financial capacity to purchase enough produce and the price of the produce would be affordable, then traders can afford to buy a larger quantity to sell.If retailers can afford to buy a larger quantity to sell, then traders would be able to generate a higher income with reasonable profits.If traders would be able to generate a higher income with reasonable profits then there would be a positive impact on economy.
Conclusion
If steps can be taken to implement the two injections that were used to break the evaporative cloud, then the prevailing problems in the fruits and vegetables industry can be easily resolved i.e.: i. Make the industry more attractive to educated individuals, through provision of adequate training, financial aid or loans from Government or financial institutions and; ii. Provide Government loans or grants and make it easily accessible to the people in the industry so that people in the industry can afford to buy modern equipment and implement modern propagation, transportation, processing and preservation techniques.
These actions, if implemented, would effectively lead to a great improvement in the fruit and vegetable sector.The produce will be better managed, and post-harvest losses, scarcity, wastage etc will be significantly reduced.This will also lead to increased income to the farmers; subsequently make the farming profession more attractive to young and educated people who are thinking of starting their own business, because the cultivation and production of produce will become more viable and would ultimately be beneficial to Nigeria's economy.
Summary
This procedure, although somewhat different from the normal methods of analysis, is so practical, that it can be applied to any problem anywhere and anytime.According to Goldratt [1992-b], you start with an effect in reality, and then hypothesize a plausible cause for the existence of that effect.Since the aim is to reveal the underlying causes that govern the entire subject, try to validate the hypothesis by predicting what else this hypothesis must cause.
Once such predictions are found, concentrate efforts to verify whether or not each prediction holds water by asking questions.If it turns out that one of the predictions doesn't hold up, find another hypothesis.If all of them hold up, continue until the entire subject is understood through the bonds of cause and effect.
Bob Fox [1989], President of the Goldratt Institute, states: I do not believe any longer that the challenge is the technology of what to do.That has been well developed -maybe not disseminated very well yet, but developed.The issue is the resistance to change once we know what to do, and I believe there is a solution to that.This method of problem solving requires ability that everyone has and stems from the systematic methods and thinking processes.It provides you with the framework necessary to direct these efforts and to verbalize your intuition to gain a better understanding of managements "intestinal sensations." Everyone has self-doubt.This self-doubt makes it very difficult to use the scientific method of analysis.
Goldratt [1992-b] reveals, "the scientific method involves reaching into the unknown; speculating a cause and determining predicted effects probably requires an awkward personality that thrives on the unknown".But we are dealing with the known, with current reality.There must be an equivalent method, a thinking process that facilitates building a current reality tree within the known, and we can effectively use it on any subject that we have intuition for and care about.This cause and effect approach is used in many areas of Science and Mathematics.The demonstrated thinking process is what managers need the most.To carry out a successful process of ongoing improvement there is nothing more important than the ability to answer: "What to change?", "What to change to?", and "How to cause the change?"The results are well worth the required investments.
INJ 1 :
We make the industry more attractive to educated individuals, through provision of adequate training, financial aid or loans from Government or financial institutions INJ 2: Make the proper propagation techniques and equipment readily available through Government loans or grants Achieve proper propagation, transportation, processing and preservation techniques in the industry.
Table 1 : List of Undesirable Effects UDE Number
Experienced / educated people are not involved in the marketing system of the industry UDE # 12 People who study agricultural Sciences and relevant courses hardly have enough financial capacity to start their own business UDE # 13 A loss of income can occur UDE # 14 There can be significant post-harvest losses UDE # 15 Lack of exposure to modern farming technology by producers, wholesaler, retailers in the industry UDE # 16 Outdated/inadequate propagation, transportation, processing and preservation techniques are used UDE # 17 Period of excess produce exists UDE # 18 | 8,078 | 2012-12-29T00:00:00.000 | [
"Agricultural and Food Sciences",
"Economics"
] |
Dynamical entropic measure of nonclassicality of phase-dependent family of Schrödinger cat states
The phase-space approach based on the Wigner distribution function is used to study the quantum dynamics of the three families of the Schrödinger cat states identified as the even, odd, and Yurke–Stoler states. The considered states are formed by the superposition of two Gaussian wave packets localized on opposite sides of a smooth barrier in a dispersive medium and moving towards each other. The process generated by this dynamics is analyzed regarding the influence of the barrier parameters on the nonclassical properties of these states in the phase space below and above the barrier regime. The performed analysis employs entropic measure resulting from the Wigner–Rényi entropy for the fixed Rényi index. The universal relation of this entropy for the Rényi index equal one half with the nonclassicality parameter understood as a measure of the negative part of the Wigner distribution function is proved. This relation is confirmed in the series of numerical simulations for the considered states. Furthermore, the obtained results allowed the determination of the lower bound of the Wigner–Rényi entropy for the Rényi index greater than or equal to one half.
The application of space-phase methods to the study of quantum systems initiated by Wigner allows looking at the theory of these systems as the statistical theory in which observables characterizing them form a noncommutative algebra 1-3 .This observation is the cornerstone of the space-phase formulation of quantum theory and has given the impulse to develop more rigorous rudiments of this approach [4][5][6][7][8] .As a result of these studies, quantum mechanics emerged as a deformation of the symplectic structures characteristic of classical mechanics formulated in the phase-space language.Hence, the description of quantum phenomena is based on ordinary c-number functions on the phase space, and Planck's constant is treated as a measure of a Poisson algebra deformation 9 .Historically, the first formulation of quantum mechanics in this way is based on the Wigner distribution function (WDF), which plays the role of the quantum system state within this approach 1 .Since then, this approach has attracted a lot of attention because of its applicability in many modern quantum problems, including quantum entanglement 10,11 , quantum computing 12 , or quantum metrology 13 .In addition, numerous applications can also be found in quantum optics 14 , atomic physics 15,16 , electrodynamics 17 , and plasma physics 18 , condensed matter 19,20 , gravitation and cosmology [21][22][23] or field theory 24 .Furthermore, this approach has many interdisciplinary applications 25 , e.g. in quantum electronics 26,27 , quantum chemistry 28 , quantum biology 29,30 , or signal processing [31][32][33] .Besides that, Wigner's idea is also influential in developing some branches of mathematics, e.g.non-commutative geometry 34,35 , geometrical quantization 36 , or the theory of pseudo-differential operators 37,38 .
One of the characteristic properties of the WDF is its negativity in some regions of the phase space.This property renders the WDF an example of a non-classical distribution function since the rest of this function's properties are consistent with its Kolmogorov counterpart in probability theory.The negativity of the WDF has been the subject of numerous discussions and interpretations.Among these various proposals, the approach assuming that the WDF is treated as a wave function defined on the phase space deserves special attention [39][40][41] .The immediate consequence of this approach is an interpretation of this function as the probability amplitude on the phase space.Thus, the square of the absolute value of this wave function is treated as the probability density on the phase space, and just like the WDF this quantity is symplectically covariant 42 .However, this probabilistic interpretation is restricted to pure states only.Nevertheless, let us note that this new look at the WDF is free of its sign problem.On the other hand, numerous studies on the negativity of the WDF led to the conclusion that this part of the WDF is a hallmark of the state nonclassicality which can be expressed quantitatively by the nonclassicality parameter introduced by Kenfack and Życzkowski 43 .Of course, this measure is not perfect because it does not detect nonclassical states described by the WDF-positive states.Nevertheless, the advantage of the aforementioned nonclassicality parameter is its simplicity and clear interpretation because it provides information on the fraction of the phase space occupied by the negative part of the WDF.
Let us note that looking at the phase-space formulation of quanta as a statistical theory immediately inclines one to introduce the concept of entropy to the considerations as an essential component of such a statistical approach.However, at the beginning of the deliberations on the introduction of this state function, we encountered fundamental conceptual difficulties related to the negativity of the WDF.One of the first attempts to define the entropy of quantum states using the WDF has been based on the concept of coherent states, regarded as the most classical quantum states 44 .Such states are usually represented by the Gaussian functions, which minimize the uncertainty principle, and the corresponding WDF being also a Gaussian is always nonnegative, according to Hudson's theorem 45 .In this case, considering the concept of a quantum state in the definition of entropy seems fully justified.A major step in the development of the concept of quantum state entropy in the phasespace approach based on WDF was also the work of Manfredi and Feix 46 , in which the authors introduced and discussed properties of the so-called quantum linear entropy, utilizing for this purpose the functional of the WDF square being an invariant quantity under symplectic transformation.Recently, a slightly different approach was proposed by van Herstraeten and Cerf 47 .These authors introduced the concept of Wigner quantum entropy based on quantum states with the positive definite WDF.In turn, their studies have been generalized to the case of arbitrary absolutely integrable Wigner functions by Dias et al. 48The results presented in these last works partially motivate our studies because their authors operated with the WDF interpreted as the wave function on the phase space.Another way to introduce the entropy within the phase-space approach for quantum states requires projecting a quantum state on the coherent state and then taking the square of the absolute value of this function, which leads to another phase-space quasi-distribution function called the Husimi distribution function 49 .This approach brought Wehrl to the concept of the von Neumann entropy in the phase space, which is currently regarded as the quantity which is the closest to the classical understanding of the entropy.In turn, the Wehrl result can be regarded as a particular case of the Rényi-Wehrl entropies 50 commonly used to measure quantum state localization in the phase space.It is also worth to mention the Tsallis entropy, which can be seen as a linearization of the Rényi entropies.This observation was used by Sadeghi et al. 51 to conclude that the nonclassicality parameter is related to the Tsallis entropy based on the Wigner function with the Tsallis index equal one.However, these authors did not give any quantitative characteristics in the form of a formula linking these two quantities.Instead of this, they examined several examples of states and showed the coincidence of the nonclassicality parameter and the Tsallis entropy of the Wigner function in graphs.As a result, this allowed them to conclude that these two considered quantities have similar properties.Following these results, we have found the exact relation between the nonclassicality parameter and Wigner-Rényi entropic measure of |ρ(x, p, t)| 2 for a fixed, fractional Rényi index value.Therefore, we can interpret the Wigner-Rényi entropy for this fractional index in terms of the nonclassicality parameter.A detailed analysis of this issue is one of the subjects which we consider in this paper.
Another issue discussed in this paper concerns the interaction of the phase-dependent family of the Schrödinger cat (SC) states with the potential barrier in a one-dimensional dispersive channel.This continuation of our previous studies on the SC states in the phase-space formalism 52 allows us to generalize them to the case of the well-separated bimodal state formed by the coherent superposition of two Gaussian wavepackets, each of which moves with an opposite momentum value.It means that each wave packet that is a part of this coherent superposition approaches the other in the configurational space, with an obstacle in the form of a Gaussian barrier between them acting as a scattering centre.For the described situation, we performed studies in the above-and below-barrier regimes, simultaneously analyzing the influence of the relative phase encoded in these states on the nonclassicality, which is expressed in terms of the appropriate entropy.Such framework can serve as a model for scattering in the constricted semiconductor nanowires 53 .On the other hand, the process of double-sided barrier penetration has recently gained increased interest with its use in the field of quantum electron optics [54][55][56][57][58][59] .
In this work, we use three families of the SC states, i.e. even SC states, odd SC states, and Yurke-Stoler (YS) states.All of them have been extensively studied in quantum optics 60 , including quantum spectroscopy methods 61,62 , quantum computing and information theory 63 , and even quantum theory of gravity 64 .It is also worth mentioning that some experimental methods exist for creating these states.Among them, we can note intense laser-matter interaction 65 , diamond mechanical resonations 66 , levitating ferromagnetic particles 67 or following the original Yurke and Stoller proposal with a nonlinear Kerr interaction in the superconducting circuit [68][69][70] .Recently, a method of generating SC states based on cavity electro-optic systems was also considered 71,72 .Moreover, continuous advances in electron quantum optics make it possible to think of solidstate quantum information processing based on flying qubits with controlled phases.
The purpose of the presented study is twofold; first, we show that the nonclassicality parameter in the form proposed by Kenfack and Życzkowski is related to the Rényi entropy for the Rényi index equal one half.Thereby, it allows us to interpret this quantity as an entropic measure of the area occupied by the negative part of the WDF in the phase space.Secondly, we investigate the influence of the relative phase encoded in the Schrödinger cat states on the dynamics of the WDF in the presence of the scattering center, assuming that the Gaussian wavepackets that form this bimodal state move in opposite directions.We quantitatively describe this dynamical problem in terms of the aforementioned fractional Rényi entropy.
Theoretical framework
Here we introduce only indispensable facts concerning the phase-space formulation of the quantum theory which are directly used in this work.More details about this formulation can be found in the extensive literature related to this issue 14,49,[73][74][75][76][77][78] .In the phase-space formulation of the quanta, the isolated one-particle quantummechanical system is characterized by the Weyl symbol of the Hamiltonian, Ĥ(x, p) = p2 /(2m) + U(x) , which is given by the formula The Weyl symbol of the Hamiltonian for the considered system is equivalent to the classical Hamilton function, i.e.H W (x, p) = H(x, p) = p 2 /(2m) + U(x) , where p and x are the Weyl symbols of the momentum and position, respectively, and they are consistent with the classical counterparts; U(x) is the potential energy and m denotes the mass.In turn, the state of the considered quantum-mechanical system is represented by the WDF, which corresponds to the Weyl symbol of the density operator, ρ(t) , rescaled by the factor 1/(2π ) .Especially for the state represented by such a density operator, the corresponding WDF can be expressed as follows, Based on this expression, we conclude that the time evolution of the quantum system state in the phase space follows from the time dependence of the WDF.The equation of motion of this function for the isolated system was initially derived by Wigner in his original paper 1 .However, only further studies conducted by Moyal allowed introducing the structure called the 'sine-Poisson' bracket 2 .On the one hand, this discovery was an essential step in understanding the non-commutativity of the Poisson structure of the quantum phase space (p ase space), and on the other hand it allowed one to reformulate the Wigner's original form of the equation of motion to the new form which is now called Moyal equation where the symbol {•, •} ⋆ denotes the Moyal bracket of two phase-space functions 79 .Let us note that this expression can be reduced to the Poisson bracket in the classical limit, symbolically understood as → 0 .Consequently, the Moyal equation reduces to the Liouville equation, and quantumness is preserved only in the initial condition, while the movement is consistent with classical dynamics generated by the Liouville flow.It is also worth noting that the Moyal and Liouville dynamics are consistent in the case of the potential energy expressed by a polynomial of the order less than or equal to 2. However, for computational reasons, it is more convenient to formulate the equation of motion (3) slightly differently.Namely, using the Bopp shifts 80 , the Moyal equation can be transformed into the following form 52 .
where the first term in the RHS of this equation corresponds to the kinetic term, while the second term in the RHS represents the potential term which is inherently nonlocal 81 .The structure of this equation explicitly alludes to the Liouville equation, although the potential term has an intricate form.Nevertheless, finding the time evolution of the WDF requires specifying the initial condition for the Eq. ( 4) and solving the Cauchy problem for it.
After this nutshell foreword to the phase-space formulation of quanta, we now discuss one of the primary results of our work, namely the relationship between entropy and the nonclassicality parameter.We prove that for pure states the Rényi entropy for the Rényi index equal to one half can be considered the logarithmic measure of state nonclassicality.The assumption of the purity of the state is significantly important because it allows us to look at the WDF as the amplitude of the probability density in the phase space according to the arguments presented in Refs. 39,41, i.e. the WDF represents the wave function on the phase space for the pure state being a particular solution of the Schrödinger equation written in the phase-space representation.On this basis, it can be concluded that the symplectically covariant squared modulus of the WDF is regarded as the probability density in the phase space, and the corresponding WDF norm with respect to the space L 2 (R 2 ) is given by the formula Using this norm, along with the auxiliary function, ρ(x, p, t) , introduced by Dias et al. 82 , to ensure correct normalization, we can express the Rényi entropy of the probability density, | ρ(x, p, t)| 2 , in the phase space for the Rényi index α in the form ( The obtained result can be regarded as an extension of Dias et al.'s result given by Eq. ( 22) in their work 82 to the Rényi entropy.However, we are still restricted by the purity of the state represented by the WDF, which is interpreted as the wave function in the phase-space representation.On the other hand, taking into account that |ρ(x, p, t)| = [ρ 2 (x, p, t)] 1/2 , we get the result similar to the quantum Wigner-Rényi entropy defined for Wigner- positive states recently introduced by van Herstraeten and Cerf 47 .Moreover, this last result is also extended by Dias and Prata 48 to the case of the arbitrary integrable Wigner functions associated with the Feichtinger states 83 .After taking into consideration Eqs. ( 5) and ( 6), we can transform Eq. ( 7) into the form Let us note that the Wigner-Rényi entropy expressed by Eq. ( 8) for the fixed Rényi index α refers to the prob- ability density in the phase space defined by Eq. ( 6).In the case of the Rényi index α = 1/2 , the Wigner-Rényi entropy ( 8) can be algebraically reduced to the form This observation is an important result because it allows us to conclude that the Wigner-Rényi entropy for the Rényi index α = 1/2 can be regarded as the entropic measure based on the WDF with a clear physical inter- pretation.To see this, it is enough to use the definition of the nonclassicality parameter introduced by Kenfack and Życzkowski 43 , namely Then, comparison of Eqs. ( 9) and ( 10) leads to the following relation between these two quantities, Hence, we can conclude that the entropy S 1/2 can be regarded as a logarithmic measure of the nonclassicality of the state that is represented by the WDF and which is understood as the wave function of the pure state in the phase space.On the other hand, the nonclassicality parameter can be expressed by the entropy, S 1/2 (t) , accord- ing to the formula This result clearly shows that the nonclassicality of the state can be measured by the Wigner-Rényi entropy of order α = 1/2 .In particular, we can conclude that for any Wigner-positive states, i.e. δ(t) = 0 , we always obtain the constant value of this entropy, namely S 1/2 (t) = ln (2π ) which represents the logarithm of the quantum cell volume in the phase space.Moreover, analyzing the formula for entropy S 1/2 as a function of the nonclassical- ity parameter δ in time, we can conclude that this quantity may be a nonmonotonic function, generally.This is because during the time evolution of the WDF in dispersion media, where quantum phenomena play the essential role, changes in the nonclassicality parameter are observed in the form of its initial increase and subsequent decrease to a specific value.We often come across such situations in transport processes.
Let us return to the Wigner-Rényi entropy given by Eq. ( 8).This expression can be used to discuss the entropic uncertainty relation in the phase space 46,50,[84][85][86][87] .A natural question arising from this issue concerns the lower bound of the Wigner-Rényi entropy with the Rényi index α 1/2 .For the study of this problem, we use the results of Lieb's work 88 , where the author proved that the lower bound for the L q -norm of the WDF with 1 q < 2 and the upper bound with q > 2 , are saturated only by the Gaussian states.Let us define the function I 2α (t) for α > 0 in such a way that it corresponds to the L q -norm considered in Lieb's work, namely, Following the result of Theorem 1 from Lieb's work 88 , we can find the upper bound of the integral (13) for α > 1 .Let us note that taking the natural logarithm scaled with 2 α−1 (π ) 1−α and then rescaling it with 1/(1 − α) , we reconstruct the expression for the Wigner-Rényi entropy given by Eq. (8).Because for α > 1 , the term 1/(1 − α) in Eq. ( 8) is negative, and the logarithm is a monotonically increased function, the inequality from Theorem 1 changes sign, resulting in the lower, not upper bound for the Wigner-Rényi entropy for α > 1 .Referring to Theorem 2 of Lieb's work 88 , we arrive at the lower bound for 1/2 α < 1 .Now, we follow the same reasoning as in the previous proof for α > 1 .This result gives us a complete lower boundary for the Wigner-Rényi entropy with α 1/2 .The boundary can be expressed in the form of the following inequality which according to Lieb saturates iff S α (t) is calculated for the Gaussian WDF.Moreover, we note that for α = 1/2 , we reproduce our previous result, i.e. S 1/2 = ln 2π , but with some qualitative difference, namely, (8) www.nature.com/scientificreports/we obtained this result for the class of the Wigner-positive states ( δ = 0 ) of which the Gaussian states are a particular case.Finally, let us note that the formula given by Eq. ( 14) extends the results of Dias et al. 82 to the case α 1/2 .However, this extension is a succession of the used interpretation of the WDF.Secondly, the authors only consider the Shannon entropy corresponding to Rényi index α = 1.
At the end of this discussion it is worth pointing out that Dias and Prata 48 proposed the lower bound of Rényi-Wigner entropy of |ρ(x, p, t)| for two separate cases of Rényi index, namely α ∈ (1, 2) and α 2 .In the same work, they proved the van Herstraeten-Cerf conjecture of the Wigner-positive states for α 2.
Computational method
The general solution of the Moyal equation ( 4) can be written in the exponential form, such that where Û (t) is the time evolution operator.For the Moyal equation, the operator Û (t) is given by the following formula where T = −i p∂ x /m and Û = U x + i ∂ p /2 − U x − i ∂ p /2 are kinetic and potential operators, respectively.
In the numerical calculations, the solution can be obtained by acting on WDF repeatedly with the operator Û (�t) , where t = 10 a.u. is the time increment.Calculation-efficient form of the time evolution operator can be derived by applying the symmetric Strang splitting formula [89][90][91] Using the partial Fourier transforms in the first or second variable defined as follows, one can obtain a formula for a single step of the time evolution of the WDF in the form where the auxiliary function U � (x, y) is defined as the central difference of the potential energies, namely Since Eq. ( 4) is defined for x, p ∈ R , in order to perform numerical computation the phase space is limited to the box of size [−L x , L x ) × [−L p , L p ) with periodic boundary conditions imposed by the numerical method.Thus the used values of L x and L p are large enough to assure that the WDF vanishes in the vicinity of the boundary during the whole simulation.The computational box is discretized into the grid of size of N x × N p in the following way, where m ∈ {0, 1, . . ., N x − 1} , n ∈ {0, 1, . . ., N p − 1} , and the steps on the computational grid are � x = 2L x /N x , � p = 2L p /N p .Numerical calculations were performed in atomic units ( = e = m = 1 ) and with the follow- ing parameters of the computational grid: N x = N p = 1024 , L x = 1500 a.u. and L p = 0.5 a.u.For the efficient calculation of the Fourier transforms the Fast Fourier Transform (FFT) algorithm was used.
Initial state
Numerical determination of the time evolution of the WDF from Eq. ( 22) requires establishing the initial condition.For this purpose, we assume the initial WDF in the form corresponding to the superposition of two Gaussians with the same widths, δ x , representing coherent states localized at different phase space points denoted by (x 1 , p 1 ) and (x 2 , p 2 ) .According to this description, the general expression for this WDF can be written in the form (15) ρ(x, p; t) = Û (t)ρ(x, p; 0), ( 16) www.nature.com/scientificreports/where parameter β controls the amplitude ratio of the states, and θ is the relative phase between them.Besides this, the normalization factor A equals This form of the initial condition can be regarded as a generalization of the result presented in Ref. 52 for the coherent superposition, and we refer to this as the Schrödinger cat state 10 (SC state).For further investigation, we assume that both Gaussians move in the dispersive medium in opposite directions with the same value of the initial momentum, i.e. p 2 = −p 1 .On account of that, we can simplify expression (26) to the following form with the normalization factor A given by the formula Referring to the previous study 52 , we assume the following values of the parameters which characterize this initial condition, namely β = 0.5 , δ 2 x = 500 a.u., x 1 = −300 , x 2 = 300 and p 1 = 0.15 a.u.Owing to this selection of the parameters, the presented initial condition (27) is the phase-space representation of the SC state given by the superposition of two well-separated and well-localized Gaussians approaching each other with the same momenta.Let us note that the relative phase still remains to be a free parameter of the initial condition, creating the possibility of researching its influence on the WDF dynamics generated by solving the Moyal equation in inhomogeneous dispersive media.Of course, the problem formulated in this way is too general; hence we decided to conduct a study on the three-element class of the considered bimodal states, which are well recognized the literature.The class of these states consists of the odd and even SC states for which the relative phases are θ = 0 and θ = π , respectively, and the Yurke-Stoler state with θ = π/2 .The influence of θ on the value of the normalization factor given by Eq. ( 28) has been found to be negligibly small due to the numerical values of the prefactors exp −2p
Results and discussion
We now turn to the results of the numerical simulation, in which we investigate the dynamics of the three-element family of SC states described by the WDF in the form given by Eq. ( 27), which move in a dispersive medium with the repulsive potential barrier.The barrier has the form of the Gaussian function, where U 0 is the strength of the barrier having width w, centered at X B .The barrier is assumed to be located in the center of the simulation region ( X B = 0 ) whereas the remaining parameters take on the following values: U 0 = 0.008 a.u., w 2 = 50 a.u.; we will refer to this set of parameters as the standard parameters.An explanatory figure of the proposed setup is presented in Fig. 1.
We solved the Moyal equation ( 4) numerically by applying the second-order split-operator method according to Eq. ( 17).
The resulting time evolution of the probability density on the phase space, |ρ(x, p, t)| 2 , is presented in Fig. 2 containing snapshots at t = 1.5 , 2.2, 2.9 and 3.6 × 10 3 a.u.At these time instants, the probability density on the phase space occupies mostly that part of the space in the neighbourhood of the potential barrier, and the ( 25) dynamics of the state is therefore determined by the characteristics of the potential.Earlier, that is before the situation illustrated in Fig. 2, the state evolves freely (details of the dynamics in terms of the WDF are presented in the Supplementary Information S1).When the interaction with the barrier starts, all three states retain their original symmetry, as visible in Fig. 2a,e,i.Then, for the assumed standard parameters of the barrier, interaction with the barrier takes place in the above-barrier regime resulting in changes of the probability density on the phase space depending on the chosen initial state.For the SC states, both odd (Fig. 2b-c) and even (Fig. 2j-k), interaction with the barrier does not break the symmetry of the initial state.In the case of the odd SC state, the fringes visible in Fig. 2b are the result of quantum interference.A similar phenomenon occurs for the even SC state; due to the small amplitude, it is not visible in Fig. 2j, but snapshots of the even SC state showing this phenomenon are available in the Supplementary Information S1.The situation is considerably different for the YS state interacting with the barrier in the above-barrier regime.As shown in Fig. 2f-g, such interaction of the YS state with a symmetric potential barrier leads to asymmetry of the initial state resulting in most of the probability density on the phase space remaining to the left of the barrier.Evolution of the YS state produces interference fringes visible in Fig. 2f.After interaction with the barrier, the free evolution takes place again, as shown in Fig. 2d,h,l.The SC states remain symmetrical, while YS states stay asymmetric.Although the time evolution of the even and odd SC states has the same effect in terms of keeping the symmetry of the initial state, the interaction of these states with the barrier proceeds differently.It is worth noting that although during the interaction with the barrier the even and odd SC states preserve the initial symmetry of the probability density on the phase space, the dynamics of the interaction is different.That difference can be identified with help of two quantities, the nonclassicality parameter δ and the Wigner-Rényi entropy of order α = 1/2 .Both are shown in Fig. 3 for the even and odd SC states and the Yurke-Stoler state.As visible in Fig. 3, the Wigner-Rényi entropy and the nonclassicality parameter change almost in the same manner for each considered case, which is in accordance with the derived relation ( 12) linking δ and S 1/2 .Both of those dynamical characteristics, S 1/2 and δ , of the odd SC state ( θ = 0 ) have maximal values larger than in the case of the even SC state ( θ = π ).In addition, the characteristics of the even SC state approach the final constant value much faster.
The system was investigated for varying values of the parameters U 0 and w 2 to find out how this asymmetry depends on the strength and width of the barrier.First, for the fixed standard w 2 = 50 a.u. the strength of the barrier was varied in the range of U 0 ∈ {0.004, 0.008, . . ., 0.02} .Then, for the standard U 0 = 0.008 a.u. the bar- rier width was varied in the range of w 2 ∈ {50, 148, 216, 298, 500} .The investigated values of w 2 correspond to uniform changes of the barrier width w.
Figures 4 and 6 show the influence of the width and height of the barrier on the dynamical characteristics of the odd and even SC states.The nonclassicality parameter and the Wigner-Rényi entropy S 1/2 are also presented for the Yurke-Stoler state (Fig. 5) for various sizes of the barrier.In all figures, bold, dashed black lines indicate the Wigner-Rényi entropy for the standard barrier parameters that correspond to interactions in the above-barrier regime.The results presented in Fig. 4 show that for the even SC state, if barriers are higher than U 0 = 0.012 a.u. and wider than w 2 = 148 a.u., the peak values of the Wigner-Rényi entropy is the largest among the considered states, which also corresponds to the high nonclassicality expressed by the nonclassicality parameter δ .For barrier height equal U 0 = 0.004 a.u. the entropy not only has the lowest maximal value, but also it is the fastest to reach the constant final value.While in the case of variable barrier height, shown in Fig. 4a, the entropy and nonclassicality parameter take on the final value equal to the initial one, the situation changes for variable barrier width.As shown in Fig. 4b, only for the standard parameters of the barrier the initial and final values of S 1/2 and δ are the same.With an increase in the width of the barrier, the final values are rising.This allows us to conclude that increasing the width of the potential barrier results in an increasingly nonclassical state at the end of the simulation.It should also be noted that, in contrast to the change in height, the wider the barrier, the faster the entropy and nonclassicality parameters approach the final constant value.In the case of the YS state, it can already be observed for a barrier with standard parameters that the final value of the entropy and the nonclassicality parameter is different-smaller-than the initial value.This is related to the asymmetrization of the initial state.As can be seen in Fig. 5 lowering the barrier to U 0 = 0.004 a.u.results in bringing the final value closer to the initial value, leading to the conclusion that lowering the barrier has a positive effect on maintaining the initial symmetry of the YS state.Increasing the barrier height results in a larger asymmetry, up to U 0 = 0.02 a.u.where the trend reverses, and the final values of the entropy and nonclassicity parameter are close to the final values obtained for the standard barrier parameters.A similar effect on the symmetry of the final state can be observed when the barrier width increases.This means that the strength of asymmetrization can be modified by controlling the potential parameters.
Figure 6 shows the effect of changes in the potential parameters on the Wigner-Rényi entropy and the nonclassicality parameter of the odd SC state.Although the effect of the barrier height U 0 is similar to that for the case of the even SC state shown in Fig. 4, it should be noted that for all of the considered barrier heights the entropy reaches the lowest maximal values among the considered states.This means that the evolution of the odd SC state is less nonclassical than for the even SC state.Unlike the even SC state, in the case of changes in the width of the potential barrier for the odd SC state, there is no change in the final value of S 1/2 and δ relative to the initial value.Regardless of the parameters of the potential, the initial symmetry of the odd SC state will be always preserved after interaction with the barrier.As for the even SC state, we observe a faster flattening for lower U 0 , but the same trend is also preserved for w 2 changes.In contrast to the even SC state, in the case under consideration the wider the potential barrier, the later the Wigner-Rényi entropy and the nonclassicality parameter flatten out.
Concluding remarks
The phase-space formulation of quantum theory based on the Wigner distribution function allows us to take an alternative view of the quantitative description of the dynamical aspect of quantum systems.Using this approach, we have analyzed the dynamics of the three-element family of Schrödinger cat states in the phase space.The distinction between members of this family is based on the adoption of the established values of the relative phase encoded in the general form of the Schrödinger cat state.In the present studies, we have focused on a quantitative description of dynamic changes in the nonclassicality of the considered states during their interaction with the repulsive barrier in terms of the fractional Wigner-Rényi entropic measure.The results obtained by us can be divided into two parts.
In the first part of the presented studies, we have introduced the Wigner-Rényi entropy using the square of the modulus of the Wigner function.This quantity is interpreted as the probability density in the phase space only for pure states.Due to this observation, we have found a relationship between the Wigner-Rényi entropy for the index Rényi α = 1/2 and the nonclassicality parameter introduced by Kenfack and Życzkowski, which is regarded as a measure of the area in the phase space occupied by the negative part of the Wigner function, and which is also considered as an indicator of quantumness.Furthermore, we have found a lower bound for the Wigner-Rényi entropy generated by the modulus squared of the Wigner function.
In the second part of our studies, the previously introduced concepts have been used to study the dynamics of the three-element family of Schrödinger cat states consisting of the odd, even and Yurke-Stoler states, with a scattering center in the form of the Gaussian barrier.At the initial moment, the considered family of quantum states has been modeled by a coherent superposition of two wave packets located on opposite sides of the barrier with appropriately selected relative phases in such a way as to reproduce the above-mentioned family.In our studies, we have used the wave packets having equal but oppositely directed momenta, moving towards a scattering barrier placed exactly in the middle of the distance between the packets.As a result of the performed simulations, we have noticed that independently of the choice of the relative phase, the Wigner-Rényi entropy measuring the nonclassicality of the considered states in the region of their interaction with the barrier changes significantly.Simultaneously, we have observed that the degree of nonclassicality for the even and odd Schrödinger cat states is preserved, while for the Yurke-Stoler states, we have observed a decrease of the degree of nonclassicality with respect to the initial moment.The obtained result is related to the emergence of an asymmetry in the interaction of packets forming the Yurke-Stoler state with the Gaussian barrier.
We have carried out calculations in two energy regimes, i.e. we have considered the case of scattering of states over the barrier and their tunneling.The transition between these regimes has been possible by controlling the height of the barrier while maintaining all the parameters of the Schrödinger cat states.As a result, we have observed that the considered fractional Wigner-Rényi entropy has the same value before the odd and even states enter the region of interaction with the potential barrier and after they leave.In contrast, for the Yurke-Stoler states, we have not observed any change during scattering over the barrier, while during the tunneling the Wigner-Rényi entropy decreases from its initial value.In addition, we have also investigated the influence of the Gaussian barrier width on the dynamic changes of the fractional Wigner-Rényi entropy.In this case, we have noticed that increasing the barrier width causes an increase in the Wigner-Rényi entropy for all the considered states during their interaction with the barrier.In contrast, after leaving this area, the even and Yurke-Stoler states are characterized by greater fractional Wigner-Rényi entropy compared to the initial value.Nevertheless, we do not observe such a change in the odd state's fractional Wigner-Rényi entropy.
In conclusion, the main objective of the presented studies has been to show that the Wigner-Rényi entropy for the Rényi index equal one half can be interpreted as the logarithmic measure of the area occupied by the negative part of the Wigner function in the phase space.The obtained result is significant because it shows the direct relationship between the considered entropy and the nonclassicality parameter of Kenfack and Życzkowski 43 .We have demonstrated that this fractional entropy can be interpreted in terms of the quantumness of states expressed by the negativity of the Wigner function.At the same time, we have found the lower bound of the Wigner-Rényi entropy defined for the probability density distribution in the phase space understood as the squared module of the Wigner function representing the pure state.In the limiting case of the Rényi index equal one the result is consistent with the known lower bound for the Shannon entropy for pure states.The results obtained from the numerical simulations prove the validity of the introduced tool, and we expect that it will be an introduction to further research on fractional Wigner-Rényi entropies for pure states.
Figure 1 .
Figure 1.Conceptual sketch of the Schrödinger cat state interacting with the repulsive potential barrier with varying parameters.
Figure 2 .
Figure 2. The phase space snapshots of the probability density on the phase space, |ρ(x, p, t)| 2 , for the even SC state (first row), YS state (second row) and odd SC state (third row) at different times during interaction with the barrier in the form of a Gaussian potential.The equipotential lines of the classical Hamiltonian of the system under consideration are indicated by the grey contour lines.
Figure 3 .
Figure 3. Influence of the relative phase θ on the Wigner-Rényi entropy of order α = 1/2 , for the standard parameters of the barrier.Inset shows the nonclassicality parameter δ. | 9,245.6 | 2023-09-27T00:00:00.000 | [
"Physics"
] |
Brain Imaging and Overall Survival after Allogeneic Hematopoietic Cell Transplantation
This work was carried out in collaboration between all authors. Authors BTC and HO designed the study, wrote the protocol and managed the data analysis. Author AD did the statistical analysis. Authors BTC, AOO and HO wrote the first draft of the manuscript and did the literature research. Authors CT and PP contributed to data collection and clinical correlation. All authors revised the manuscript and approved the final manuscript. ABSTRACT Aim: We conducted a retrospective review of all brain imaging studies in the first year after allogeneic haematopoietic cell transplantation (HCT) to determine (a) the percentage of patients with CNS neurological complications based solely on undergoing brain imaging, (b) transplant-related risk factors of undergoing brain imaging, and (c) overall survival in the patients with neurological compared to those transplant who did not have brain Comparisons between patient groups with brain imaging and without brain imaging were tested using the Pearson chi-square test. Survival analyses with at date of transplant and used Kaplan-Meier methods. Results: Of 543 HCT recipients, 128 patients (24%) underwent brain imaging during the first year after transplantation. There was a greater risk of brain imaging in unrelated donor transplants and in lymphoid as opposed to myeloid malignancies (respective hazard ratios 1.45 and 1.43, P =0.04). Overall survival was significantly worse in unrelated donor transplants (hazard ratio 1.42, P =0.003) and in cord blood transplants (hazard ratio 1.68, P =0.02). Landmark survival analysis of patients alive 1 year after HCT showed worse survival over the next 5 years in those who had brain imaging in the first post transplant year ( P <0.0001). Conclusion: These results suggest that development of neurological symptoms or a sign sufficient to prompt clinicians to order brain imaging early after HCT identifies a poor prognosis in transplant population.
INTRODUCTION
Reported incidences of neurological complications after allogeneic hematopoietic cell transplantation (HCT) vary widely in the transplantation literature. The highest reported incidence is 46% in adult patients who underwent unrelated-donor HCT [1], and 46% in paediatric patients who had their transplants in the early years between 1976 and 1983 [2]. A more recent study of paediatric transplants reported a much lower incidence of 15.8% [3].
Most clinical studies are based on retrospective chart reviews. One prospective study of 115 consecutive transplants in the early 1990s reported neurological complications in 17% of patients in the first 3 months after transplant [4]. A difficulty in making comparisons to studies that reported lower incidences of neurological complication [5,6] may be one of definition. The clinician may have restricted the diagnosis of neurological complication to more severe CNS problems or to structurally apparent diagnoses.
Recently we reported brain imaging results in 128 consecutive transplant patients who underwent brain magnetic resonance imaging (MRI) or computed tomography (CT) imaging in the first year after HCT [7]. Indications for brain imaging in these patients included encephalopathy or confusion (40%), headache (20%), focal neurological signs (12%), seizures (10%), trauma or fall (3%), other indications (16%). In the present report, we analyze data from these patients along with consecutive HCT patients seen at the same time from the same centre who did not have brain imaging. Our purpose is to determine (a) the incidence of central nervous system (CNS) neurological complications based solely on ordering of brain imaging, (b) transplant-related risk factors that lead to brain imaging, and (c) overall survival in those patients with neurological complications compared to those patients who did not have brain imaging.
MATERIALS AND METHODS
Subjects were 543 consecutive recipients (August 2004-August 2007) of allogeneic HCT for haematological malignancy or a related disorder. Donors were HLA-matched siblings or unrelated. Subjects were followed for overall survival for up to 6 years after HCT through October 2010. This study was approved by our Institutional Review Board.
Comparisons between patient groups with brain imaging and without brain imaging were tested using the Pearson chi-square test for categorical data, and the two-sample t-test for continuous measurements. Tests were two-sided and the cut-off for statistical significance was 0.05. Survival analyses with outcome time-to-brain-scan started at date of transplant and used Kaplan-Meier and Cox proportional-hazards methods. Regressions were reported as models with univariate predictors, and as reduced multivariable models including just the significant predictors. Risk ratios were reported with 95% confidence intervals. Timedependent Cox proportional-hazards model regressions with outcome of overall survival (time to death or last contact) were done along with Landmark Kaplan-Meier analysis starting at one year after transplant on all patients surviving one year.
The results of the categorical analysis on brain imaging data are shown in Table 2. Of the 543 patients, 128 (24%) had brain imaging during the first year after HCT. A total of 173 CTs and 103 MRIs were done in these 128 patients. Median time between transplant and the first brain image was 1.33 months, with a range of 0.03 to 11.93 months. Patients who received HCT from HLA-matched unrelated donors had a marginally higher percentage of brain imaging studies (27%, 71 of 263 patients) than those who received HCT from sibling donors (20%, 57 of 280 patients). This difference was not statistically significant by categorical analysis (P=0.07). But statistical significance was shown in one predictor of the two-predictor final multivariate Cox model with a hazard ratio of 1.45 ([95% CI 1.02-2.06], P=0.04). A greater number of patients with lymphoid (27%, 72 of 265 patients) as opposed to myeloid (20%, 56 of 278 patients) malignancies had brain imaging. This difference also was not significant by categorical analysis (P=0.06) but was significant as the other predictor of the final multivariate Cox model chosen by backwards elimination (hazard ratio 1.43 [95% CI 1.01-2.03], P=0.04). The univariate hazard ratio for recipients of cord-blood HCT compared to peripheral-blood transplants was 1.81 ([95% CI 0.97-3.38], P=0.06). No effect on the likelihood of undergoing brain imaging was found in categorical analysis with other parameters, including conditioning regimen (Non-myeloablative versus Fully ablative, P=0.28), type of stem cells transplanted (bone marrow, cord blood or peripheral blood, P=0.13), or disease status at the time of HCT (P=0.21). Also, counts of CD34+ or CD3+ cells in the HCT did not affect the likelihood of undergoing brain imaging (data not shown).
Risk of Mortality and Survival Analysis
Univariate Cox-regression analysis of overall survival in all 543 patients ( To determine the effect of brain imaging on overall survival in all 543 patients, we constructed a multivariate model chosen by backwards-elimination Cox. This final reduced model (P<0.0001) estimating hazard ratio of death had two predictor terms: patients who had brain imaging (treated as a time-dependent variable) having a hazard ratio of 3.59 (95% CI 2.81-4.59) over patients who did not have brain imaging (P<0.0001). Fig. 1 shows overall survival beginning year one post HCT in the 381 patients who survived the first year after transplant. Overall survival is graphed as a Landmark analysis starting one year posttransplant as there is no easy depiction of survival in the case of time-dependent variables starting at date of transplant. We take advantage of the fact that all brain imaging relevant to this study is complete by one year post-transplant, fixing the categories of comparison in advance of the start point of Landmark analysis. Survival is worse at year 2-6 in the 65 patients who had brain imaging in year 1 compared to the 316 patients who did not have brain imaging (P<0.0001).
DISCUSSION
We defined patients as having had a CNS complication if they underwent diagnostic brain imaging for any indication during the first year after HCT in this study. Based on this criterion, we report neurological complications in 24% of patients in the first year following HCT, an incidence similar to published results [2][3][4][5][6]8]. This method of determining the incidence of neurological complications eliminates selection bias in chart reviews and inconsistencies in hospital coding. However, a clear limitation is that the decision to obtain brain imaging is solely at the clinical discretion of the transplant physician. There may be practitioner-related differences between transplant centres or even at the same centre at various times. Minor CNS complications where brain imaging may be considered unnecessary would be missed and this method also fails to capture peripheral nervous system complications of transplantation. Despite these limitations, our result with the multivariate model is similar to published work reporting a higher incidence of neurological complications in unrelated-donor transplants [3].
Descriptive survival analysis in HCT studies generally use time-to-event measurements, from the date of transplant to the time of death or last contact. However, analysis of our study was complicated by the fact that brain imaging occurred at varying times after the date of transplant. To address this methodological complexity, we included time-dependent covariates in assessing the effect of brain imaging on survival. We also employed Landmark Kaplan-Meier analysis starting one year post-transplant on patients surviving one year to ensure that all relevant diagnostic brain imaging studies had been completed by the start date of descriptive survival analyses.
Consistent with the HCT literature, we found higher mortality with cord blood and unrelateddonor grafts compared to sibling-donor transplants. There was significantly higher mortality in patients with active disease or >1 relapse at the time of HCT. Nevertheless, the greatest difference in overall survival was in patients who had neurological complications. Similar higher mortality has been reported in retrospective clinical studies [4,6]. Central nervous system (CNS) pathology such as intracranial haemorrhage, abscess, and CNS metastases were considered the major cause of death in 17% autopsy series of HCT patients [9,10]. Clearly, serious structural brain lesions contribute to the mortality of transplant patients. However, many of the neurological complications in HCT patients (approximately 50% in the more inclusive clinical series) are toxic-metabolic encephalopathy or other non-structural CNS complications [8]. Only a third of the 128 patients who had brain imaging in our study had structural abnormalities found on brain imaging (cerebrovascular complications in 10 patients, CNS infection in 9, subdural fluid collection in 6, CNS tumour recurrence in 11, and drug toxicity in 5 patients) [7]. The relatively high mortality in the group of patients without brain structural abnormality may appear unexpected, because most toxic-metabolic encephalopathy that may prompt clinicians to obtain brain imaging is not in itself lifethreatening. However, as recent studies have shown, instances of severe metabolic derangement associated with encephalopathy do occur in transplant patients [11,12].
The association of early brain imaging with decreased overall survival that we report here is analogous to published studies of poor prognostic implications of pulmonary and hepatic abnormalities in HCT recipients [13,14]. Results in our patients clearly show that the clinical indication for brain imaging was itself a strong predictor of shorter survival. It is likely that by selecting patients with symptoms and signs sufficient to prompt the transplant physician to order brain imaging, we have identified patients who have evolved or are evolving a problem-laden post-HCT course with an ultimately poorer prognosis.
CONCLUSION
These results suggest that development of neurological symptoms or a sign sufficient to prompt clinicians to order brain imaging early after HCT identifies a poor prognosis in transplant population.
CONSENT
Not applicable.
ETHICAL APPROVAL
This retrospective study was approved by Institutional Review Board (IRB). | 2,599.8 | 2013-01-10T00:00:00.000 | [
"Medicine",
"Biology"
] |
Load Frequency Control for Power Systems with Actuator Faults within a Finite-Time Interval
+is paper is concerned with the issue of finite-time H∞ load frequency control for power systems with actuator faults. Concerning various disturbances, the actuator fault is modeled by a homogeneous Markov chain. +e aperiodic sampling data controller is designed to alleviate the conservatism of attained results. Based on a new piecewise Lyapunov functional, some novel sufficient criteria are established, and the resulting power system is stochastic finite-time bounded. Finally, a single-area power system is adjusted to verify the effectiveness of the attained results.
Introduction
Load frequency control (LFC), as an integral part of automatic generation control in power systems, has been adopted to regulate the frequency deviation and tie-line power exchanges [1][2][3]. Added by the LFC strategy, the high-quality electric energy can be maintained over a certain range [4]. In general, constant frequency deviation may lead to unreliable frequency devices, transmission lines overload, etc. Meanwhile, owing to the large size of the power grid, it raises the difficulty in frequency control. erefore, it is a tough task to design suitable frequency control law. In practical applications, the loads are unexpected and unmeasurable, which indirectly regulate the system frequency. Accordingly, through the LFC strategy, the system performance can be guaranteed without affecting the generation capacity or frequency deviation. Up to now, the research on the LFC for power system gradually becomes a hot topic [5][6][7].
In networked control systems, various faults can be encountered due to the long-term utilization of components [8][9][10]. Note that the actuator faults are the source of instability and performance deterioration. To overcome the above shortage and improve the dependability, a great deal of attention has been shifted to actuator faults, and plenty of results have emerged [11,12]. However, the actuator faults are assumed to be time-unchanged, which limits the potential applications. As stated in [13], the so-called failure probability is common in the reliability industry, where failure rates can be governed by the Markov switching chain [14][15][16]. Despite the significant achievement has attained, no suitable attention has been devoted to the power systems.
On the other hand, Lyapunov asymptotic stability is most common in the literature, where asymptotic behavior can be expected over the infinite-time domain. Nevertheless, in reality, the desirable transient performance is very important in many physical systems, which causes the inapplicability of the Lyapunov stability. Following this trend, finite-time stability (FTS) has been studied [17][18][19], which concerns the dynamic behavior within bound over a fixed time interval instead of an asymptotical case. As is well known that FTS is different from the Lyapunov case, it gives more solutions of transient performance control. Owing to the merits of the FTS, many valuable achievements have been made over the past years [20]. However, to our knowledge, most of the previous results are assumed that the data communication keeps continuous between sensors and controllers. In the fields of sampled-data control law, this assumption is not accurate. In general, with respect to the demand of actual systems, the sampler may encounter component aging, data losses, etc. [21][22][23]. ese shortages may lead to unreliable periodic sampling. Fortunately, the aperiodic sampled-data control strategy is presented [24,25], which can efficiently deal with the aforementioned issues. However, the finite-time aperiodic sampled-data control for power systems remains unsettled, not mentioned to the LFC, which motivates us for this study.
Inspired by the above observations, we focus on the finite-time H ∞ load frequency control for power systems with actuator faults over the finite-time interval in this study. e main contributions can be summarized as follows: (1) different from the previous studies, to fully describe the randomly occurring actuator fault, the actuator fault is characterized by a homogeneous Markov chain. (2) To better characterize the actual demands of practical dynamics, a generalized framework of the actuator constraint is considered. (3) Apart from the traditional Lyapunov asymptotic stability, this study exploits the FTS for power systems and focuses on the finite-time control issue. By resorting to the piecewise Lyapunov theory, some novel results over the finite-time interval are reached. Finally, a numerical example is manifested to reveal the validity of the gained results. e remainder of this study is listed as follows. Section 2 provides a description of the problem. Section 3 presents the main results, and the simulation validation is exhibited in Section 4. Section 5 concludes the study.
Notations.
e notations of this paper are standard. ‖ · ‖ means the Euclidean norm. R indicates a set of n-dimensional matrix. E refers to the mathematical expectation. (λ max (A)/λ min (A)) means the largest/smallest eigenvalue of matrix A. Pr · { } means the occurrence probability. diag · { } represents a block-diagonal matrix.
Problem Formulations
Block diagram of single-area LFC power model is exhibited in Figure 1 [6]. Accordingly, the dynamic equation of power model can be listed as follows: and the system parameters are expressed in Table 1.
In single-area, the area control error (ACE) is interpreted as y(t) � βΔf due to the unaccessiblity of the tie-line power exchange. In reality, the actuator faults cannot be neglected for long-term utilization of components, which can be expressed as where . . , f). More specifically, r t , t ≥ 0 is identified as a right-continuous Markov chain taking values over a set S � 1, 2, . . . , S { } with generator Π � [π pq ] S×S , and its transition probabilities are inferred as where Δt > 0 and (lim Δt⟶0 o(Δt)/Δt) � 0, for q ≠ p and π pp � − q≠p π pq for each p ∈ S. Taking the ACE as the desired controller input of LFC, the output of the proportional-integral (PI) controller is asserted as where K P and K I signify the proportional and integral gains of the area, respectively. Let e purpose of this study is to solve the output feedback control for power system (6) with data sampling. erefore, the sampling sequence attained at a set of time instants. Added by the data sampling technique, only the measured signal y(t k ) can be released to the controller. Specifically, the sampling instants are represented as In light of periodic sampling instants, in this study, we consider the aperiodic sampling case. Following this trend, the sampling interval [t k , t k+1 ) is time-varying with the upper sampling period. us, one defines Letting K � K p K I , the PI-based sampled data LFC can be designed as Substituting (3) and (9) into (6), the closed loop power system can be governed by Before further derivation, some important contents are stated as follows.
r t . (26)
Based on Lemma 1, the following inequality can be devised: where It is well known that for any matrices H, one gets e aforementioned condition can be rewritten as On the other hand, for any matrices T 1 and T 2 , it is clear that
Mathematical Problems in Engineering
Substituting (23)-(32) into (20), it can be deduced that where Note that (33) is a convex combination of t − t k and τ − (t − t k ), in accordance with Schur complement; one can deduce that Θ(t) + (t − t k )H T Q − 1 1 H < 0 if and only if (15) and (16) hold. erefore, one can see that E LV t, r t < ρV t, r t + c 2 ω T (t)ω(t), t ∈ t k , t k+1 .
(35)
By integrating the both sides of (35) from t k to t and simple derivation, it yields Recalling the Lyapunov functional (20), we can get 6 Mathematical Problems in Engineering Substituting (37) and (38) into (36), we can obtain In light of (17), it can be concluded from (39) that E δ T (t)Rδ(t) < c 2 .
us, from Definition 2, we have to derive that power system (10) is SFTB over the time interval In the following, the actuator constraints (18) will be discussed. In light of (9), one has Recalling Assumption 2, it yields According to Schur complement, (18) can be guaranteed by (41), which completes the proof of eorem 1. □ Theorem 2. For given parameters ρ > 0, c 1 > 0, c 2 > 0, T > 0, ω > 0, u max , and matrix R, the closed-loop power system (10) is called SFTB with respect to (c 1 , c 2 , ω, T, R) and meet an , if there exists matrix P p > 0, On the other hand, other parameters are selected as τ � 0.2, c � 0.6, ρ � 0.2, c 1 � 0.1, c 2 � 1, ω � 0.8, R � I 4×4 , and T � 8. e control input u(t) is supposed to be constrained by |u(t)| ≤ u max � 2. By solving the linear matrix inequalities of eorem 2, the desired PI-type controller is derived as For graphically verifying the achieved results, we select the initial state disturbance ω(t) as
Mathematical Problems in Engineering
Added by the aforementioned controller, the simulation results are plotted in Figures 2-7. Figure 2 plots the simulated frequency, and Figure 3 displays the evolution of ACE. Meanwhile, the mode switching of actuator faults is shown in Figure 4, and control output is presented in Figure 5. Furthermore, with the disturbance given in Figure 6, the evolution of δ T (t)Rδ(t) is expressed in Figure 7. One can be observed from Figure 7 that the state of closedloop system stays in the prefixed region, which implies the resulting system is SFTB. Meanwhile, the input constraint is also satisfied.
Conclusions
In this study, the finite-time LFC problem for power systems with actuator fault has been considered. To better reflect the actual demands of practical dynamics, a generalized framework of the actuator constraint has been studied. Given the randomly occurring actuator fault, a homogeneous Markov chain-based actuator fault has been studied. Together with the piecewise Lyapunov theory, sufficient conditions have been attained. In the end, a numerical example has been applied to verify the effectiveness of the developed results.
Data Availability
No data were used to support the current work.
Conflicts of Interest
e authors declare that they have no conflicts of interest. | 2,410.8 | 2021-11-08T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
Revisiting the concept of a symmetric index of agreement for continuous datasets
Quantifying how close two datasets are to each other is a common and necessary undertaking in scientific research. The Pearson product-moment correlation coefficient r is a widely used measure of the degree of linear dependence between two data series, but it gives no indication of how similar the values of these series are in magnitude. Although a number of indexes have been proposed to compare a dataset with a reference, only few are available to compare two datasets of equivalent (or unknown) reliability. After a brief review and numerical tests of the metrics designed to accomplish this task, this paper shows how an index proposed by Mielke can, with a minor modification, satisfy a series of desired properties, namely to be adimensional, bounded, symmetric, easy to compute and directly interpretable with respect to r. We thus show that this index can be considered as a natural extension to r that downregulates the value of r according to the bias between analysed datasets. The paper also proposes an effective way to disentangle the systematic and the unsystematic contribution to this agreement based on eigen decompositions. The use and value of the index is also illustrated on synthetic and real datasets.
Scientific RepoRts | 6:19401 | DOI: 10.1038/srep19401 uncertainty, there is not, a priori, one dataset that is "better" than the other. As a consequence, an index to evaluate the agreement between X and Y datasets should be equal to that calculated between Y and X, a symmetry requirement often not satisfied by validation metrics. Further aspects that should be considered include the possibility of: (1) reformulating indices to show their relationship with better known metrics, such as r or RMSE 10,11 ; and (2) disentangling systematic from the non-systematic random difference in data agreement 6 . The systematic component can be interpreted as a regularized bias due to known or discoverable factors while the unsystematic element is a random component caused by noise or unknown factors. Differentiating between the two is interesting because the systematic difference can in principle be removed by regression analysis.
In this study we review various metrics proposed in the literature that can serve to assess dataset agreement. We then test and inter-compare their performances over synthetic datasets and thus point out some of their shortcomings. We justify why an permutation based index originally proposed by Mielke 7 can, after a small modification, be considered as the most appropriate because it satisfies all desired properties for such an index, including that of being interpretable with respect to the coefficient of correlation r. We also propose a refined approach to investigate separately the unsystematic and the systematic contributions to the dataset disagreement. Finally, we apply the available metrics and the proposed index to two real inter-comparison study cases: one related to time series of the Normalized Difference Vegetation Index (NDVI) acquired during the same time period by two different satellite missions, and the other related to two time series of gross primary production (GPP) estimated by different modelling approaches.
Desired properties of an index of agreement
Let us first generalise the problem by stating that we seek an index quantifying the agreements between datasets X and Y. The datasets are measured in the same units and with the same support. In the case of most geospatialized raster datasets, this would mean that both have the same spatial and temporal resolutions. An optimal index of agreement should be: • Dimensionless. This makes it independent of the unit of measurement. It facilitates the comparison of agreement among different pairs of datasets (if each pair has different units for instance) or within different parameter space (e.g. for multi-variable datasets). • Bounded between a lower bound (such as 0) corresponding to no agreement, and an upper bound (such as 1) corresponding to perfect agreement. A corollary is that higher values should always indicate higher agreement. • Symmetric, that is, it should have the same numeric value if the values of X i and Y i are switched in the equation. This is necessary because of the assumption that, for an assessment on agreement, there is no reference to compare to. • Easy to compute so that it can be used on large datasets. • Interpretable with respect to a widely accepted and familiar measure such the coefficient of correlation r.
Background on existing metrics
Perhaps the most straightforward metric to use to compare two different datasets is the Pearson product-moment correlation coefficient (r). It measures the degree of linear dependence between two variables and is expressed as: where X and Y denote the mean values of X and Y, and σ X and σ Y represent their standard deviations. The metric r is dimensionless and is bounded between − 1 and 1. A value of zero indicates there is no linear dependence between the two variables, while a value of 1 or − 1 indicate perfect linear dependence (the latter with a negative dependence). Another common approach is to consider that a statistical model can be fitted to the data. In this case, a measure of agreement can be inferred from the coefficient of determination ( ) R 2 , which indicates how well data fit the chosen model. In the case of linear models, the coefficient of determination is equivalent to the square of r, and ranges from 0 to 1. Another interesting property is that this number represents the proportion of the variance explained by the model. A disadvantage of both r and R 2 is that they only measure the strength of the relationship between the data, but give no indication if the data series have similar magnitude.
To assess if the values of two series match, a metric summarising the deviations between all pairs of values can be computed. A generic form to express such metrics is as follows 12 : where γ ≥ 1 and w i is a weight given to each deviation. If we use equal weights to simplify the notation, when γ = 2 the expression becomes that of the root mean square deviation (RMSD) and when γ = 1, we obtain the mean absolute deviation (MAD). Note that in this context, we prefer referring to these metrics as deviations instead of errors (i.e. RMSE and MAE), because we do not consider one dataset to be more correct than the other. With respect to r and R 2 , RMSD and MAD have the advantage that they provide an information on the absolute Scientific RepoRts | 6:19401 | DOI: 10.1038/srep19401 differences, or the biases, between X and Y. A disadvantage is that they are dimensional. To overcome this problem, they are sometimes expressed in percentage by dividing the deviations by one of the two mean values (X or ) Y . However, this renders the metric asymmetric, and they become unstable when the denominators are small values.
Various indices have been designed to evaluate model skill using the following formulation: where δ is a metric measuring deviations between model estimation and reference observations (such as MAD or the mean square deviation, MSD), and μ is a set of reference deviations. The logic behind such formulation is that μ should be never be smaller than δ, thereby fixing the upper-bound of ρ to 1. An example much used in hydrology is known a the coefficient of efficiency 5 that was later generalized 13 as: where X represents the observations and Y the model predictions. Such coefficients of efficiency range from −∞ to 1. While the lack of a negative bound may seem an inconvenience, such indices have another practical reference point in that when = γ E 0, this means that the model is not performing better than by just taking the mean 14 . The original version 5 , for which γ = 2, can be additionally be related to the R 2 10,11,14 . However, such indices are not symmetric, as replacing X by Y does not lead to the same results.
Willmott 6 proposed another index for evaluating model performance against measured observations that can be generalized as: In this case, the denominator μ is defined by summing the differences of all points in both X and Y with respect to X, the mean value of X. The original version was based on squared deviations γ ( = ) 2 , but was later changed 15 by using absolute deviations arguing that MAD (or in this case MAE since they refer to errors between predictions and observations instead of deviation) is a more natural measure of average error and is less ambiguous than RMSD (or RMSE) 12 . Another refinement of the index 16 seeked to remove the predictions Y i from the denominator, but as argued by others 14 , this amounts to rescaling the expression of the coefficient of efficiency ( ) E 1 while losing the interesting reference point of = E 0 1 . Again, these indices do not respect the symmetry requirement. Mielke 7,17 proposed another definition of the denominator μ based on a non-parametric approach using random permutations. In this case, the baseline consists of the sum of differences between each point and every other point. Such index can be expressed generically for different γ values as such: Unlike previous indices, these are symmetric. A disadvantage is that calculating μ is computationally expensive, especially when n is large. However, for γ = 2 the denominator µ 2 can also be expressed in a simpler form 7 (a mathematical demonstration is provided in the Supplementary Information) that can be easily computed: Watterson 8 proposed to construct an index to evaluate climate model performance by applying an arcsine transformation to Mielke's ρ 2 index: The arcsine function transformation is justified by Watterson as enabling a linear convergence to unity while preserving the original properties of Mielke's index 8 .
Contrasting with previous studies, Ji & Gallo 9 explicitly designed an index that would satisfy the symmetry criteria. This index, proposed for inter-comparison of remote sensing imagery, is defined as follows: In this case, the baseline of the denominator μ is defined based on both X and Y. However, this latter index has some serious shortcomings that are illustrated in the following analysis.
Metric inter-comparison
An inter-comparison of metric performance is here proposed for the indices mentioned in section 0 that satisfy the criteria of symmetry: Mielke's ρ 1 and ρ 2 permutation-based indices, Watterson's M index and Ji & Gallo's agreement coefficient AC. To do so, an artificial dataset is produced. Two independent random vectors of = n 1000 samples with mean of 0 and standard deviation of 1 are first generated and completely decorrelated v v using a Cholesky decomposition 18,19 . These two vectors, x 0 and y 0 , are then recombined together in order to generate two new vectors, x new and y new , that have a given imposed correlation (the details on how to do this are explained in the Supplementary Information section). The X and Y datasets are thus generated by respectively aligning together all x new and y new vectors generated for correlations ranging from − 1 to 1 (they are illustrated for some correlation values in Fig. 1). Using X and Y, the agreement metrics can be calculated and compared for pairs of vectors with equal means and variances, but with correlation ranging from − 1 to 1. To assess how metrics behave when there is a bias in the data, the Y dataset is further perturbed by introducing various systematic additive biases (in practice, by adding b to the data) and systematic proportional biases (by multiplying the data by m), as illustrated in Fig. 1. These systematic additive and proportional effects can also interact together either compensating or compounding the disagreement. Figure 2 illustrates how the four analysed metrics perform on the generated datasets as a function of imposed additive and multiplicative bias and original correlation between X and Y. A first remark regarding the plots in columns (a) and (b) of Fig. 2 is that for all metrics, there is an intersection of the iso-lines. The metrics are assumed to correctly portray a decrease in agreement when there is an increased systematic perturbation for all types of correlations. Crossing of these iso-perturbation lines means that this assumption is violated. For AC under mild shifts of b or rescaling with m, abnormal behaviour can be seen even under moderate correlation values (such as with r between 0.5 and 0.7). For the ρ 1 , ρ 2 and M all lines cross only at = r 0. This may be considered to be less inconvenient, as the negative values of the indices could be used to evaluate how much datasets agree in magnitude despite disagreeing in sign. Yet, this adds ambiguity in the interpretation of the index which is not desirable. A second point is that Ji & Gallo's AC is not negatively bounded by zero as it is designed to be. This happens even when no systematic perturbation is added to the data (i.e. when = b 0 and = ) m 1 and even for fairly high positive correlation. These are conditions which could be easily expected for most dataset inter-comparison, suggesting how Ji & Gallo's AC should be avoided.
The systematic additive and proportional biases can interact. To illustrate this, column (c) of Fig. 2 shows the value between the index calculated for 2 vectors with a given combination of biases minus the index value calculated from the same vectors without any bias. This is only shown for a given correlation of = .
r 0 8. This graphical representation can help illustrate the sensibility of an index to small changes in b and m. Most indices react similarly, with the notable exception of AC. Ji & Gallo's AC index can be higher (i.e. more agreement) with a combination of small biases than without any bias at all.
To summarize the outcome of this analysis, it can be stated that all metrics have at least one shortcoming: at some point or another, smaller index values counter-intuitively represent higher agreement. For all of them, it is also unclear how they can be related to the coefficient of correlation. Additionally, Ji & Gallo's AC has strongly undesired behaviour in the presence (but also in the absence) of bias. While Mielke's ρ 1 index is computationally expensive, the ρ 2 index, with its simplified expression, appears to be a suitable candidate for dataset comparisons when the correlation is zero or positive. However, from the mathematical formulation proposed by the author it is not explicit how ρ 2 is related to the coefficient of correlation. We believe that this latter point deserves further investigation because the user of the index will typically have a clear understanding of what a correlation value means, but will not be familiar the values taken by agreement index itself.
Rationale for choosing the right metric
If we can accept the use an index based on MSE rather than MAE, we argue that the correct metric to choose should be a slightly modified version of Mielke's ρ 2 index. This argumentation stems from the idea that for an index constructed based on the structure of equation (3), the objective should be to define the denominator μ as the maximum value that the numerator δ can take. Finding the smallest value that maximizes the numerator (i.e. its supremum) is important in order to ensure having an index with the maximum possible sensitivity. For MSE based indices, it can be shown (see Supplementary Information) that the numerator can be rewritten as: where the last term is proportional to the covariance between X and Y. One way to ensure δ µ ≤ is to create an index λ f containing this covariance term explicitly in the denominator and constrain it to always be positive: The four terms in the denominator can be represented geometrically as illustrated in the Supplementary Information section. As a result of adding the covariance term explicitely, the index λ f ensures that when X and Y are negatively correlated, µ δ = , resulting in an index equal to zero when ≤ r 0 as can be seen in Fig. 3. However, when > r 0 the denominator is needlessly inflated by this covariance term, as the numerator will always be smaller due to the negative sign in front of the covariance term in equation (10). To solve this issue, we propose to define the index, which we refer to simply as λ, as follows: By recognizing the expression of the variances in equation (13), it can be rewritten more succinctly as: revealing the similarity with equation (7) and the simplified expression of Mielke's ρ 2 permutation index. As shown in Fig. 3, the resulting index is identical to ρ 2 for positively correlated vectors while remaining at 0 when < r 0.
The index has the additional desirable property that, when ≥ r 0 and when there is no additive nor multiplicative bias, it takes the value of the correlation coefficient. If there is a bias, the index will take a lower value than r according to a multiplicative coefficient α that can only take a value between 0 and 1. Using equation (10), it can effectively be demonstrated (see Supplementary Information) that: The advantage of this property is that the index value can be immediately compared to r, which is a metric most practitioners are familiar with. Any deviation from r indicates an increase in bias proportional to α. Willmott 6,20 proposed that his indices of agreement could provide further insight by separating the effects due to the systematic from the unsystematic components of the deviations. This idea can be generalized to any index formulated using equation (3) . The unsystematic index, ρ u , can be interpreted as the value that the standard index would take if all bias is disregarded, which therefore relates to the noise around a line passing through the data. The systematic index, ρ s , is more difficult to grasp. A better way to understand the information it contains is presenting it as the proportion of deviations composed of systematic noise δ δ = / f sys s .
Separating the unsystematic from the systematic contribution
To calculate these new derived indices, the relationship between X and Y must first be characterized, which then allows the computation of δ u , and finally that of δ s by subtracting δ u from δ. The theoretical relationship between X and Y is assumed to be linear: = + Y a bX. Willmott 6 uses an ordinary least-square regression to estimate a and b. This may be acceptable when the X dataset is considered to be a reference, but not when trying to establish agreement without assuming a reference because of a violation of the symmetry between X and Y, i.e. a regression of X on Y is not equivalent to that of Y on X. To solve this issue, Ji & Gallo 9 propose to use a geometric mean functional relationship (GMFR) model 21,22 , for which b and a are derived as follows: where both Ŷ i and X i need to be obtained from the GMFR regression. Both these approaches have the same flaw. In order to be coherent with the definition of the total deviations, the unsystematic deviations should be calculated orthogonally from the = + Y a bX line i.e. as if the = + Y a bX line would be the 1:1 line. We propose a solution to solve both problems (i.e. the definition of the line and the calculation of the deviations orthogonally) by working with the principal planes of the X-Y cloud. By applying an eigen decomposition to the covariance matrix of X and Y, we obtain the two eigenvectors describing the principal axes of the cloud of points. The first eigenvector can serve to define the line of the principal axis in the X-Y space (i.e. the = + Y a bX line). The second eigenvector can be used to calculate the vector h containing the distances of all X-Y points orthogonally from this principal axis (for more details on how to calculate the eigenvectors, the resulting = + Y a bX line of the principal axis and the vector h, see the Supplementary Information). The unsystematic mean squared deviation can then be calculated based on the h distances as follows: Figure 4 provides a geometric representation of how these unsystematic squares differ from the total squares calculated with respect to the 1:1 line. Figure 4 also illustrates geometrically how these unsystematic squares differ in surface from what is proposed by Willmott 6 and Ji & Gallo 9 using the formulations of equation (19) and (20).
An important remark regarding the unsystematic index ρ u is that, although it disregards any bias, it does not mean it is equivalent to the correlation. Neither is ρ u equivalent to α in the case of λ. The difference can be appreciated by considering a point cloud and rotating it. This alters their correlation, but thanks to the eigen decomposition, ρ u will remain the same. For the case of λ, this also results in positive values for λ u when < r 0, which can be interpreted as a measure of noise in the data.
In case the deviations δ are to be characterized by absolute differences instead of squared differences (for indices such as ρ ) 1 , the equivalent unsystematic mean absolute deviation can also be calculated from h as follows:
Demonstration for real applications
To illustrate how the proposed index can be used in real case studies and how it compares with other metrics, some examples using actual data are provided. Geophysical datasets are typically structured based on the familiar 2 or 3 spatial dimensions, plus the temporal dimension, resulting in time series of geospatial data. It is often interesting to evaluate separately the temporal evolution of the spatial agreement and the spatial patterns of temporal agreement with dedicated protocols 23 . For the sake of brevity a limited analysis is provided here, in which the first example focuses on temporal agreement of time series of satellite imagery, while the second example illustrates spatial agreement of different modelled gross primary productivity (GPP) for a single moment in time.
Temporal comparison of two satellite observation datasets
The first study case consists of satellite measurements of the Normalized Difference Vegetation Index (NDVI) obtained from 1 October 2013 up to 31 May 2014 over North-West Africa. The spatial resolution is 1 km and the temporal resolution is a dekad (a dekad is a period resulting of the division of each calendar month into 3 parts, which can thus take values of 8, 9, 10 or 11 days). Data is obtained by two different instruments on-board of two different satellite platforms: SPOT-VEGETATION and PROBA-V (these will be respectively referred to as VT and PV for simplicity). PV data are available from the Copernicus Global Land Service portal 24 , while VT archive data are provided courtesy of the JRC MARSOP project 25 . Although the geometrical and spectral characteristics of the satellites and the processing chains of the data have been designed to be as close as possible, differences between the products are still expected because the instruments are not the same. The interest here is to quantify where in the region do the time series disagree. Since there is no grounds to argue one should be a better reference than the other, a symmetric index of agreement should be applied on each pair of time series, resulting in values that can be mapped spatially.
Results are presented in the maps of Fig. 5. All maps show the expected patterns of temporal agreement: areas with a strong dynamic NDVI signal, such as the Northern cultivated areas, have a higher agreement than desert areas where the signal is mostly composed of noise. However, there is a large difference in where each metric provides negative values: The λ map shows no negative values, the map of Watterson's M metric takes negative values only were the correlation is negative, but the map of Ji & Gallo's AC index shows vast areas of negative values throughout the area. The comparison between λ and r reflects the added-value in using the former, which incorporates the biases not present in the latter. The magnitude of these biases with respect to the total deviations can be spatialized in the f sys map, while the agreement of the datasets irrespective of these biases is shown in the λ u map. From the selected time series shown in Fig. 6, one can better appreciate differences in the numerical value of the different metrics with respect to the analysed data (temporal profiles and scatterplots). By focusing on the profiles of Fig. 6c,d collected over very arid areas, the better performances of λ as compared to AC are clear. Both profiles show limited temporal variability and reduced correlation. AC shows a negative (and out of bounds) value of − 1.668 for profile c and a positive and higher value (0.227) for profile d that has a lower correlation and a clear bias. λ instead shows value lower than the two correlations and decrease for profile d as compared to c. Interestingly enough, λ u for profile d informs us that by applying a linear transformation, the agreement would be greatly improved.
Spatial comparison of two geophysical products
The second case study consists of estimations of gross primary productivity (GPP) at global scale for a given moment in time: June 2007. The first dataset is the NASA MOD17 product 26 , produced by combining canopy biophysical information derived from satellite remote sensing products with meteorological and land cover information, with some adjustments based on localised in situ flux-tower observations. The product used is a version that has been aggregated to a spatial resolution of 0.05° and a monthly temporal resolution (available here: 27 ). The second dataset is the MPI MTE-GPP product 28 constructed using a decision tree statistical approach to upscale information from flux-towers up to a 0.5° grid, aided by gridded meteorological and remote sensing co-variables (available here: 29 ). Both approaches employ similar sets of information but using considerably different methodologies, and again, there is no reason to consider one is better than the other.
The products are to be compared spatially for sets of equal reasoning areas. These areas are defined by a combination of climatic zones 30 and land cover types (based on the MODIS 2007 MCD12C1 Land cover product 31 ). All pixels within a given area in one product are compared to the corresponding pixels of the other product. The results are summarized as 2-D histograms in Fig. 7 and Table 1. The examples shown include 6 cases in which the AC value is below zero. As can be understood from the index definition (eq. (9)) and from the plots in Fig. 7, this typically occurs when the means are very similar while the variances are strongly dissimilar. The examples also show how M is systematically lower than λ (as expected due to its arcsine transformation), which makes the result less comparable to r. Good examples include evergreen needleleaf forest (case a) and shrublands (case l) which are very close to having no bias, and thus α close to 1, but M considerably lower than r. Three cases (g,h and j) also illustrate how the proposed index takes a value of 0 when < r 0, but λ u does not. Finally, high values of f sys (such as case j) help to quickly identify when the scatter clouds are not on the 1:1 line.
Concluding remarks
Through numerical evaluation of different proposed metrics, this paper shows that a modified version of the index of Mielke is preferable to others. This index, named here λ, is adimensional, bounded, symmetric, easy to compute and directly interpretable with respect to the commonly used Pearson coefficient of correlation r. This index can basically be considered as a natural extension to r that downregulates the value of r according to the bias encountered in the data. Table 1. Agreement between two global gross primary productivity products for different land cover types across different climate zones in either the northern (N) or southern (S) hemispheres. ENF, EBF and DBF stand respectively for evergreen needleleaf, evergreen broadleaf and deciduous broadleaf forests.
The logic behind the original conception of the index by Mielke 7 , based on all the possible permutations between the elements within the two datasets, intuitively suggests how its denominator is indeed the maximum possible value that the mean sum of squares can attain. Because of the mathematical properties of these squared deviations, it is possible to rewrite this index in an expression based on variances instead of permutations, making it much simpler to compute. Unfortunately, we have not succeeded to generalize the structure of the (easily computable) index to be used with other metrics of deviations, such as the mean absolute deviations. However, the demonstration of how to disentangle the unsystematic from the systematic contribution in the agreement using an eigen decomposition could be applied to any other type of metric.
The scope of the index remains to be a pragmatic extension of r and thus used in a context where a linear functional agreement is wanted. It is not intended as a tool to explore new functional associations in the data (such as the maximum information coefficient 32 ). However, its use could go beyond comparing dataset agreement symmetrically, and join the collection of existing methods 2 to characterize model performance with respect to a reference. Also, the index was demonstrated here with case studies of spatio-temporal gridded data, but it should also be usable for any pair of vectors of any kind of data, just as r. | 7,000 | 2016-01-14T00:00:00.000 | [
"Mathematics"
] |
Transfer-recursive-ensemble learning for multi-day COVID-19 prediction in India using recurrent neural networks
The COVID-19 pandemic has put a huge challenge on the Indian health infrastructure. With a larger number of people getting affected during the second wave, hospitals were overburdened, running out of supplies and oxygen. Hence, predicting new COVID-19 cases, new deaths, and total active cases multiple days in advance can aid better utilization of scarce medical resources and prudent pandemic-related decision-making. The proposed method uses gated recurrent unit networks as the main predicting model. A study is conducted by building four models pre-trained on COVID-19 data from four different countries (United States of America, Brazil, Spain, and Bangladesh) and fine-tuned on India’s data. Since the four countries chosen have experienced different types of infection curves, the pre-training provides a transfer learning to the models incorporating diverse situations into account. Each of the four models then gives 7-day ahead predictions using the recursive learning method for the Indian test data. The final prediction comes from an ensemble of the predictions of the different models. This method with two countries, Spain and Bangladesh, is seen to achieve the best performance amongst all the combinations as well as compared to other traditional regression models.
Introduction
The COVID-19 pandemic has placed India's relatively limited healthcare resources under tremendous pressure.Owing to the nation wide lockdown starting from 24 March 2020, imposed by the Indian government, the number of COVID-19 cases in the first wave were limited.However, with the opening up of the cities, its transport systems and allowance of various festivities, India has faced a very severe second wave with the peak at almost 0.4 million new cases a day.Although the peak seems to be over, the actual number of people to be affected in the coming days is very difficult to determine.There had been an exponential rise [21] of new cases in the second wave and the third wave and as of 24 July 2022, there are close to 0.15 million active COVID-19 cases in India.
COVID-19 has disrupted demand projections, which help merchants and providers of consumer products and services determine how much to purchase or create, where to purchase goods, and how much to promote or sell.During the early stages of the pandemic, abrupt curfews and a migration to working from home prompted panic purchases of<EMAIL_ADDRESS>(D.Chakraborty); debayang.ju@gmail.com(D.Goswami<EMAIL_ADDRESS>(S.Ghosh<EMAIL_ADDRESS>(A.Ghosh<EMAIL_ADDRESS>(J.H. Chan); ELPWang@ntu.edu.sg(L.Wang) ORCID(s): 0000-0001-7273-1353 (D.Chakraborty); 0000-0002-6133-6937 (D.Goswami); 0000-0002-1691-761X (S.Ghosh); 0000-0003-1548-5576 (A.Ghosh) ious food products and household supplies.Some things were sold out, while others remained on the shelf.Insecurity abounds today on several levels.Certain items, such as toilet paper and frozen foods, are still scarce.Food retailers are stocking seasons' worth of basic necessities rather than days' worth in order to best prepare for winter season, when there could be a return of illnesses and people are anticipated to stay at home.Amidst all this, there are speculations of what the next pandemic would be.The COVID-19 pandemic has showed each country their limitations.
According to a worldwide trend, we can simply state that our existing medical capacity cannot meet the high health demands caused by the coronavirus pandemic [25].Infectious diseases often rise to pandemic level whenever the risk factors simultaneously happen.The circumstances can affect the availability of hospital beds, ICU beds, fans, PPEs and qualified medical staff across the country.It is therefore challenging for the authorities to supply all sectors of society with the required healthcare services.Indian medical system had similarly collapsed during the second wave of COVID-19 infection due to the high hospital admission rate.
If there was a prediction system available which could project a better estimate of the number of affected people much earlier on, then the authorities could have maintained stocks according to that.Researchers have proposed various models for predicting the COVID-19 cases.Various mathematical models have been used to predict and to understand the spread of the disease [29].Auto-regressive Integrated Moving Average (ARIMA) has been used [1,15,32] as the standard model to predict the behaviour of the infection curves in different countries.The model was able to capture the total case statistics because the number of total cases is seen to follow a standard exponential curve.On the contrary, the number of new cases each day is highly uncertain and involves a lot many variables and therefore, it is much more difficult to handle using ARIMA [20,25].Researchers [33] have therefore, explored the use of support vector machines to predict the daily COVID-19 cases.Recurrent neural networks (RNNs) have also been tested and have become the state of the art models for predicting the daily number of cases.Researchers [30] have compared various RNN models like Long Short Term Memory (LSTM) [2,13,23], Gated Recurrent Unit (GRU) and Bi-LSTMs.They have observed that these models are more robust than ARIMA or Support Vector Regression (SVR) [41].Transfer learning and ensemble modelling have also been applied to study the statistics of daily COVID-19 cases [6] and it has shown to perform even better than the standard LSTM-RNNs.Ensemble learning in conjunction with transfer learning has, however, not been tested before.It might capture trends of multiple countries and take advantage of the knowledge of infection spread trends that the test country has not experienced before.
As we have witnessed during the peaks of the infection waves that the medical supplies were generally distributed to an area based on the real-time daily new cases in that particular area/ hospital.It takes some time to get these supplies and meanwhile the patients might be critical [22].Therefore, a predictive method for the estimation of COVID-19 statistics for multiple days in advance, can provide a better framework for medical logistics.A predictive model in the similar direction has been proposed in this article.Some researchers [9,27,28] have attempted the multi-step prediction of cumulative COVID-19 cases.However, multi-step prediction of daily new cases, daily fatalities and total active cases simultaneously is difficult due to their chaotic nature.The proposed model alleviates these problems and gives a multi-variable (3 COVID-19 parameters: New Cases, New Deaths and Active Cases) multi-day prediction for the daily statistics.
The proposed method draws motivation from the concept of recursive learning.The aim of recursive learning is to establish a model that can learn to fill in the missing parts in its input.In prediction tasks, the recursive model trains on its predictions [43].This provides the input with a feedback from the output.The proposed method has been tested to give predictions for seven days ahead case.To achieve this, the proposed method uses a combination of several learning methodologies and integrates them into one.It has been shown that such a combined model is more efficient at providing multi-day forecasts than the existing standard models.
The contribution of this work can be highlighted along the following lines: • Introduction of a transfer learning scenario for incorporating COVID-19 spread behavior in different countries .
• Recursive learning for 7-day prediction i.e., using the predictions recursively for new predictions.
• Combination of the predictions from different models using a weighted ensemble.
Rest of the article is organised as follows: The preliminaries for the proposed method is outlined in details in Section 2 with special emphasis on transfer learning and recursive learning employed.Section 3 describes the proposed method.Section 4 contains the experimental results, along with a comparison with existing standard methods.Section 5 contains a discussion of the results, along with statistical significance testing of the models.Section 6 concludes this article with the scope for future research.
Preliminaries
This section describes the dataset and the basic building blocks which have been used in the proposed model.
Dataset
The data for this study has been taken from the database of Worldometers website [39].This website provides COVID-19 related data including new cases, new deaths, active cases, total tests etc. for 222 different countries all along the period of the pandemic.To predict the multi-day ahead COVID-19 cases in India we have considered data related to daily new cases, daily new deaths, and active cases for six different countries: the USA, Brazil, Spain, Bangladesh, Australia and India for the period from 15 February 2020 to 16 June 2022.
Recurrent Neural Networks
The current solution relies on gated recurrent units (GRUs) as the basic building blocks for multi-day prediction of COVID-19 parameters.Gated recurrent units and long short-term memory (LSTMs), as prevalent members of recurrent neural networks (RNNs), have the innate ability to capture trends and seasonality in time-series data [31].They are, therefore, the go-to methods for time series predictions.
GRUs are able to solve the exploding and vanishing gradient problem that is common for vanilla RNNs [7].With its reset gate and update gate, a GRU is able to decide what information to keep from the previous state and what information to pass on to the next state.This gives GRUs the ability to keep relevant information from much earlier in the sequence while removing information that is no longer relevant for the task at hand.For a more detailed description of GRU, one can refer to [7].
GRUs are one of the simplest recurrent neural network models and have been used for time series prediction tasks in multiple domains, e.g., for traffic flow prediction [10], energy load forecasting [17], stock market forecasting [3], air pollution forecasting [34].It has also been used for COVID-19 prediction with the help of deep learning based models [30].
The present work of multi-day COVID-19 prediction has been done using GRUs as the basic building blocks in order to harness its effective sequence modelling function and also to prevent over-fit in our relatively small dataset.
LSTMs are also used for time-series prediction in multiple domains [12,16,19,40].In the present work, we have compared the performances of GRUs with that of LSTMs, considering both as the basic building blocks.
Transfer Learning
Transfer learning is the scenario where a pre-trained model for one particular problem is applied to a second, different but related problem [36].Transfer learning tries to take advantage of what has already been learned in a problem and applies it to improve the generalization in another related problem.
The domain in which the model is trained is called the source domain and the domain in which the model is applied is called the target domain [24].The source and the target domain may be different enough but need to have some sort of a relation.The predictive model in the source domain needs to be similar to that of target domain in order for transfer learning to work.Transfer learning is mainly applied in such target domains where sufficient labeled data is not available [37].
Transfer learning has been applied in the COVID-19 scenario for different tasks.It has been used for classification of COVID-19 from non-COVID-19 patients by using chest CT images [26], for face-mask detection in public areas [14], COVID-19 cases and death forecasts using LSTMs [11] etc.
In the present work, transfer learning has been chosen for the task of COVID-19 case prediction to learn from the experiences of countries affected by COVID-19.Countries with different circumstances, different climates, different measures for infection control are chosen as the source domain and COVID-19 cases prediction for India is done as the target domain.In one of our previous works [6], it is seen that transfer learning has given better results for next day prediction for COVID-19 cases using LSTMs.In this present work, we are exploiting transfer learning with recursion for multi-day ahead prediction.The details are given below.
Recursive Learning
The GRU model built in this work is able to predict the next day parameter after looking at the parameters over a period of past days (called the look-back period).As mentioned, in order to achieve a multi-day prediction of COVID-19 cases, a recursive learning methodology is adopted as shown in Figure 1.
In a recursive way, the predicted output of the model is fed back to the input in the next step to obtain the subsequent prediction.As an example, in Figure 1, data for day 1 to day 4 is used as input to the model to predict the data for day 5.In the next step, the data for day 5 is added to the input and the data for day 2 to day 5 is taken as input to the model to predict the output for day 6.This process is repeated recursively till the required days of prediction is obtained.
This recursive learning methodology uses the sliding window approach with the previous predictions being used as a In this work, this recursive learning methodology works for 7 steps to predict the COVID-19 cases for the next 7-days.However, this process of recursion can be used for prediction of COVID-19 cases for any number of days in advance.
Ensemble Learning
Through ensemble learning one could exploit the unique abilities of multiple models in an integrated manner by combining the results obtained from various models.In this work, the results obtained through recursive learning approach initiated from various transfer-learnt models (trained on data from respective countries) are ensembled to obtain final predictions.Several ensemble techniques exist in literature [8,44].In the present approach, we have proposed a weighted ensemble technique.
Performance Metric
To assess the performance of the proposed approach, two metrics have been used for this study.The first is the relative mean squared error (R-MSE) and the second one is relative mean absolute error (R-MAE).Rather than considering actual value, calculating the error as a fraction of the actual value is seen to be effective.As the error value is compared relative to the actual value of the parameter, hence the term relative has been used for the standard error metrics of mean squared error and mean absolute error.
These metrics are defined as follows: where, ( ) is the predicted value on the ℎ day and ( ) is the actual value on the ℎ day. is the total number of days involved.Note that, R-MSE better reflects the error for higher deviations than that of R-MAE as it penalizes higher deviations with a greater error value.
For each of the three COVID-19 parameters predicted (daily new cases, daily new deaths and total active cases), these two errors values (as an average of all the prediction errors over the test set) have been shown in the results section.
Proposed Method
The proposed model is an ensemble of four different models pre-trained on data from four different countries (The United States of America (USA), Brazil, Spain and Bangladesh) in order to predict the COVID-19 daily new cases, daily new deaths and active cases for India for the next 7-days.The idea was to learn the infection spread in the worst affected country from each of the seven continents.
The USA and Brazil were obvious choices from the North American and South American continents due to high number of COVID-19 infections.The USA has witnessed the most number of cases.The first wave in the USA was prolonged and the second wave has resulted in an increased number of deaths per day.The third and fourth waves also have a similar pattern.The USA has not witnessed a plateau in the total number of cases after the first wave.However, the daily new cases have decreased considerably after the peak of the third wave.Brazil has the third highest number of cases and currently the death rate is 3,015 per million population, which is one of the highest in the world.However, the total cases curve in Brazil has a prolonged second wave and severe third wave.Spain has been chosen from the European continent due to its well marked first, second and third waves of infection as compared to the initial plateau of infections in Italy.Bangladesh has been chosen from the Asian continent to incorporate similar climatic conditions and being a neighbouring country of the test country, India.South Africa was first taken into consideration from the African continent, however it was not incorporated into the model as the number of cases in the waves of infection have been quite low as compared to the other countries selected.Australia was also taken into consideration, but was not introduced in the model due to the very late nature of infection with it still being in the first wave of infections.To incorporate population density into account, the data of each country is divided by the corresponding population density.
India witnessed a decline in the number of daily cases which suggested the ending of the third wave.However, cases have started increasing in some parts of the country again, signalling a possible fourth wave.This obviously brings us to the crucial question about the condition in India about the subsequent waves of infection.Now, since all of the four countries have shown different trends, it is not sure as to which path Indian trend would follow.This is why all possible combinations of these four countries were taken into account.More countries could have been taken for pre-training,
Step 1: Train-Test Splitting of Input Data
The data from the period 15 February 2020 to 31 December 2021 has been used for training the models for the individual four countries.This period has been chosen to take 80% of the total data for training.Indian data for the period 15 February 2021 to 31 December 2021 has been used for fine-tuning the transfer learning models before testing them on Indian data for the period 16 January 2022 to 16 June 2022.The remaining period of 01 January 2022 to 15 January 2022 for the Indian data has been used for crossvalidation in the ensemble weighted averaging as mentioned in the subsequent paragraphs.
Step 2: Forming the Model
Two RNN models (LSTMs and GRUs) have been taken independently as the building blocks for the proposed method.The proposed RNN models with parameters and their values chosen are given in Table 1.As mentioned earlier, the GRU and the LSTM models are trained and tested in order to do performance comparison of the two RNN types.
Step 3: Transfer Learning
As stated earlier, the proposed model consists of all sixteen possible combinations of four RNN networks of two types (GRUs and LSTMs), each of which is pre-trained on data from four different countries.To pre-train each model, the data of each of the countries is taken from 15 February Values of the parameters of the RNN models 2020 to 31 December 2021, as mentioned earlier.The models built on the individual countries need to be fine-tuned on Indian data in order to take into account the recent trend of COVID-19 infections in the target country, India.Therefore, the pre-trained models have been fine-tuned on Indian data for the same period 15 February 2020 to 31 December 2021.This method of pre-training followed by fine-tuning introduces a transfer learning [42] ability in the individual models.Li et.al. [18] has shown that transfer learning can improve forecasting models that are based on deep learning.They have built a source domain of 12 countries, combining their data, and tried to predict the confirmed cases per million for the target countries.However, the scope of the study is limited by the prediction of just one COVID-19 parameter and also being tested on a shorter time period (31/12/2019 to 31/05/2020).Whereas, in the proposed approach, three COVID-19 parameters are predicted over an advance period of 7 days using an ensembled approach of transfer-learnt recursive model on a larger time period.
Step 4: Recursive Learning
To obtain 7-day ahead predictions, we have incorporated recursive learning in the proposed model.If the COVID-19 parameters for the next 7-days is to be predicted at the ℎ day, in the first step, the COVID-19 parameters for the next day (( + 1) ℎ day) will be predicted by the model.This prediction for the ( + 1) ℎ day is then fed back to the input to make the new input frame for predicting the COVID-19 parameters for the ( + 2) ℎ day.This process is repeated 7 times to get the predicted values for the 7-days (( + 1) ℎ day to ( + 7) ℎ day).An example of the process involved is shown in the Figure 1.This method can be applied to predict the COVID-19 parameters for a different time period by changing the period setting.However, to do a stable comparison, the prediction period has been set at 7-days for this study.Experiments are also done to study the effect of varying the look-back period used for prediction.
Step 5: Ensemble
Once the predictions are obtained for subsequent 7-days, the predictions from the combination of models are aggregated using weighted averaging.The weights are calculated based on the relative mean squared error (R-MSE) obtained from model cross-validation on validation data.15 days of data for India (01 January 2022 to 15 January 2022) is kept aside for this validation task.
The relative mean squared error (R-MSE) for the validation data is calculated using the Eqs. 1.The weights ( ) for the Model are given by Eq. 3, and the final prediction on any date D is given by Eq. 4. We have also compared our proposed ensemble method with an equally weighted ensemble technique.
where, − ( ) is the relative obtained for the ℎ model on validation data, is the number of models involved in the ensemble.The transfer learnt model with less error is given a higher weight as per Eq. 3.
where, ( ) is the prediction by the ℎ model for date D. This is a weighted summation of the predictions from the transfer learnt models.
Experimental Results
In order to predict the cases on any date, we need to use a look-back period.This period, in this case, is the number of days our model looks at for doing the predicting task.Finding the optimum value of the look-back period is crucial for the proposed method.This is because, depending on the look-back period, the performance of the models varies rapidly.
It is to be noted that, since we are relying on a recursive learning based multi-day prediction, where the predicted values are used as inputs for the subsequent predictions, we cannot afford to take the look-back to be smaller than the number of days in the multi-day prediction task.This will result in the last few predictions (of the recursive learning methodology) being made only on predictions and not on any actual data.We have experimented with a wide range of look-back periods and a value of 14 gives the best results for all the three variables.
The values of R-MSE and R-MAE (averaged over 20 runs) obtained for all 16 combinations using the proposed model (using both GRUs and LSTMs) with those for the support vector regression (SVR), auto-regressive integrated moving average (ARIMA) and Facebook Prophet models are shown in the Tables 2 and 3.The results are shown separately for the 3 COVID-19 parameters predicted in this study (new cases, new deaths and active cases).For establishing the efficacy of fine-tuning on Indian data, results obtained for the above mentioned methods, using GRUs, without finetuning are also put in Table 4.The results put in this table are the average of 20 independent runs.It is seen that the combination model of Spain and Bangladesh gives the best results for multi-day forecasting of all the three predicted variables.
Table 3
Comparison of results i.e.Mean and Standard Deviation(STD) of 20 runs, from all possible combinations of four countries considered in this study.Similar configuration LSTM models, fine-tuned on Indian data, have been used in this case.
Indian data.Also, all the models seem to improve with the fine-tuning on Indian data as depicted in Tables 2 and 3 with respect to Table 4.As expected, in transfer learning, if a model pre-trained on data from the the source domain is finetuned on data from the transfer domain, the results seem to improve with respect to the model just pre-trained on the data from source domain [42].Since GRUs and LSTMs yield similar results as seen in Tables 2 and 3 , GRUs have been preferred over LSTMs for the present experimentation as they have lesser number of parameters with respect to LSTMs [4].Hence, for the rest of this discussion, results obtained only with the GRU networks are analyzed further.
New Cases
For the single country models, Bangladesh model gives the best results, followed by Brazil, Spain and USA.The model built using the data from Bangladesh is able to predict the trend in Indian data in a very accurate way.For the combination of two-country models, the presence of Bangladesh (with better trend tracking behaviour for Indian data) influences the performance in a positive way.Spain-Bangladesh combination gives the best result with the R-MSE of 0.0013 and R-MAE of 0.0177, and yields the best result amongst all the combinations.For the combination of more than twocountry models, the performance does not improve further than the Spain-Bangladesh model.Overall, the two-country combination of Spain-Bangladesh model gives the best performance.
New Deaths
For the single country models, Brazil model gives the best results, followed by Bangladesh, Spain and USA.Bangladesh is again one of the better models, similar to the case of new cases.For the combination of two-country models, Spain-Bangladesh model gives the best results with a R-MSE of 0.0021 and R-MAE of 0.0286.Addition of data from the other two countries is not able decrease this error any further.
Active Cases
Prediction of active cases gives the best results as compared to the other two parameters.For the single country models, Brazil gives the best results, followed by USA, Spain and Bangladesh.For all the other models, the results improve only marginally for the Spain-Bangladesh model with a R-MSE of 0.0009 and R-MAE of 0.017.Rest of the results are all similar with no further improvement in the error metric.
Analysis
For the two-country models, the Spain-Bangladesh combination (highlighted by light gray in Table 2), gives a lower R-MSE than Bangladesh alone.There is an improvement in the result when pre-trained Spain model is combined with the pre-trained Bangladesh model.
Once the combination of Spain-Bangladesh model has been built, further addition of models built with the other two countries (Brazil, USA) data does not reduce the error any further.This may be due to the following facts: Brazil has a high number of cases and has similar geography like that of India, the infections spread happened later than that of India.
Bangladesh has similar climatic conditions and people's behaviour like that of India.Also, the percentage of people vaccinated with respect to the total population is similar in India and Bangladesh.As a result infection trend in Bangladesh has a positive impact in predicting infection trend in India.It may also be noted that, both India and Spain were vigilant at the start of the pandemic and had imposed strict infection control measures like lockdowns, social distancing etc.One such study [5] also corroborated this and compared the spread of COVID-19 infection in Spain and India by analysing the policy implications using epidemiological and social media data.Spain was one of the early COVID-19 infected countries, which is already at the end of fourth wave of infections.Whereas, India is at the end of the third wave.Also the spread increases sharply and then falls rapidly in both India and Spain.Such similar characteristics might be responsible for the low error in predictions obtained for the models built with Spain data.
It is to be noted that, the India model shown in the Table 2 is one where the GRU model is built with India data and then trained on India data i.e., it does not involve any transfer learning.It can be clearly seen from Tables 2 and 4 that the transfer learnt models are better predictors of all the three COVID-19 parameters.Models built with support vector regression (SVR), both with polynomial and RBF kernels, are unable to predict the COVID-19 parameters with a good level of accuracy.Same is the case with ARIMA and Facebook Prophet [35].Different nature of infection spread in different waves is difficult to take into account for SVR Comparison of results for Spain-Bangladesh model (with fine-tuning), using the proposed weighted ensemble and equally weighted ensemble and ARIMA based predictions.This is seen from the high errors of prediction obtained using these models.Predicting daily new deaths is especially very uncertain.Presence of comorbidity, age etc. play a significant role in new deaths.Hence, the proposed method, with the advantage of transfer learning from Bangladesh and Spain data combined, is able to predict the number of daily new cases, daily new deaths and active cases with the least error.
Standard deviation (STD) of the results are also studied.STD values for 20 runs for the prediction of new cases, for all the models fine-tuned on India data, are given in light blue in Figures 3 to 5, showing clearly the Spain-Bangladesh combination to be the best model.
The variation of prediction error with look-back period for all the three parameters is shown in Figures 6, 7 and 8.The prediction error is seen to be the least for the look-back period of 14 for all the three cases.
As mentioned earlier in the present method, a weighted ensemble is used for combining the models built with individual countries.Performance of the weighted ensemble method is compared with that of an equally weighted ensem- ble (where simple average is taken for combining the models) and the results for the best model (Spain-Bangladesh) is shown in Table 5.It is seen that our proposed method with weighted ensemble performs better than the equally weighted ensemble approach.
Statistical Significance Testing
The proposed method was tested on a total of 167 test sets, with each set having a duration of 7 days (i.e., 7 day prediction).In order to do a statistical significance testing of the best model (Spain-Bangladesh model) obtained in our comparisons, it has been statistically tested against each of the other single country and two-country models using the Wilcoxon signed rank test.
More than two country models were not used for calculating the statistical significance as they can be thought of as extensions of the two-country models.Only results for the prediction errors of new cases has been used in this testing.
Wilcoxon signed rank test is a non-parametric test for doing hypothesis testing of paired dependent samples.More details on Wilcoxon signed rank test can be found in [38].For doing the statistical significance testing, 167 number of R-MSE values (for the 167 test sets) for two models involved are treated as 167 number of paired observations.Wilcoxon signed rank test has been used over paired t-test as the difference between the observations are seen to be not normally distributed.Here, Null hypothesis -H 0 : There is no difference between the model and Spain-Bangladesh model Alternate hypothesis -H 1 : There is a difference between the model and Spain-Bangladesh model Two tailed hypothesis with a significance level of 0.05 has been used for the testing.The Z-score for each of the one-country and two-country models when compared with Spain-Bangladesh model is given in Table 6.All the Zscores in the Table 6 are less than -1.96 which is the critical Z-score for a two-tailed test at a significance level of 0.05.Hence, it can be concluded that the Spain-Bangladesh model is statistically significant from the other models in consideration.
Conclusion and Future Work
The proposed method with the combination of Spain-Bangladesh models outperforms the other combinations as well as other traditional regression models considered in our experiment.This is because the proposed method leverages the capabilities of both transfer learning and ensemble learning while taking into account the excellent sequence modelling capabilities of GRUs.The multi-day ahead prediction using recursive learning provides an added benefit of knowing the COVID-19 statistics multiple days ahead.The proposed method has currently been tested only on India data.Similar study and prediction can be done for other countries by choosing countries with transfer learning relevance.A regional study of COVID-19 cases is also necessary and the proposed method can be extended for individual states of India by incorporating information of other Indian states or other countries with comparable features with that of the individual Indian state.Individual waves of infection can also be analysed by using the transfer learning phenomenon from other waves of infection.
Figure 2 :
Figure 2: Flowchart of the proposed method
Figure 3 :
Figure 3: Average (over 20 runs) R-MSE after fine-tuning (blue dot) along with the standard deviation (light blue bars) for new cases prediction for all the models
Figure 4 :
Figure 4: Average (over 20 runs) R-MSE after fine-tuning (blue dot) along with the standard deviation (light blue bars) for new deaths prediction for all the models
Figure 5 :
Figure 5: Average (over 20 runs) R-MSE after fine-tuning (blue dot) along with the standard deviation (light blue bars) for active cases prediction for all the models
Figure 6 :
Figure 6: Prediction error for new cases vs Look-back period
Figure 7 :
Figure 7: Prediction error for new deaths vs Look-back period
Figure 8 :
Figure 8: Prediction error for active cases vs Look-back period
Table 2
that the results are similar for both GRUs and LSTMs, with fine-tuning on Comparison of results i.e.Mean and Standard Deviation(STD) of 20 runs, from all possible combinations of four countries considered in this study.The GRU models have been finetuned on Indian data.Results are also shown for SVR, ARIMA and Facebook Prophet models.
Table 4
Comparison of results i.e. i.e.Mean and Standard Deviation(STD) of 20 runs, from all possible combinations of four countries considered in this study.The GRU models have not been fine-tuned on Indian data.
Table 6
Wilcoxon signed rank test Z-score of the models with respect to Spain-Bangladesh Model | 7,718.2 | 2021-08-20T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Comment on tc-2021-188
The authors present a very methodical, comprehensive examination of solid microinclusions in polar ice at a range of depths that shows the potential for such a systematic approach to answer many outstanding questions about the role of impurities in ice structure and evolution. The work represents the first investigation of solid microinclusions in fast moving polar ice, and uses the methods outlined by Eichler et al. (2017) to construct impurity maps of over 5000 micro-inclusions and allows a robust statistical analysis of the frequency of location of micro-inclusions within the ice microstructure at a level that has not been obtained to date, and helps shed light on several long-speculated processes.
The authors present a very methodical, comprehensive examination of solid microinclusions in polar ice at a range of depths that shows the potential for such a systematic approach to answer many outstanding questions about the role of impurities in ice structure and evolution. The work represents the first investigation of solid microinclusions in fast moving polar ice, and uses the methods outlined by Eichler et al. (2017) to construct impurity maps of over 5000 micro-inclusions and allows a robust statistical analysis of the frequency of location of micro-inclusions within the ice microstructure at a level that has not been obtained to date, and helps shed light on several long-speculated processes.
Future implications are particularly tantalizing…e.g., the examination of the impacts of mineralogy, grain boundary sliding, and precipitation and recrystallization on ice properties from the microscale to the mesoscale, which has been suggested in past literature, but with no prior methodology for proving definitively, is a very interesting consequence of this work.
The methods, results and conclusions are well articulated and presented, I have only a few minor specific comments and a small amount of very minor, technical corrections. Abstract, line 10: "Analysing the area occupied by grain boundaries in the respective samples shows that micro-inclusions are slightly more often located at or close to grain boundaries in half of all samples. Throughout all samples we find strong indications of dynamic recrystallisation, such as grain islands, bulging grains and different types of subgrain boundaries." I think understand this sentence, but it is slightly confusing to read it. (Took me a couple of times for it to make sense). I think just rewriting it as, "In half of all samples, microinclusions are more often located at or close to the grain boundaries by a slight margin (in the areas occupied by grain boundaries). Not sure that last bit is needed.
Pg 5, Figure 2 caption (and throughout): on the last line here, and in other areas throughout the text, you state that something is "rarely close" to the micro-inclusions. Is it possible to define what you mean by "close" and is it just the 300 micron buffer surrounding the grain boundaries that defines what "close" is?
Pg 6, Line 114 what were the criteria for choosing samples, ie., what CFA values, grain sizes and orientation? For example, it seems like the Bolling Allerod period was targeted for sampling, and samples in the Younger Dryas and before and after the Holocene, but what other criteria were used to choose depths of interest?
Pg 8, Line 194, Is it worth discussing how it is determined if microinclusions are plates or clathrate hydrates here? It is mentioned in the figure caption for Figure 4, but seems like there is some more details that could be added about that in the text.
Pg 13, Line 240, Not sure if this is planned for the future work, or what exactly it would look like, but is it possible to plot grain size evolution of NEEM vs. EGRIP? That would be interesting to see. I understand there are likely limits that can be made in the intercomparison due to depth/age mismatches and differences in sample sizes and resolution, but the location of EGRIP over the ice stream vs. NEEM would be very interesting to see.
Technical corrections:
Pg 2, line 26, "depend" should be "depends" Pg 3, line 62, I think you should add the word "microstructure" to describe localization, to differentiate it from the broader, cm to m scale localization…since that is a key point of this work, that you can actually identify the microstructural context of the solid impurities within the matrix vs. a CFA approach where you lose that information. In my opinion, it's good to point that out just to make clear. | 1,037.4 | 2021-08-06T00:00:00.000 | [
"Physics"
] |
Nafion: A Flexible Template for Selective Structuring
The peculiarities of crystal growth on a Nafion polymeric substrate from supersaturated aqueous solutions of initial substances were studied. The solutions were prepared based on deionized natural water and deuterium-depleted water. As was found earlier, in natural water (deuterium content 157 ± 1 ppm) polymer fibers are capable of unwinding towards the bulk of the liquid, while in deuterium-depleted water (deuterium content ≤ 3 ppm) there is no such effect. Since the distance between the unwound fibers falls in a nanometer range (which is close to the size of the unit cell of the crystal lattice), and these fibers are directed normally to the polymeric substrate, the unwinding can affect crystal growth on the polymer substrate. As was obtained in experiments with X-ray diffractometry, the unwound polymer fibers predetermine syngony of crystals, for which the unit cell is either a rectangular parallelepiped (monoclinic system) or an oblique parallelepiped (triclinic system). A quantitative theoretical model that describes the local interaction of the polymer substrate with the crystalline complexes is presented. Within this model, the polymer substrate can be considered as a flexible matrix for growing crystals.
Introduction
The basic interest in the perfluorinated polymer membrane Nafion™ is due to the wide use of this polymer in fuel cells for hydrogen energy.Being composed of perfluorinated hydrocarbon chains with side branches of perfluorinated polyether with -SO 3 H terminal groups and a general formula [1]: Polymers 2024, 16, x FOR PEER REVIEW 2 of 23 (1) it involves fragments with drastically different response to water and aqueous solutions, namely, a highly hydrophobic Teflon™ skeleton, hydrophilic terminal groups, and somehow transient bridges in between, since polyether fragments are slightly hydrophilic.Such a combination of properties should be reflected in a characteristic response of the polymer material to water-based environments, being sensitive to minor variations in the state and/or composition of the system.Particularly, both normal and reverse micelles based on the material particles can be formed under certain conditions.This is not the sole interesting feature of the material that can be used in diverse fields.Nafion matrix is biocompatible and demonstrates excellent mechanical and chemical resistance.
In water, the polymer swells, and through channels with a diameter of 2 to 3 nm are formed in the bulk matrix (see [1] for more details).Terminal -SO3H groups dissociate to produce hydronium ions: (1) it involves fragments with drastically different response to water and aqueous solutions, namely, a highly hydrophobic Teflon™ skeleton, hydrophilic terminal groups, and somehow transient bridges in between, since polyether fragments are slightly hydrophilic.Such a combination of properties should be reflected in a characteristic response of the polymer material to water-based environments, being sensitive to minor variations in the state and/or composition of the system.Particularly, both normal and reverse micelles based on the material particles can be formed under certain conditions.This is not the sole interesting Polymers 2024, 16, 744 2 of 21 feature of the material that can be used in diverse fields.Nafion matrix is biocompatible and demonstrates excellent mechanical and chemical resistance.
In water, the polymer swells, and through channels with a diameter of 2 to 3 nm are formed in the bulk matrix (see [1] for more details).Terminal -SO 3 H groups dissociate to produce hydronium ions: while the negatively charged residues provide the possibility of ion transport [2].Here, R stands for the polymer chain.The nanoscale structure of the channels produces conditions for the separation of H + and OH − ions at the two membrane sides which is utilized in hydrogen cells [3].The mechanisms of ion separation (see, e.g., review [4]) and the ion exchange processes inside the channels (e.g., [5]) are extensively studied.In both cases, conformations of the side perfluorinated vinyl ether-based chains should play an important role.
One of the techniques used for the investigation of the behavior of Nafion is luminescent spectroscopy (see, e.g., review [6]).Our experiments particularly showed that the character of the Nafion swelling in water depends on the deuterium content.For example, in [6,7] the swelling of a Nafion plate in water samples with a deuterium content of 3 ppm (so-called deuterium-depleted water, DDW) to 10 6 ppm (heavy water) was investigated, and the distance between the polymer surface and the most distant area (with respect to the polymer surface) in the bulk water, where the luminescence is still nonzero, was measured.It turned out that in natural water (with a deuterium content of 157 ± 1 ppm, see [8]), the luminescence intensity is not leveled off up to distances of about 500 µm, while in DDW it dropped to zero at around 5 µm.As was found in [9], the surface bundles of polymer fibers of Nafion which is brought in contact with natural water are oriented basically normally to the Nafion-water interface, whereas in the environment of water vapor, these bundles are oriented tangentially to the interface.In [9], the Nafion surface was studied with an atomic force microscopy, which enabled the authors to estimate the mean distance between the fibers oriented normally to the polymer surface as several nanometers.These results indirectly confirm an idea about the partial unwinding of the polymer fibers in water.
The unwinding of the polymer fibers should result in some conformational changes in the polymer matrix.We assume that terminal SO 3 H groups are no longer tightly surrounded by more or less hydrophobic fiber segments, but rather brought into contact with water molecules.Thus, in addition to the aforementioned dissociation of SO 3 H groups, the internal configuration of the groups and their close neighborhoods (i.e., internuclear distances) should change, which means that the ground and excited electronic states should change as well.It is straightforward to assume that the absorption spectra of Nafion specimens soaked in deionized natural water and DDW should be different.Furthermore, if the electronic configuration of the polymer matrix depends on the deuterium content in water, this should manifest itself in the deposition processes at the Nafion interface, particularly in the growing of crystals at this interface as a substrate.
The goal of this work is to verify that upon swelling Nafion in water with different isotopic compositions, the absorption spectrum should depend on the deuterium content.The characteristic time of soaking at which this dependence is revealed should be found.To achieve this, we plan to conduct studies of the time dynamics of the absorption coefficient of Nafion during soaking in natural water and DDW.In addition, here we describe an experiment on growing crystals from supersaturated aqueous solutions of various substances on the Nafion substrate or on a smooth glass surface; aqueous solutions were prepared based on natural deionized water and DDW.Finally, we present the results of measuring the deposition rate from a supersaturated solution onto a Nafion substrate and a glass surface.A theoretical model that describes the local interactions of the polymer matrix with substances dissolved in water allowed us to substantiate the experimental results at a semi-quantitative level.
Methods and Materials
In the experiments, we used Nafion N117 plates with a thickness of 175 µm, purchased from Sigma Aldrich, St. Louis, MO, USA.Two kinds of water samples were used.One test liquid was deionized water (deuterium content 157 ± 1 ppm) with a resistivity of 18 MΩ•cm at 25 • C refined with a Milli-Q apparatus (Merck KGaA, Darmstadt, Germany).The other test liquid was deuterium-depleted water (DDW) with a deuterium content ≤ 3 ppm, purchased from Sigma Aldrich, St. Louis, MO, USA.The absorbance of Nafion in deionized natural water and DDW was measured with a PB 2201 spectrophotometer (SOLAR, Minsk, Belarus) with the following characteristics: spectral range is 190-1100 nm, spectral gap is 0.2-2.0nm with a step of 0.1 nm, scanning speed is 5-10,000 nm/min, and wavelength setting accuracy is ±1.0 nm.
The following crystalline substances have been investigated: potassium chloride KCl, sodium acetate NaCH 3 COO, copper sulfate CuSO 4 , as well as sucrose C 12 H 22 O 11 .All the substances were purchased from Sigma Aldrich (St. Louis, MO, USA).
For the investigation of the peculiarities of the crystals which were grown on the Nafion substrate from the supersaturated aqueous solutions of these salts, the techniques of X-ray diffractometry (XRD) and refractometry were used.In these experiments, a Nafion N117 plate with an area of 3 × 2.5 cm 2 was placed in a Petri dish filled with an aqueous solution of the tested salt with a volume of 25 mL and a concentration close to the saturation threshold at 20 • C. The solutions were prepared based on either natural deionized water or DDW.In the reference experiments, the same solutions were poured into the Petri dish in the absence of a Nafion plate.Uncovered Petri dishes were placed in a styrofoam box where water was evaporated at constant temperature, and the salt solution became oversaturated.A picture of the CuSO 4 crystal growth on the Nafion plate is shown in Figure 1.X-ray patterns of the crystal deposits were obtained with a Bruker D8 Discover DaVinci Design diffractometer.The characteristics of this diffractometer are the follow The source of radiation was a Siemens KFL ceramic X-ray tube with a focus area of 12 mm 2 .The recording mode was as follows: CuKα radiation, Kβ-filter, U = 40 kV, I mA, Bragg-Brentano geometry, Soller collimators at 2.5°, a divergence slit of 0.638 LYNXEYE detector, a scanning range 2θ = 10-65°, a scan step of 0.01°, and an expo time is 7.5 s at each step.The spectra were processed with EVA software, version 2.1 interpreted with the use of a PDF-2 database, 2011 version.
In addition, the rates of crystal growth on both smooth and polymer substrates X-ray patterns of the crystal deposits were obtained with a Bruker D8 Discover A25 DaVinci Design diffractometer.The characteristics of this diffractometer are the following.The source of radiation was a Siemens KFL ceramic X-ray tube with a focus area of 0.4 × 12 mm 2 .The recording mode was as follows: CuKα radiation, Kβ-filter, U = 40 kV, I = 40 mA, Bragg-Brentano geometry, Soller collimators at 2.5 • , a divergence slit of 0.638 mm, LYNXEYE detector, a scanning range 2θ = 10-65 • , a scan step of 0.01 • , and an exposure time is 7.5 s at each step.The spectra were processed with EVA software, version 2.1, and interpreted with the use of a PDF-2 database, 2011 version.
In addition, the rates of crystal growth on both smooth and polymer substrates were estimated.For this purpose, the n(t) refractive indices of the solutions were measured every 10 h with the use of an Abbe refractometer, Kruss AR4 (A.KRÜSS Optronic GmbH, Hamburg, Germany).For that purpose, a drop of the test solution with a volume of 5 µL was taken with an Acura 825 sampling microtube (Socorex Isba SA, Ecublens, Switzerland) and placed on the surface of the main measuring prism of the refractometer.The value of the refractive index was measured at the wavelength λ = 589.3nm.
For the interpretation of experimental results, non-empirical quantum chemical simulations were used.Model fragments of a Nafion structure were calculated at the DFT level with the use of B3LYP hybrid exchange-correlation functional and 6-31G(d,p) extended double zeta basis set.To take account of the dispersion interactions that may be important within the perfluorinated segments of the model structures, D3+ Grimme correction was added.Such an approach was repeatedly shown to adequately describe the electron density distribution of both hydrocarbons or substituted hydrocarbons and water clusters due to the balanced account for the hydrogen bonding, electrostatic, and dispersion effects.The large atomic sizes of the model systems along with their spatial compactness make the usage of larger basis sets impossible (because of the rapidly decreasing parameter of the linear independence of the basis functions) and unnecessary (because the role of additional functions both with smaller exponents and larger angular momenta is played by the tails of numerous basis functions centered on a large number of more or less distant nuclei of the model system).
For the analysis of the conformation changes of Nafion in the presence of water, complex systems that involved up to 60 water molecules were considered.For estimating the hydration energies, the difference between the energy of the Nafion-water complex and the sum of the energies of the individual model Nafion fragment and the relaxed water cluster were found, the counterpoise corrections for the basis set superposition errors being taken into account in a conventional variant.
The possible formation of salt fragments on the surface of a Nafion membrane was mimicked by cluster systems, which involved a model Nafion structure, with up to five cation-anion pairs of the salt molecule initially arranged as in a crystal lattice, and the corresponding amount of water molecules.Here, restrictions on the positions of water molecules were imposed in order to prevent their agglomeration via H-bond formation, which was found to be probable and actual in the systems that comprise a relatively small number of particles insufficient for the formation of ordered arrangements typical of large ensembles and crystals.In the absence of salt or sucrose particles, there was no position or symmetry restriction during the optimization of the structures studied.In the latter case, the correspondence of the structures identified as a result of the optimization procedure to the local minima of the adiabatic potential was confirmed by normal-coordinate analysis.
To judge the relative stability of the systems, not only their total electronic energies (E) were estimated, but also thermal corrections were taken into account, and relative ∆G vib Gibbs energy values were found, which included the zero-point contributions and the vibrational energy increments (all the rotational and translational increments were neglected to mimic the actual highly restricted reciprocal mobility of the fragments of a polymer chain).All simulations were carried out with the use of the Firefly 8.2 quantum chemical program [10] and visualized with Chemcraft software, version 1.8 (build 654b) [11].
Absorption Spectra of Nafion When Soaked in Natural Water and DDW
First of all, let us consider the absorption spectra of a dry Nafion specimen and a specimen swollen in water.Insofar as swelling is a response predetermined by hydropho-Polymers 2024, 16, 744 5 of 21 bic/hydrophilic interactions and to a certain extent dependent on the flexibility of the hydrogen-bond network of water and the relative mobility of protons, the effects produced by different water samples may differ.We compared the behavior in natural deionized water and deuterium-depleted water.The absorption was analyzed in a broad range of wavelengths of 200 to 900 nm where Nafion is known to absorb [9]. Figure 1 shows the differential spectral functions ∆(t) of the Nafion specimens swollen in two different water samples, where: Here, G 1 is the spectrum of a dry Nafion specimen with a thickness of 175 µm; G 2 is the spectrum of the water sample Nafion specimen was soaked in; and G 3 (t) is the actual spectrum of a swollen Nafion specimen which was soaked for a time of t.Here, t is a parameter, the values of which were varied from 1 to 4 h.In fact, the G 1 − G 3 (t) difference stands for the changes in the absorbance of Nafion upon its swelling in water.The absorbance decreases because of the formation of pores in the polymer matrix during soaking and filling of the pores with water, the absorption of which in the visible and near-UV ranges is negligible.Insofar as the water absorbance G 2 is added to the above difference, Equation (3) reflects the changes in the absorbance of the polymer matrix itself during its swelling.
It was found that when a Nafion specimen was immersed in a DDW sample, its absorption remained nearly unchanged with time (Figure 2b), whereas contact with natural deionized water caused a gradual change in the spectrum (Figure 2a).The absorbance decreases with time within the wavelength range of 200 to 500 nm.Any change in the absorption reflects changes in the electronic structure of the specimen, which, at the same time, causes changes in the nuclear configuration.Thus, the decrease in absorbance in a relatively broad spectral range can be predetermined by the changes in the general conformation of chains when the neighborhoods of the groups of different chemical nature and, hence, their own structural parameters are changed.However, the absolute decrease in the absorbance is nearly the same within a subrange of 200 to 400 nm, which may mean that the absorption in this range is predetermined by the spectral response of the groups of the same kind with different internal configurations.Therefore, it is reasonable to consider an integral absorbance.
Figure 3 shows the time dependence of the total absorbance of the specimens within the range of 200 to 600 nm obtained by numerical integration of the spectra shown in Figure 2. The duration of soaking was varied from 1 to 4 h.This result even more clearly shows that natural deionized water produces a more pronounced effect on the Nafion polymer.The nature of the changes cannot be determined from such an experiment, but it can be said that the change in the deuterium content in water affects the character of the interaction between the water and the polymer.It is the surface state that, on one hand, is probably changed in water samples with a standard deuterium content and, at the same time, should affect any superficial processes such as deposition of substances.Then, the hypothesis about the changes in the surface state of the polymer and the character of the changes can be checked in an experiment in which different substances are crystallized on a Nafion surface being deposited from the corresponding supersaturated solutions.
This idea is further supported by the following rationale.If distances between Nafion fibers in water are about several nanometers (as suggested in [12]), and the edges of crystal lattice cells are no larger than a nanometer, the changes in the surface state of the polymer in water should affect the character of crystallization.
in the absorbance is nearly the same within a subrange of 200 to 400 nm, which may mean that the absorption in this range is predetermined by the spectral response of the groups of the same kind with different internal configurations.Therefore, it is reasonable to consider an integral absorbance.
Experiments with X-ray Diffractometry of Crystalline Deposits on Nafion Substrate
The results of the X-ray diffractometry (XRD) of salt deposits formed in the solutions prepared with the use of natural deionized water either in the presence or in the absence of a Nafion plate in a Petri dish are shown in Figures 4-7.Black curves correspond to the absence of a Nafion plate in a Petri dish, while red ones correspond to the presence of a plate in the dish.The abscissa-axis values are doubled X-ray diffraction angles, and the ordinate-axis values are the photocounts (the intensity of spectral signals).It is worth noting that there was no difference in the diffraction patterns obtained for the deposits formed on a smooth surface (in the absence of a Nafion plate) in the solutions based on natural deionized and deuterium-depleted water.Furthermore, there was no difference Figure 3 shows the time dependence of the total absorbance of the specimens withi the range of 200 to 600 nm obtained by numerical integration of the spectra shown i Figure 2. The duration of soaking was varied from 1 to 4 h.This result even more clearly shows that natural deionized water produces a mor pronounced effect on the Nafion polymer.The nature of the changes cannot be dete mined from such an experiment, but it can be said that the change in the deuterium con tent in water affects the character of the interaction between the water and the polymer. is the surface state that, on one hand, is probably changed in water samples with a stand ard deuterium content and, at the same time, should affect any superficial processes suc as deposition of substances.Then, the hypothesis about the changes in the surface state o the polymer and the character of the changes can be checked in an experiment in whic different substances are crystallized on a Nafion surface being deposited from the corre sponding supersaturated solutions.
This idea is further supported by the following rationale.If distances between Nafio fibers in water are about several nanometers (as suggested in [12]), and the edges of crysta lattice cells are no larger than a nanometer, the changes in the surface state of the polyme in water should affect the character of crystallization.
Experiments with X-ray Diffractometry of Crystalline Deposits on Nafion Substrate
The results of the X-ray diffractometry (XRD) of salt deposits formed in the solution prepared with the use of natural deionized water either in the presence or in the absenc of a Nafion plate in a Petri dish are shown in Figures 4-7.Black curves correspond to th absence of a Nafion plate in a Petri dish, while red ones correspond to the presence of plate in the dish.The abscissa-axis values are doubled X-ray diffraction angles, and th
Dynamics of the Refractive Index of a Supersaturated Solution during the Formation of a Crystal on the Nafion Surface
To compare the dynamics of crystalline deposition on the polymer substrate with the supposedly unwound polymer fibers and on a smooth glass surface, it is reasonable to study the changes in the physical characteristics of specimens with time in the presence and in the absence of a Nafion plate in the Petri dish.As an illustrative substance, copper sulfate was selected due to the formation of regular crystal lattices of definitely different symmetry on different substrates, and as a physical characteristic, an index of refraction was taken.Figure 8 shows the measured n(t) time dependences for the CuSO4 solutions based on natural deionized water when the deposits were formed on a Nafion plate and on the smooth surface of a dish.
Dynamics of the Refractive Index of a Supersaturated Solution during the Formation of a Crystal on the Nafion Surface
To compare the dynamics of crystalline deposition on the polymer substrate with the supposedly unwound polymer fibers and on a smooth glass surface, it is reasonable to study the changes in the physical characteristics of specimens with time in the presence and in the absence of a Nafion plate in the Petri dish.As an illustrative substance, copper sulfate was selected due to the formation of regular crystal lattices of definitely different symmetry on different substrates, and as a physical characteristic, an index of refraction was taken.Figure 8 shows the measured n(t) time dependences for the CuSO 4 solutions based on natural deionized water when the deposits were formed on a Nafion plate and on the smooth surface of a dish.As can be seen in Figure 3, the integral absorbance in natural deionized water drops As can be seen in Figure 3, the integral absorbance in natural deionized water drops after a certain characteristic duration of soaking t char = 2 h, whereas in a DDW sample, the integral absorbance remains nearly constant.Thus, the time of growing a rigid brush of unwound fibers on the surface of Nafion swelling in natural water is apparently about 2 h.Since the time of formation of a crystalline deposit from a supersaturated solution on a smooth glass substrate and on the surface of a polymer was several days, we can say that in the experiments on growing crystals from supersaturated solutions based on natural deionized water, unwinding of polymer fibers should manifest itself, which can affect the shape of the unit cell of the crystalline deposit on the polymer substrate.In the case of potassium chloride, which crystallizes to form a cubic lattice solely that does not involve water molecules, no difference was found between the diffraction patterns of the crystals grown on smooth and polymer substrates (Figure 4).
For sodium acetate, again both diffraction patterns recorded for the deposits formed in the presence and in the absence of a Nafion plate in the Petri dish unambiguously reflected the appearance of a regular trihydrate (Figure 5).Note also that while the diffraction patterns overlap and nearly coincide in the case of KCl crystallization, in the case of NaCH 3 COO•3H 2 O we observe a broadening and a slight shift of the profile.
In the case of copper sulfate solutions based on the natural deionized water, a clear difference between the formation of crystals was observed.As was mentioned above, CuSO 4 crystals grown in a supersaturated aqueous solution can be either pentahydrates (CuSO 4 •5H 2 O) with a triclinic lattice or trihydrates (CuSO 4 •3H 2 O) with a monoclinic lattice.As follows from the data shown in Figure 6, it is copper sulfate trihydrate that grows on a Nafion polymeric plate, while a smooth surface promotes the formation of the pentahydrate.Let us stress that in DDW-based solutions a crystalline deposit of copper sulfate pentahydrate with a triclinic lattice was formed irrespectively of whether a Nafion plate was placed in the Petri dish or not.
Finally, drastic differences in the formation of deposits from the saturated solutions were discovered for sucrose C 12 H 22 O 11 .It is well known that sucrose exists in the form of monoclinic crystals.When melted sucrose is cooled down, a transparent amorphous bulk named caramel is formed.Judging from the results obtained, it is caramel that forms as a result of the deposition on a Nafion plate (Figure 7).This means that in this case there is something that prevents an ordered deposition of particles., and this is probably the existence of more or less extended polymer fibers in the nucleation region which promote the strong disordering of crystal lattice cells and the resulting amorphizing.
Despite its relative compactness, the set of compounds we use can be considered as quite representative, since they have principally different syngonies, different molecular sizes, and different kinds of involvement of water molecules.Potassium chloride crystallizes with no lattice water, and eight (absolutely equivalent) K. ..Cl distances of about 3.15 Å, while Cl. ..Cl distances are proportional to 3.63 and 5.14 Å depending on the lattice direction.Sodium acetate can be anhydrous, but its typical stable form involves water molecules, each sodium atom having five neighboring molecules at a mean Na. ..O distance of 2.42 Å; but the local coordination is not highly symmetric, since the totally six oxygen neighbors of sodium (those of five water molecules and one acetate group) form a distorted octahedron with the Na. ..O distances varying in a range of 2.34 to 2.50 Å, which is quite typical of sodium [22].Copper sulfate hydrates are very regular and at the same time involve water molecules.Additionally, the counterion in this case is complementary to the terminal SO 3 H groups of Nafion.As to the structural parameters, in trihydrate, the first shell of each copper atom involves three water molecules and three sulfate groups, whereas in pentahydrate, there are four water molecules and two sulfate groups, sulfates opposing each other in the extended octahedral-like configuration.Thus, the latter configuration can be treated as the one with the highest local symmetry among all three salt hydrates, while that of sodium acetate can be treated as the least structured.Finally, sucrose C 12 H 22 O 11 is a relatively large molecule that involves penta-and hexamolecular cycles with numerous hydroxyl groups, which may cause certain spatial effects.
As shown below, all of the experimental results can be explained based on the nanoscale consideration of the systems.The most interesting result concerns copper sulfate.Because of the different syngony of its tri-and pentahydrates, the initial hypothesis was the following.The presence of a Nafion specimen in a Petri dish probably favors the formation of crystal lattices of a particular kind.Insofar as the key difference between the tri-and pentahydrate is the equality of one lattice angle to 90 • in the former case, it was reasonable to assume that the orientation of the deposit growing on a Nafion surface is somehow spatially restricted in the tangential direction.This may be possible if there are some Nafion fibers that are oriented normally to the mean surface.This general peculiarity should be manifested in other situations as well.Two other salts considered in the work were potassium chloride and sodium acetate, which are crystallized (as noticed above) in cubic and monoclinic lattices.Both crystal lattices are complementary to the normal orientation of Nafion fibers with respect to the mean surface of the specimen, which means that the presence of Nafion may only either slightly distort or, by contrast, support the formation of those lattices depending on the complementarity of the parameters of the crystal lattice cells and the actual distances between the Nafion fibers.However, the crystal lattice of sucrose is of the same symmetry as that of the copper sulfate trihydrate.Then, the normal orientation of the fibers is not the sole condition that predetermines the crystal growth, and the arrangement of the fibers may be no less important.
To check the idea, non-empirical simulations of the model Nafion fragments either individually or in combination with water molecules and cation-anion salt pairs were carried out.In accordance with the general Formula (1), in which m = 6.5 for Nafion N117 used in this work, a fragment selected for modeling involved the perfluorinated hydrocarbon skeleton with two side perfluorinated vinyl ether chains terminated by SO 3 H groups and two terminal methyl groups, so that the general composition of the fragment was 9).At first, various conformations of the model were considered.formation of those lattices depending on the complementarity of the parameters of the crystal lattice cells and the actual distances between the Nafion fibers.However, the crystal lattice of sucrose is of the same symmetry as that of the copper sulfate trihydrate.Then, the normal orientation of the fibers is not the sole condition that predetermines the crystal growth, and the arrangement of the fibers may be no less important.
To check the idea, non-empirical simulations of the model Nafion fragments either individually or in combination with water molecules and cation-anion salt pairs were carried out.In accordance with the general Formula (1), in which m = 6.5 for Nafion N117 used in this work, a fragment selected for modeling involved the perfluorinated hydrocarbon skeleton with two side perfluorinated vinyl ether chains terminated by SO3H groups and two terminal methyl groups, so that the general composition of the fragment was CF3C(F)(O-CF2CF(CF3)OCF2CF2SO3H)(CF2)14C(F)(O-CF2CF(CF3)OCF2CF2SO3H)CF3 (Figure 9).At first, various conformations of the model were considered.Because of the very large number of the degrees of freedom of the system considered related to the internal rotations around all single bonds, it is nearly impossible to find the absolute energy minimum of the system.However, judging from the known peculiarities of hydrocarbons, it is clear that mutual rotations of the neighboring functional groups, which involve atoms of the same kind (fluorine in our case), result in energetically either Because of the very large number of the degrees of freedom of the system considered related to the internal rotations around all single bonds, it is nearly impossible to find the absolute energy minimum of the system.However, judging from the known peculiarities of hydrocarbons, it is clear that mutual rotations of the neighboring functional groups, which involve atoms of the same kind (fluorine in our case), result in energetically either equivalent or close minima.The small differences appear due to the interactions with the more distant groups, all of which are of the prevailing dispersion nature.Such interactions provide contributions of no larger than a kcal/mole.At the same time, there are SO 3 Hended chains, which should interact electrostatically; and these effects are much larger.Then, any set of the structures with the principally different arrangement of the side chains can give tentative estimates of the relative stability of those conformations.In any actual system, numerous interactions between the neighboring groups should inevitably distort any such structure; but the general energy trends should be the same.And the first result obtained is reasonable and quite expected.The more folded the branches and the closer the terminal sulfonate groups to each other, the lower the energy of the system.It decreases on going from (a) to (b) and (c) configurations in Figure 9.
When the SO 3 H-ended chains are initially kept at the largest possible distance from each other (taking into account that the structure should be extended in both directions, and any group has neighbors from both sides, the mean angles between the direction of the hydrophobic backbone and its polyether side chain should not exceed 90 • ) and extended to the utmost degree, the energy of the system is the highest (Figure 9a).When the branches are initially intorted, they tend to approach each other to minimize the net dipole moment (Figure 9b); and the electronic energy difference between these two structures is ca.15 kcal/mol.Finally, if the terminal groups can form an H-bond, the stabilization of the system is maximum.The structure shown in Figure 9c is lower than the previous one by another 10 kcal/mol, which makes it already definitely favorable compared to the one with extended terminal branches (with an energy difference of ca.−25 kcal/mol).When thermal increments to the energy determined also by the different folding of the backbone (which becomes progressively less extended as the SO 3 H-ended chains become involved in the interaction with each other) are taken into account, the energy differences between the structures are decreased, but remain quite substantial: the ∆G vib relative Gibbs energies are 0 vs. −11 vs. −19 kcal/mol for the (a), (b), and (c) structures, respectively.
Thus, one can assume that in a dry state, Nafion fibers should be packed in such a way that their SO 3 H-ended chains are intorted as strongly as possible (in the presence of the neighboring segments), while the sulfonate groups themselves tend to approach each other to form hydrogen bonds, which provide closed segments within the hydrophobic matrix.Naturally, the number of chains (and sulfonate groups) involved in one H-bonded knot can be larger than two, but restricted because of the spatial limitations, and the additional groups can be either those from the adjacent fibers or of the neighboring segments of the same fiber.This makes the whole structure quite strongly entangled.Now, let us turn to the hydration of a Nafion plate.To mimic the effect produced by water molecules, it is reasonable to restrict the consideration to a single-chain segment of the above Nafion model, i.e., to just one side chain connected to the perfluorinated backbone, and analyze what happens when the number of water molecules is gradually increased.Again, it is absolutely impossible and unnecessary to find all the local arrangements of water molecules around the model fragment.The most important conditions to be met are the reasonably large number of hydrogen bonds formed between water molecules and the smallest possible number of OH groups uninvolved in the bonds.The former condition corresponds to the largest possible contribution of the H-bond energies (each no less than 5 kcal/mol) to the total energy of the system, while the latter is the condition of the overall stability of the system against external perturbations.
When the number of molecules is small, they gather around the hydrophilic SO 3 H group (Figure 10a) and form a kind of a hat with an almost planar abat-jour visually parallel to the nearest contour of fluorine atoms.In the case of 16 molecules, which is sufficient for the formation of the first hydration shell around the SO 3 H group, the group dissociates (to form a hydronium ion and a negatively charged -SO 3 − residue), and the corresponding energy decrease equals 49 kcal/mol, although the formal number of hydrogen bonds stabilizing the system is increased only by one, from 25 in the relaxed water cluster to 26 in the cluster bound to a hydrophilic head of the model Nafion fragment.Note that here the reference water cluster is the one that formed upon the relaxation of the aqueous coat upon removal of the model Nafion structure from it.If we would consider the lowestenergy water cluster composed of the same number of molecules, the energy gain upon the hydration of a model Nafion fragment is smaller, but insubstantially, by 5 kcal/mol, because the total number of hydrogen bonds that stabilize the cluster is larger only by one.The above figures mean that the H-bonds themselves are strengthened upon the reorganization of water molecules around the appeared proton and the negatively charged -SO 3 − residue, and the mean energy per one terminal group at a low water content can be estimated as ca.45 kcal/mol.When the number of water molecules is increased nearly two-fold (Figure 10b) and they are initially randomly arranged around the model Nafion fragment, the number of molecules in the close neighborhood of the -SO3 − residue is not substantially increased.The crown of its hat becomes thicker, and concurrently a water shell forms over the fluorocarbon chain separated from it by about 2.5 Å on the average.Water molecules are oriented in such a way that only some of the peripheral ones have dangling OH groups, while all the protons of the so-to-speak inner-structure molecules are involved in hydrogen bonds.This extension of the water shell corresponds to the formal total hydration energy of the model Nafion structure of 24.5 kcal/mol if the reference water cluster corresponds to a compact fragment of the bulk H-bond network.At the same time, if we would take the water cluster obtained upon the relaxation of the hydration shell of a model Nafion structure as the reference, the estimated energy gain would be much larger, about 50.4 kcal/mol.It means that when water molecules are rearranged around fluorocarbon chains, much energy is spent on the distortion of the original H-bond network of water.If it is compensated by the efficient hydration of hydronium ions and -SO3 − residues, the process is thermodynamically possible, but it gradually becomes less favorable as the length of fluorocarbon chains to be coated is increased.This trend becomes ultimately When the number of water molecules is increased nearly two-fold (Figure 10b) and they are initially randomly arranged around the model Nafion fragment, the number of molecules in the close neighborhood of the -SO 3 − residue is not substantially increased.The crown of its hat becomes thicker, and concurrently a water shell forms over the fluorocarbon chain separated from it by about 2.5 Å on the average.Water molecules are oriented in such a way that only some of the peripheral ones have dangling OH groups, while all the protons of the so-to-speak inner-structure molecules are involved in hydrogen bonds.This extension of the water shell corresponds to the formal total hydration energy of the model Nafion structure of 24.5 kcal/mol if the reference water cluster corresponds to a compact fragment of the bulk H-bond network.At the same time, if we would take the water cluster obtained upon the relaxation of the hydration shell of a model Nafion structure as the reference, the estimated energy gain would be much larger, about 50.4 kcal/mol.It means that when water molecules are rearranged around fluorocarbon chains, much energy is spent on the distortion of the original H-bond network of water.If it is compensated by the efficient hydration of hydronium ions and -SO 3 − residues, the process is thermodynamically possible, but it gradually becomes less favorable as the length of fluorocarbon chains to be coated is increased.This trend becomes ultimately clear at the larger amount of water.
When the number of water molecules is increased to half a hundred (Figure 10c) and they are initially randomly distributed around a model Nafion fragment, they can already form a closed monomolecular layer with the -SO 3 − residue naturally built in this layer and the fluorocarbon chain residing in a kind of a tunnel; and a continuous network of 85 H-bonds is observed.However, this variant is not energetically favorable.Formation of such a water coat requires a dramatic distortion of the hydrogen-bond network of water.Instead of a compact ensemble of water molecules stabilized by a three-dimensional network of intermolecular bonds (Figure 11a), a strongly expanded two-dimensional bubble (Figure 11b) should appear, which is not additionally stabilized by the interactions with the structure it covers except for the coordination to an -SO 3 − residue.Therefore, the energy difference between the hydrated Nafion fragment (Figure 10c) and a combination of the individual Nafion fragment and the relaxed water cluster (Figure 11a) equals 48 kcal/mol.Formally, this difference can be explained by the smaller number of hydrogen bonds the 2D bubble is stabilized with, namely 85 vs. 94 in the 3D ensemble, which accounts for about 5.3 kcal/mol per one additional bond.This is not an individual bond energy because of the collective effects typical of hydrogen-bond networks.Nevertheless, this is a reasonable estimate of the energy that should be supplied to the water system in one way or another to provide the necessary reorganization of hydrogen bonds.Note that it is nearly equal to the energy liberated upon the hydration of the -SO 3 − residue, which means that the extension of hydration shells over the fluorocarbon chains is thermodynamically possible only at the cost of the hydration of hydrophilic segments, and the most energetically favorable variant is the one when only -SO 3 − residues and their close neighbors are hydrated.(Figure 11b) should appear, which is not additionally stabilized by the interactions with the structure it covers except for the coordination to an -SO3 − residue.Therefore, the energy difference between the hydrated Nafion fragment (Figure 10c) and a combination of the individual Nafion fragment and the relaxed water cluster (Figure 11a) equals 48 kcal/mol.Formally, this difference can be explained by the smaller number of hydrogen bonds the 2D bubble is stabilized with, namely 85 vs. 94 in the 3D ensemble, which accounts for about 5.3 kcal/mol per one additional bond.This is not an individual bond energy because of the collective effects typical of hydrogen-bond networks.Nevertheless, this is a reasonable estimate of the energy that should be supplied to the water system in one way or another to provide the necessary reorganization of hydrogen bonds.Note that it is nearly equal to the energy liberated upon the hydration of the -SO3 − residue, which means that the extension of hydration shells over the fluorocarbon chains is thermodynamically possible only at the cost of the hydration of hydrophilic segments, and the most energetically favorable variant is the one when only -SO3 − residues and their close neighbors are hydrated.An additional argument in favor of such a conclusion is that even at the smallest considered number of water molecules, the SO3H group is dissociated (see above), and an H3O + fragment separated from the -SO3 − residue with one water molecule appears.Insofar as all water molecules in the above systems are spent on the formation of a hydrating monolayer, there is no driving force for the migration of a detached proton to a larger distance from the -SO3 − residue.Only when the number of water molecules increases to 60, and some of the molecules act as nucleating sites of the second hydration shell, do there appear migration paths for the proton, and its departure from the -SO3 − residue becomes possible (Figure 10d); and in the optimum structure it resides close to the end of the fluorocarbon chain.
These results show that the hydration of side chains of a Nafion fiber is energetically favorable when water molecules do not penetrate deep and close to the perfluorinated Teflon backbone, and the energy necessary for the unfolding and extension of the fibers can easily be compensated by the hydration energy, especially taking into account that in An additional argument in favor of such a conclusion is that even at the smallest considered number of water molecules, the SO 3 H group is dissociated (see above), and an H 3 O + fragment separated from the -SO 3 − residue with one water molecule appears.Insofar as all water molecules in the above systems are spent on the formation of a hydrating monolayer, there is no driving force for the migration of a detached proton to a larger distance from the -SO 3 − residue.Only when the number of water molecules increases to 60, and some of the molecules act as nucleating sites of the second hydration shell, do there appear migration paths for the proton, and its departure from the -SO 3 − residue becomes possible (Figure 10d); and in the optimum structure it resides close to the end of the fluorocarbon chain.
These results show that the hydration of side chains of a Nafion fiber is energetically favorable when water molecules do not penetrate deep and close to the perfluorinated Teflon backbone, and the energy necessary for the unfolding and extension of the fibers can easily be compensated by the hydration energy, especially taking into account that in this situation water molecules can be arranged at a larger distance from the perfluorinated carbon skeleton, which is also energetically favorable.
Thus, based on these model simulations, we can state that hydration of a Nafion plate should result in the unfolding of the side chains involving terminal SO 3 H groups, and the orientation of the chains should minimize the electrostatic repulsion of the residues and balance all the ionic interactions, which is best at their normal orientation to the Teflon backbone.Based on this conclusion, we can analyze the possible variants of the deposit crystal growth on such a surface.
In the case of potassium chloride, the regular cubic lattice implies the basic Cl. ..Cl and K. ..K distances equal to 3.63 Å at K. ..Cl contacts of 3.15 Å (Figure 12a).This means that the lattice segment that can be formed between two Nafion fibers should be characterized by a boundary distance proportional to 3.63 Å.It is worth noting that there are -SO 3 − residues, which play the role of coordinating sites at the ends of side Nafion chains, where potassium cations can be bound.Thus, formally -SO 3 − residues should substitute a chloride ion at some boundary lattice points.Potassium sulfate is known to crystallize in an orthorhombic lattice with the following cell parameters: a = 7.46 Å, b = 10.08 Å, c = 5.78 Å, α = 90 • , β = 90 • , γ = 90 • [23].The orientation of the crystal axes is the same as in a cubic lattice of KCl; but the internuclear distances differ, falling in a range of 3.80 to 4.06 Å in the case of K. ..K and of 5.00 to 10.08 Å in the case of S. ..S, while the K. ..O contacts are all 2.9 to 3.1 Å (Figure 12b).Thus, the K. ..Cl and K. ..O distances in the potassium chloride and sulfate crystals are sufficiently close, which formally means that the replacement of one K. ..Cl contact at the fiber boundary with a K. ..O-S(O 2 ) should cause a minor perturbation rather than a substantial distortion of the crystal lattice.Furthermore, Cl. ..Cl distances in a potassium chloride lattice are proportional to 3.63 Å, while the distance between the oxygen atoms of the terminal -SO 3 − residues of the two neighboring perfluorinated ether chains of Nafion (when the SO 3 H groups have symmetrically equivalent orientation) is about 14.6 Å (Figure 12c), which is almost exactly a four-fold Cl. ..Cl distance.This means that a fragment of KCl lattice can almost exactly fit in between the two neighboring fibers with minor perturbation compared to the individual lattice of the substance.Then, the growth of a cubic KCl crystal on a Nafion plate should be straightforward, and the side Nafion chains act just as coordinating and slightly armoring peripheral inclusions.In the case of potassium chloride, the regular cubic lattice implies the basic Cl…Cl and K…K distances equal to 3.63 Å at K…Cl contacts of 3.15 Å (Figure 12a).This means that the lattice segment that can be formed between two Nafion fibers should be characterized by a boundary distance proportional to 3.63 Å.It is worth noting that there are -SO3 − residues, which play the role of coordinating sites at the ends of side Nafion chains, where potassium cations can be bound.Thus, formally -SO3 − residues should substitute a chloride ion at some boundary lattice points.Potassium sulfate is known to crystallize in an orthorhombic lattice with the following cell parameters: a = 7.46 Å, b = 10.08 Å, c = 5.78 Å, α = 90°, β = 90°, γ = 90° [23].The orientation of the crystal axes is the same as in a cubic lattice of KCl; but the internuclear distances differ, falling in a range of 3.80 to 4.06 Å in the case of K…K and of 5.00 to 10.08 Å in the case of S…S, while the K…O contacts are all 2.9 to 3.1 Å (Figure 12b).Thus, the K…Cl and K…O distances in the potassium chloride and sulfate crystals are sufficiently close, which formally means that the replacement of one K…Cl contact at the fiber boundary with a K…O-S(O2) should cause a minor perturbation rather than a substantial distortion of the crystal lattice.Furthermore, Cl…Cl distances in a potassium chloride lattice are proportional to 3.63 Å, while the distance between the oxygen atoms of the terminal -SO3 − residues of the two neighboring perfluorinated ether chains of Nafion (when the SO3H groups have symmetrically equivalent orientation) is about 14.6 Å (Figure 12c), which is almost exactly a four-fold Cl…Cl distance.This means that a fragment of KCl lattice can almost exactly fit in between the two neighboring fibers with minor perturbation compared to the individual lattice of the substance.Then, the growth of a cubic KCl crystal on a Nafion plate should be straightforward, and the side Nafion chains act just as coordinating and slightly armoring peripheral inclusions.In the case of sodium acetate, the situation is more complex.Here, the counterion of the salt involves a CH3 group in addition to the carboxyl fragment.Those CH3 groups face each other in the crystal lattice being separated by a C…C distance of 3.56 Å (Figure 13a); and this is a strong limitation imposed on the flexibility of the particle arrangement.As a In the case of sodium acetate, the situation is more complex.Here, the counterion of the salt involves a CH 3 group in addition to the carboxyl fragment.Those CH 3 groups face each other in the crystal lattice being separated by a C. ..C distance of 3.56 Å (Figure 13a); and this is a strong limitation imposed on the flexibility of the particle arrangement.As a result, the orientations of the acetic groups are alternating in two orthogonal directions in the lattice.Taking into account that methyl groups cannot be located close to the Nafion fibers, the distances between the carbon atoms of those acetic groups, which have opposite orientations, are 6.23, 6.48, and 9.16 Å.No direction coincides with any lattice vector (Figure 13b), which means that the direction of crystal growth can by no means be driven by nearly parallel Nafion chains if they extend to the maximum possible degree and form a symmetric repeated brush on a Nafion surface.However, for the two acetic groups of the same orientation, the distance between the carbon atoms of their carboxyl groups is 5.34 Å, while the distances between the oxygen atoms of the groups are 5.24 and 5.93 Å.The latter values if increased threefold provide inter-nuclear distances of 15.72 and 17.79 Å as boundaries.In Nafion model conformations where both SO 3 H groups have the same spatial orientation, the O. ..O distances fall in a range of 15 to 16 Å (Figure 13c).Taking into account that the Nafion fibers are sufficiently flexible due to the possibility of numerous internal rotations within any perfluorinated chain, and their terminal groups can also change their positions especially under the effect of solvating water molecules, the distances between the terminal -SO 3 − residues and the growing crystal can, so to speak, adapt to each other.However, if the orientations of the -SO 3 − residues of the neighboring chains do not comply with the above requirement, the reorganization may require additional time and energy.As a result, the growing bottom part of the crystal (at least two cells in thickness) may be defective.It is this defectiveness that probably predetermines the aforementioned broadening and small shifts of the signals in the XRD patterns of the sodium acetate deposit grown on a Nafion substrate compared to that formed on a glass surface.Thus, in this situation, the fibers can by no means act as armoring elements but rather as superficial inclusions, which do not produce a noticeable distortion.
Polymers 2024, 16, x FOR PEER REVIEW 18 of 23 the lattice.Taking into account that methyl groups cannot be located close to the Nafion fibers, the distances between the carbon atoms of those acetic groups, which have opposite orientations, are 6.23, 6.48, and 9.16 Å.No direction coincides with any lattice vector (Figure 13b), which means that the direction of crystal growth can by no means be driven by nearly parallel Nafion chains if they extend to the maximum possible degree and form a symmetric repeated brush on a Nafion surface.However, for the two acetic groups of the same orientation, the distance between the carbon atoms of their carboxyl groups is 5.34 Å, while the distances between the oxygen atoms of the groups are 5.24 and 5.93 Å.The latter values if increased threefold provide inter-nuclear distances of 15.72 and 17.79 Å as boundaries.In Nafion model conformations where both SO3H groups have the same spatial orientation, the O…O distances fall in a range of 15 to 16 Å (Figure 13c).Taking into account that the Nafion fibers are sufficiently flexible due to the possibility of numerous internal rotations within any perfluorinated chain, and their terminal groups can also change their positions especially under the effect of solvating water molecules, the distances between the terminal -SO3 − residues and the growing crystal can, so to speak, adapt to each other.However, if the orientations of the -SO3 − residues of the neighboring chains do not comply with the above requirement, the reorganization may require additional time and energy.As a result, the growing bottom part of the crystal (at least two cells in thickness) may be defective.It is this defectiveness that probably predetermines the aforementioned broadening and small shifts of the signals in the XRD patterns of the sodium acetate deposit grown on a Nafion substrate compared to that formed on a glass surface.Thus, in this situation, the fibers can by no means act as armoring elements but rather as superficial inclusions, which do not produce a noticeable distortion.Now, let us turn to the most interesting example of copper sulfate salt.As was mentioned above, it can crystallize to form two different hydrates, namely trihydrate with a monoclinic lattice and pentahydrate with a triclinic lattice.Fragments of both crystal structures are shown in Figure 14a,b respectively.In trihydrate, the first shell of any copper atom involves three water molecules and three sulfate groups, so that the Cu. ..O distances are not equal (falling in ranges of 1.96 to 1.98 Å in the case of water oxygens and 1.96 to 2.45 Å in the case of sulfate oxygens), but the directions of bonds are quite regular in view of the replicated cells.In pentahydrate, the situation is quite similar, namely, the first shell of copper involves four water molecules and two sulfate groups, and the sulfates are arranged along a nearly C 4 local symmetry axis (if only oxygens rather than whole water molecules are taken into account).As a result, here the local O-symmetry of a copper site corresponds to an elongated octahedron: two Cu. ..O distances to sulfates are equal to 2.41 Å, and Cu. ..O distances to water oxygens are equal in opposing pairs with a slight difference between the pairs (1.97 and 1.94 Å).The higher local symmetry of pentahydrate imposes stronger restrictions on the possible boundaries that may be imposed by Nafion chains, whereas the generally less symmetric trihydrate structure with the variations in the sulfate group orientations makes the problem of fitting of a crystal structure fragment to the space between Nafion chains more easily solvable.
Polymers 2024, 16, x FOR PEER REVIEW 19 of monoclinic lattice and pentahydrate with a triclinic lattice.Fragments of both crystal stru tures are shown in Figure 14a,b respectively.In trihydrate, the first shell of any copp atom involves three water molecules and three sulfate groups, so that the Cu…O distan are not equal (falling in ranges of 1.96 to 1.98 Å in the case of water oxygens and 1.96 2.45 Å in the case of sulfate oxygens), but the directions of bonds are quite regular in vi of the replicated cells.In pentahydrate, the situation is quite similar, namely, the first sh of copper involves four water molecules and two sulfate groups, and the sulfates are ranged along a nearly C4 local symmetry axis (if only oxygens rather than whole wa molecules are taken into account).As a result, here the local O-symmetry of a copper s corresponds to an elongated octahedron: two Cu…O distances to sulfates are equal to 2 Å, and Cu…O distances to water oxygens are equal in opposing pairs with a slight diff ence between the pairs (1.97 and 1.94 Å).The higher local symmetry of pentahydrate i poses stronger restrictions on the possible boundaries that may be imposed by Nafi chains, whereas the generally less symmetric trihydrate structure with the variations the sulfate group orientations makes the problem of fitting of a crystal structure fragme to the space between Nafion chains more easily solvable.Figure 14c,d illustrates the almost exact inclusion of such fragments.Two varia were considered, namely a (Cu)3(SO4)3(H2O)9 cluster (Figure 14c) and a (Cu)4(SO4)3(H2O cluster (Figure 14d).The initial mutual arrangement of copper ions, sulfate groups, a water molecules in both combined model systems corresponded to the copper sulfate pe tahydrate lattice.Then, the structures were optimized at the imposed restrictions on t mutual positions of copper and sulfur atoms of the sulfate groups, as well as the distan between copper atoms and water oxygens in order to prevent the agglomeration of wa 14c) and a (Cu) 4 (SO 4 ) 3 (H 2 O) 16 cluster (Figure 14d).The initial mutual arrangement of copper ions, sulfate groups, and water molecules in both combined model systems corresponded to the copper sulfate pentahydrate lattice.Then, the structures were optimized at the imposed restrictions on the mutual positions of copper and sulfur atoms of the sulfate groups, as well as the distances between copper atoms and water oxygens in order to prevent the agglomeration of water molecules and the approach of the distant oppositely charged ions, both of which are pronounced at such small relative numbers of water molecules in small ensembles of particles.The two systems of different size were considered in order to illustrate that rotations of the copper sulfate hydrate ensemble with respect to the Nafion chains can provide a compensation for the changes in the distances between the Nafion -SO 3 − residues.The corresponding S. ..S distances in the combined model systems were 12.7 and 11.5 Å, respectively.This means that here Nafion chains can probably be built in the crystal deposit in the most efficient way.
Finally, if we turn to sucrose, it can be noticed that its crystals involve staircase brushes of alternating methylene groups and hydroxyls (Figure 15) where no distinct regions of predominantly hydrophilic or hydrophobic nature can be distinguished.Additionally, because of the large size of the molecules, which are nevertheless smaller than the above distances between the unfolded SO 3 -terminated chains of Nafion, there is no possibility for a regular fitting of sucrose molecule between the fibers.The sole variant is the wavy changes when some two neighboring chains are at a larger distance (about 16 Å between their terminal groups) and, thus, can accommodate two sucrose molecules in between, whereas the distances to their next left-and right-hand-side neighbors are much smaller, so that only one molecule can be located there.This variant is possible only when the orientation of Nafion fibers is not normal to the mean polymer surface, and these are inclined at varying angles.However, such dangling fibers are no longer a regular brush, which could stabilize some cubic, orthorhombic, or monoclinic lattice, but rather an undressed hair that could only distort any kind of molecular organization.Thus, the discrepancy between both individual sucrose molecules and their crystal packing, on one hand, and the possible regular arrangements of Nafion chains, on the other, should make the nucleation of the sucrose deposit strongly unordered and disturbed, which favors the appearance of an amorphous material instead of a crystal.
Polymers 2024, 16, x FOR PEER REVIEW molecules and the approach of the distant oppositely charged ions, both of which a nounced at such small relative numbers of water molecules in small ensembles o cles.The two systems of different size were considered in order to illustrate that ro of the copper sulfate hydrate ensemble with respect to the Nafion chains can pro compensation for the changes in the distances between the Nafion -SO3 − residu corresponding S…S distances in the combined model systems were 12.7 and 11.5 spectively.This means that here Nafion chains can probably be built in the crystal d in the most efficient way.
Finally, if we turn to sucrose, it can be noticed that its crystals involve st brushes of alternating methylene groups and hydroxyls (Figure 15) where no dist gions of predominantly hydrophilic or hydrophobic nature can be distinguished tionally, because of the large size of the molecules, which are nevertheless small the above distances between the unfolded SO3-terminated chains of Nafion, ther possibility for a regular fitting of sucrose molecule between the fibers.The sole va the wavy changes when some two neighboring chains are at a larger distance (abo between their terminal groups) and, thus, can accommodate two sucrose molecule tween, whereas the distances to their next left-and right-hand-side neighbors are smaller, so that only one molecule can be located there.This variant is possible only the orientation of Nafion fibers is not normal to the mean polymer surface, and th inclined at varying angles.However, such dangling fibers are no longer a regular which could stabilize some cubic, orthorhombic, or monoclinic lattice, but rather dressed hair that could only distort any kind of molecular organization.Thus, the d ancy between both individual sucrose molecules and their crystal packing, on one and the possible regular arrangements of Nafion chains, on the other, should ma nucleation of the sucrose deposit strongly unordered and disturbed, which favors pearance of an amorphous material instead of a crystal.Summing up, Nafion differs from most coiled or folded polymers susceptible to swelling.Typically, an increase in the volume of the polymer matrix as a result of swelling causes certain mechanochemical or physical changes determined by the acquired inner strains.Usually, such behavior is demonstrated by cross-linked polymers, whose surfaces are not substantially affected by soaking.Nafion is not a cross-linked polymer, but rather a strongly branched one with a drastic difference with respect to hydration between the backbone and side chains.As shown at a nanometer level, these chains should be unfolded upon immersing the polymer in an aqueous solution.Their resulting extension is not as large, about 9 Å, of which only 6 Å segments are reasonably hydrophilic, and the behavior of the segments in water apparently plays a substantial role in the character of the surface processes.
Terminal SO 3 H groups are shown to be dissociated even in the presence of a small number of water molecules, which means that in actual solutions, the dissociation should always take place.The separation between the side chains, which is about 12 to 14 Å depending on the conformation of the backbone, makes the dissociation of nearly all the groups which are brought in contact with water highly probable.The dissociation in turn promotes further unfolding of other chains and their as regular as possible arrangement over the surface due to the governing balance of electrostatic forces.As a result, -SO 3 − residues bonded to but separated from the hydrophobic backbone can act as a regular coordinating brush or grid.Their negative charges make them efficient coordinating sites for cations, and when the compound dissolved can crystallize in a lattice, an integer number of whose unit cells can fit in between the extended Nafion chains, crystallization is promoted, and Nafion acts as a true template.
By contrast to typical nearly planar templates, the Nafion template is characterized by a certain variability in the armoring parameters due to the flexibility of the chains.As a result, compounds with different unit cell parameters can be deposited on the Nafion surface.The sole, seemingly principal, restriction is that the syngony should be no lower than monoclinic because of the optimum normal orientation of the chains with respect to the backbone.This is illustrated by the examples of the formation of cubic potassium chloride and monoclinic sodium acetate trihydrate crystal deposits.In the case when the compound can be crystallized in triclinic and monoclinic syngonies, like copper sulfate hydrates, the above feature of Nafion makes the crystallization of monoclinic deposits preferable and selective.At the same time, if the unit cell parameters prevent a stable building of an integer number of the cells in between the Nafion chains and the chemical composition of the molecules is by no means complementary to the hydrophilic segments of the polymer fibers, the deposit becomes amorphous.
Discussion of the Results of the Refractometry Experiment
Insofar as the refractive index correlates with the salt concentration, its changes indirectly characterize the growth rate of the deposit.The dependences were found to be approximated with the following exponential function (see the inset in Figure 8): Here we would like to refer to work [24], where an extensive study of the concentration dependence of the refractive index of aqueous CuSO 4 solutions was carried out.Following the study [24], theoretical and experimental estimates of the refractive index of CuSO 4 pentahydrate give n = 1.514, and the maximum value of the refractive index in supersaturated aqueous solutions of CuSO 4 in [24] was n = 1.35 (see Table 1 in [24], where the results of measurements and theoretical calculations are presented), which is very close to the asymptotic value of our approximating curve: n(t) = 1.4 at t → ∞ (Equation ( 4)).
As shown in Figure 8, in the absence of a Nafion plate in a Petri dish, the characteristic time was estimated as τ = 40 h, while in the presence of a Nafion plate it increased to τ = 109 h, i.e., the refractive index on the surface of the solution more quickly reaches its equilibrium value n = 1.4 in the absence of the Nafion plate.Assuming that the evaporation of water from the surface of the solution occurs at the same rate under identical physical conditions, we can claim that the time τ indicates how rapidly the concentration of copper sulfate increases within the probe superficial region for compensating the depletion caused by the deposition.Since such a compensation is much faster in the absence of a Nafion plate in the Petri dish, the ratio of characteristic times in the presence to those in the absence of a Nafion plate can be taken as a rough estimate of the acceleration of the deposition process on the Nafion substrate, which is explained within the framework of the theoretical model, see Section 4.2.Besides that, in accordance with our observations, all studied crystals grew approximately twice as faster on the surface of Nafion compared to the case of absence of a Nafion plate in the Petri dish.
Conclusions
In X-ray diffraction experiments, the crystalline patterns grown on a Nafion substrate from supersaturated aqueous solutions based on natural water and DDW were studied.
In accordance with the model we are developing here, these crystals were grown under the conditions of unwinding the polymer fibers into the bulk liquid, or in the absence of this effect.It turned out that the flexible brush of the fibers, formed the Nafion interface when soaking in natural water, presets the geometry of the crystal unit cell, which was especially pronounced in the case of CuSO 4 .This salt has two crystal modifications, namely, pentahydrate and trihydrate, the pentahydrate being energetically more favorable, see Section 4.2.However, the unit cell of the pentahydrate has the shape of a skewed parallelepiped, while the unit cell of the trihydrate is a rectangular parallelepiped.Since the unwound fibers are oriented normally to the mean polymer surface, this predetermines the direction of one of the edges of the unit cell.As a result, predominantly CuSO 4 trihydrate crystals grow on such a surface.Another drastic change produced by the Nafion substrate is the possible amorphization of the deposit, as was shown in the experiments with sucrose.
Thus, Nafion can act as an efficient template, which can either promote the formation of regular deposits, or provide conditions for the selective crystallization of a particular crystal form, or even act as an amorphizing agent.
Figure 1 .
Figure 1.Nafion plate in an aqueous solution of CuSO 4 .
Figure 2 .
Figure 2. The differential absorbance of a Nafion membrane soaked in (a) natural deionized water and (b) deuterium-depleted water for t hours.
Figure 2 .
Figure 2. The differential absorbance of a Nafion membrane soaked in (a) natural deionized water and (b) deuterium-depleted water for t hours.
of deposits formed in DDW on the smooth and polymer surfaces.Therefore, the results obtained in DDW-based solutions are not shown.Polymers 2024, 16, x FOR PEER REVIEW 7 of 2
Figure 3 .
Figure 3.The integral absorbance of a Nafion membrane in a range of 200 to 600 nm upon its swel ing in water depending on the duration of soaking.
Figure 3 .
Figure 3.The integral absorbance of a Nafion membrane in a range of 200 to 600 nm upon its swelling in water depending on the duration of soaking.
Figure 4 .
Figure 4. XRD patterns of KCl deposits formed in the presence and in the absence of Nafion plat
Figure 4 .Figure 5 .
Figure 4. XRD patterns of KCl deposits formed in the presence and in the absence of Nafion plate.
Figure 6 .
Figure 6.XRD patterns of CuSO4 deposits formed in the presence and in the absence of Nafion p
Figure 5 .Figure 5 .
Figure 5. XRD patterns of NaCH 3 COO deposits formed in the presence and in the absence of Nafion plate.
Figure 6 .
Figure 6.XRD patterns of CuSO4 deposits formed in the presence and in the absence of Na
Figure 6 . 23 Figure 7 .
Figure 6.XRD patterns of CuSO 4 deposits formed in the presence and in the absence of Nafion plate.
Figure 7 .
Figure 7. XRD patterns of sucrose C 12 H 22 O 11 deposits formed in the presence and in the absence of Nafion plate.
Polymers 2024 , 23 Figure 8 . 1 .
Figure 8. Refractive index of the CuSO4 specimen depending on the duration of deposit formation.4. Discussion 4.1.Discussion of Absorption Spectra of Nafion upon Swelling in Natural Water and DDW
Figure 8 . 4 . 1 .
Figure 8. Refractive index of the CuSO 4 specimen depending on the duration of deposit formation.
Figure 9 .
Figure 9. Three principally different conformations of the two-tail segment of the actual Nafion fiber.See text for details.Panel (a): the highest-energy structure with the most extended −SO3H-terminated side chains; Panel (b): lower-energy structure with the partly folded side chains; Panel (c): the lowest-energy structure with the side chains H-bonded via terminal −SO3H groups.
Figure 9 .
Figure 9. Three principally different conformations of the two-tail segment of the actual Nafion fiber.See text for details.Panel (a): the highest-energy structure with the most extended −SO 3 Hterminated side chains; Panel (b): lower-energy structure with the partly folded side chains; Panel (c): the lowest-energy structure with the side chains H-bonded via terminal −SO 3 H groups.
Figure 11 .
Figure 11.Illustration of the ultimate effect produced by the inclusion of a model Nafion fragment in an ensemble composed of 50 water molecules.Panel (a): a relaxed water cluster; Panel (b): a water shell of the model Nafion fragment.
Figure 11 .
Figure 11.Illustration of the ultimate effect produced by the inclusion of a model Nafion fragment in an ensemble composed of 50 water molecules.Panel (a): a relaxed water cluster; Panel (b): a water shell of the model Nafion fragment.
Polymers 2024 ,
16, x FOR PEER REVIEW 17 of 23 backbone.Based on this conclusion, we can analyze the possible variants of the deposit crystal growth on such a surface.
Figure 12 .
Figure 12.Schematic illustration of the partial complementarity of (a) KCl and (b) K2SO4 crystal lattices to the chain separation in the Nafion structure (c).Panel (a) KCl lattice fragment; Panel (b): K2SO4 lattice fragment; and Panel (c): a proper configuration of the model double-chain Nafion fragment.
Figure 12 .
Figure 12.Schematic illustration of the partial complementarity of (a) KCl and (b) K 2 SO 4 crystal lattices to the chain separation in the Nafion structure (c).Panel (a) KCl lattice fragment; Panel (b): K 2 SO 4 lattice fragment; and Panel (c): a proper configuration of the model double-chain Nafion fragment.
Figure 13 .
Figure 13.Schematic illustration of the minor correlation between the sodium acetate trihydrate crystal structure and the possible chain separation in the Nafion structure.Panel (a) CH3COONa × 3H2O lattice fragment; Panel (b): its basic cell with the b and c lattice vectors shown; and Panel (c): a proper configuration of the model double-chain Nafion fragment.Now, let us turn to the most interesting example of copper sulfate salt.As was mentioned above, it can crystallize to form two different hydrates, namely trihydrate with a
Figure 13 .
Figure 13.Schematic illustration of the minor correlation between the sodium acetate trihydrate crystal structure and the possible chain separation in the Nafion structure.Panel (a) CH 3 COONa × 3H 2 O lattice fragment; Panel (b): its basic cell with the b and c lattice vectors shown; and Panel (c): a proper configuration of the model double-chain Nafion fragment.
Figure
Figure14c,d illustrates the almost exact inclusion of such fragments.Two variants were considered, namely a (Cu) 3 (SO 4 ) 3 (H 2 O) 9 cluster (Figure14c) and a (Cu) 4 (SO 4 ) 3 (H 2 O) 16 cluster (Figure14d).The initial mutual arrangement of copper ions, sulfate groups, and water molecules in both combined model systems corresponded to the copper sulfate pentahydrate lattice.Then, the structures were optimized at the imposed restrictions on the mutual positions of copper and sulfur atoms of the sulfate groups, as well as the distances between copper atoms and water oxygens in order to prevent the agglomeration
Figure 15 .
Figure 15.A fragment of the crystal lattice of sucrose C12H22O11 and the structure of its ind molecule.
Figure 15 .
Figure 15.A fragment of the crystal lattice of sucrose C 12 H 22 O 11 and the structure of its individual molecule. | 17,350.2 | 2024-03-01T00:00:00.000 | [
"Materials Science",
"Chemistry"
] |
233U mass yield measurements around and within the symmetry region with the ILL Lohengrin spectrometer
The study of fission yields has a major impact on the characterization and understanding of the fission process and is mandatory for reactor applications. The LPSC in collaboration with ILL and CEA has developed a measurement program on fission fragment distributions at the Lohengrin spectrometer of the ILL, with a special focus on the masses constituting the heavy peak. We will present in this paper our measurement of the very low fission yields in the symmetry mass region and the heavy mass wing of the distribution for 233U thermal neutron induced fission. The difficulty due to the strong contamination by other masses with much higher yields will be addressed in the form of a new analysis method featuring the required contaminant correction. The apparition of structures in the kinetic energy distributions and possible interpretations will be discussed, such as a possible evidence of fission modes.
Introduction
The accurate knowledge of the fission data in the actinide region is important for studies of innovative nuclear reactor concepts.Today, in the framework of nuclear data evaluation, fission models are necessary to inscrease the consistency and the precision of the libraries.For instance, post neutron fission yields are actually needed in the current and innovative fuel cycles for the calculation of the inventory and the radiotoxicity of the spent fuel, and the estimation of the residual power after shutdown.
Moreover, fission yield measurements supply experimental data to put constraints on fission models and improve their predictive power.In this context, since 2007, various experiments from our collaboration have been performed to investigate fission yields at the Lohengrin spectrometer of the ILL with a special focus on the heavy mass and symmetry region, where can be found an inconsistency between models or evaluations and the scarce experimental data.
Recent measurements from [1,2] of 233 U fission yields in the heavy region with the Lohengrin spectrometer showed an important discrepancy near the symmetry region when compared to evaluations.This is shown in Fig. 1 where these measurements are displayed with the JEFF-3.1.1 evaluations [3].Firstly, our work presented here constitutes a deep investigation around and within the symmetry region to explain this disagreement and get accurate fission yields, which is a key element a e-mail<EMAIL_ADDRESS>ensure the self-normalisation of our data [1,2].A second important goal of our work here is to take this opportunity to study the structures in the fission fragment kinetic energy distributions and use symmetric fission as a laboratory to test fission models.
Experimental setup: the Lohengrin spectrometer
The Lohengrin mass spectrometer [4] is a nuclear physics instrument from the ILL research reactor facility which allows to study fragment distributions from thermal neutron induced fission with a very good resolution.A fissile actinide target is placed close to the reactor core, in a thermal neutron flux reaching 5 × 10 14 neutron.cm−2 .s−1 .Fission fragments emerge from the target with an ionic charge distributed around an average ionic charge state of about 21.Those fragments that are emitted along the beam tube axis undergo a horizontal deflection in a magnetic field, directly followed by a vertical deflection in an electric field.These combined fields separate ions according to their A/q and E/q ratios, with A, q and E the mass, ionic charge state and kinetic energy of the ions respectively.
At the spectrometer exit, different detection systems can be installed, such as an ionisation chamber for mass yield measurements, or Ge clovers that are used with an additional magnet whose aim is to focus the ion beam.A schematic view of the spectrometer is shown in Fig 1.
Yield measurements around and within the symmetry region 3.1 Absolute mass yields in the heavy region
High yields in the heavy region are obtained after an integration over the kinetic energy and the ionic charge distributions of the count rates measured with the ionisation chamber.A new measurement method and consequent analysis path have been developed and are detailed in Ref [1,2].Among the special features of this method are the self-normalisation of our data and the calculation of the experimental covariance matrices.Provided that all the heavy mass rates are measured, it is possible to self-normalise the data by defining to 100 % the sum of the whole heavy peak yields.As a consequence, these new measurements are independent from another experiment or assessment and may be compared directly with the existing data and evaluations.The results for 233 U(n th , f ) are shown in EPJ Web of Conferences Fig. 1 and present a clear difference of the yields on the way down to the symmetry region.The goal of this work is the complete analysis of the ionic charge and the kinetic energy distributions which present structures that had not been seen in past works [5].
Contamination highlighting
In the descent to the symmetry region (A < 130), a second component appears at low kinetic energy.We previously observed this component to be sensitive to the pressure of the spectrometer.The evolution of this effect with pressure is shown in Fig. 2.This phenomenon is explained as a consequence of ionic charge exchange between the fission fragment and the residual gas of the Lohengrin vacuum.If we consider that the main magnet selected the fragments with mass A 0 and A 1 with the condition q 1 followed by a charge change for the second fragment q 1 → q 1 , then the condenser will select the kinetic energies following the equation : Since this contaminant will appear at the same energy in the ionisation chamber, we get the following condition on the charge : E 1 = E 0 ⇔ q 1 = q 0 .Consequently the contaminant mass follows the A 0 .If we compare to the initial condition from the magnet we get the final relation for the contaminant : This hypothesis was investigated with the building of an indicator based on equation 2. Fig. 2 shows the indicator results for mass 124.The masses with discrepant count rates out of the distribution shaped by the other ones are typically the masses with the highest contamination indicator values.This indicator was validated by gamma spectrometry, where the presence of the expected contaminant was proven.
Analysis method for the contamination correction
The contaminant subtraction was achieved on the kinetic energy distributions P(E k ) following equation 3.
where P qq represents the measured charge changing probabilities using γ spectroscopy.These probabilities have the same order of magnitude for several nuclei.In the following, it will be supposed identical for all isotopes : P q→q+1 ∼ 0.02, P q→q+2 ∼ 0.0013 and P q→q−1 ∼ 0.02.P ind is linked to the presence of the contaminant in the ionisation chamber and is based on the indicator derived from equation 2. The correction factor is weighted by the yield ratio of the contaminant mass (with a higher yield) over the measured symmetry mass, multiplied by the kinetic energy distribution of the contaminant mass P(E k ) cont .
Figure 3 shows the results of such a correction for the kinetic energy distributions of masses 116 and 118.We can notice that the presence of two components in the distribution remains after the correction, and this is the case for all masses around and within the symmetry region.Moreover, even if we consider P qq as free parameters, it is not possible to fully suppress the low component.
Preliminary results and discussion
Figure 4 shows the preliminary results for our symmetry yield measurements.The yields are shown in the case of no correction (black dots), corrected (red dots), but also in the hypothesis where only one of the two components in the kinetic energy distribution is considered to estimate the yield (blue and green dots).The disagreement of our corrected values with the JEFF-3.1.1 evaluation is still present.The mean kinetic energy for the two components and for the whole kinetic energy distribution is also shown on the right figure, after correction from the energy loss in the target using a set of reference data (orange dots) with a very thin target of 30 μg.cm −2 without any cover.
An important result here is the apparition of two modes after effective decontamination.We remind that the modality of the fission process has already been observed for the Fermium region [6] and in the case of fast neutron induced fission [7].In the ponctual approximation of the Coulomb EPJ Web of Conferences 08002-p.4repulsion, the total kinetic energy of fragments is related to the scission configuration by the following equation : with m i and Z i the mass and nuclear charge of each fragment and d the distance between the two centers of charge of the fragments at the scission point.The results of this calculation, considering additional hypothesis such as the Unchanged Charge Density (UCD), are shown in Tab. 1 and compared to estimations from the Brosa approach [8] and from microscopic model calculations [9].Our values for the low and high kinetic energy components are rather consistent with the theoretical predictions from the two models considered here.These results tend to prove the modality of the 233 U(n th , f ) fission process.
Conclusion and perspectives
This work presents an investigation of the 233 U(n th , f ) yields in the symmetry region, with a focus on the measurement of the fragment kinetic energy distributions.After a proven contamination presence and its necessary correction, two components in the distributions are observed for all of the masses around and within the symmetry but not int the high yield region.A further study relative to possible biases in the correction process and an estimation of the uncertainties are still needed to conclude on the direct evidence of the modality of thermal neutron induced fission in the region of Uraniums.The development of a Time Of Flight setup is under development to get a triple coincidence between a double Frisch grid ionisation chamber and the velocity of the incoming fragments.
Figure 2 .
Figure 2. Influence of the condenser pressure on the kinetic energy distribution (left).Contamination indicator for the mass 124 as a function of ionic charge (right).
Figure 4 .
Figure 4. Mass yields per component in kinetic energy distributions compared to the Jeff-3.1.1 library (left).Mean kinetic energy of the extracted modes after energy loss corrections in the target (right).
Table 1 .
Comparison of scission configurations d, between our work and different fission models.Estimation of neutron emission ν and comparison with dedicated measurement(Nishio [10]). | 2,426.2 | 2016-03-01T00:00:00.000 | [
"Physics"
] |
Do Robots Need to Be Stereotyped? Technical Characteristics as a Moderator of Gender Stereotyping
As suggested by previous results, whether, when designing robots, we should make use of social stereotypes and thus perpetuate them is question of present concern. The aim of this study was the identification of the specific conditions under which people’s judgments of robots were no longer guided by stereotypes. The study participants were 121 individuals between 18 and 69 years of age. We used an experimental design and manipulated the gender and strength of robots, and we measured the perception of how a robot could be used in automotive mechanics for light and heavy tasks. Results show that the technical characteristics of robots helped to anchor people’s judgments on robots’ intrinsic characteristics rather than on stereotypical indicators. Thus, stereotype perpetuation does not seem to be the sole option when designing robots.
Introduction
About fifty years ago, Isaac Asimov [1] imagined what a World's Fair in 2014 would look like.Among other things, Asimov described a "robot housemaid [...] slow-moving but capable of general picking-up, arranging, cleaning, etc."The attribution of a human gender stereotype to a (gender-free) machine seems odd, so why could one imagine this "female" robot with great ease?Media equation theory [2] helps in answering this question.This theory assumes that processes guiding social judgments and interactions can be generalized to encompass human-media (including human-computer and human-robot) judgments and interactions.For instance, using the computers are social actors experimental paradigm, [3] showed how common social responses to computers are.Concerning robots specifically, several empirical studies have shown that the perception of robots can be driven by social categorization processes and stereotypes.For instance, a "male" robot has been perceived as more agentic [4], more competent to achieve male stereotyped tasks [4], or more suitable for use as a security guard [5] than a "female" robot.To say the least, it seems that judgments of robots are not immune to stereotyping, even if the picture is not that simple, given the results of a recent study [6] showing that gender stereotyping of robots has little, if any, effect on such judgments.Despite the need to have a better understanding of this asymmetry of results concerning gender stereotypes, it seems that, at least under some circumstances, human-robot interactions or people's judgments of robots involves social categorization processes and stereotyping.Hence, it could be tempting to build social robots [7] embedding social characteristics stereotypically congruent with their intended use (e.g., a "male" security guard) to enhance their acceptance and economic value; however, this would come at the price of stereotype perpetuation.In this study, we argue that socio-economic and moral principles do not necessarily need to be in conflict.
In previous studies, the role of technical characteristics associated with stereotypical characteristics was not studied in depth.We assume that the technical characteristics of a robot congruent with success in performing a task should moderate the effect of gender stereotypes.In other words, even if gender roles are so embedded in the human psyche (e.g., [8]) that they will guide the evaluation of robots in accordance with stereotypes, people should also rely on the objective skills or attributes of robots, when available, to form more accurate judgments [9].More specifically, in the context of the reflection-construction model of Lee Jussim [10], when no objective characteristics (congruent with a success on a task) of robots are known, social beliefs about gender will have a strong influence on the perceiver's judgments; however, when target's attributes are known (and congruent with a success on a task), social beliefs will lose their explanatory strength and people will use the known objective attribute when judging robots.
Consequently, we formulated one general interaction hypothesis and two specific hypotheses bringing our predictions in line with the reflection-construction model.
General hypothesis: The effect of gender stereotype on judgments of a robot is moderated by the robot's characteristics.
Hypothesis 1.When the technical characteristic of a robot is unknown or insufficient to achieve a task, gender stereotypes guide judgments (see Table 1).Hypothesis 2. When the technical characteristic of a robot is known and is sufficient to achieve a task, the observer relies on technical characteristics rather than on a gender indicator (see Table 1).
Current Study
To test our hypotheses, we chose to study an unmitigated male stereotyped activity in the cultural context of this study, i.e., mechanics.We used the ability to lift weights (maximum lift of 15 kg vs. 150 kg) as the known technical characteristic of the robot so that we could have conditions where people could explicitly consider objective skills when judging robots.Moreover, we used two different types of mechanical tasks-those requiring great strength (heavy tasks, for instance, changing a motor) and those requiring little strength (light tasks, for instance, changing spark plugs)-to create conditions where objective characteristics have different implications on success or failure on a task.In doing so, we were able to evaluate the specific conditions under which objective skills are truly taken into account-in other words, where people do or do not judge stereotypically.More specifically, the predictions listed in Table 1 are drawn from the above hypotheses.
Participants
Participants were 121 adult visitors (42 women, 78 men, 1 unknown, mean age = 31.43,SD = 13.48,ranging from 18 to 69) of the Open Doors event at HEIG-VD who volunteered to participate and who answered correctly to (memory) check questions (see below).None of the participants declared they were aware of the variables we were manipulating.
Procedure
The Open Doors event is a single afternoon event taking place once a year at HEIG-VD.Experimenters held a stand and asked visitors if they wanted to fill in a computerized questionnaire (questionnaires were divided into two unrelated parts, with the first part dedicated to ubiquitous computing in the workplace and the second part dedicated to this study).We used Limesurvey, an online survey open source software, to implement the study.Participants who volunteered received candies to thank them for their participation.
Before filling in the questionnaire, participants completed a consent form.They were informed that their responses were anonymous and that they could withdraw at any time, and they were also asked if they agreed or not to the researchers using their data in scientific publications or in teaching.
On the first webpage of the questionnaire, we manipulated our independent variables and measured their impact on the dependent variable.The second webpage contained manipulation (memory) check questions.We used a 2 (robot's gender: male vs. female, between-subject) ˆ3 (maximum lift: 15 kg vs. 150 kg vs. no information, between-subject) ˆ2 (type of tasks: heavy vs. light, within subject) mixed design.A php script randomly directed participants to experimental conditions, a technique which was used so that none of the participants or experimenters were aware of the experimental conditions (double blind process).
To manipulate the robots' gender, we used either a "male version" of FLobi (see [11], the author of which owns the copyright for the pictures) named "Christian", or a "female version" named "Christine".Below the picture located on the top of the first webpage, participants read the text containing our manipulation of the robot's strength (i.e., the robot's maximum lift, in italics below): "[Christine/Christian] is currently under development and will incorporate numerous features.For instance: basic computations and automated task execution, vocal recognition [lifts weights up to 15 kg vs. lifts weights up to 150 kg vs. no information]".Then, participants had to indicate to which extent the robot could be used in automotive mechanics for (a) "heavy tasks (for example: motor, wheel changing)" and (b) "light tasks (for example: spark plugs, mirrors changing)" (within-subjects independent variable) on a scale ranging from "1 = strongly disagree" to "7 = totally agree".Finally, on the second webpage, participants were asked to answer some manipulation (memory) check questions (asking for the gender and the maximum lift for the robot they saw).Check questions were used to ensure that participants carried out the experiment using the instructions they received during the experiment (to eliminate some noise from the data, that is, for instance, participants who thought the robots they had seen was a male when it was actually Christine).
Analysis
We used a series of multiple linear regressions to analyze our 2 (robot's gender: male vs. female, between-subject) ˆ3 (maximum lift: 15 kg vs. 150 kg vs. no information, between subject) ˆ2 (type of tasks: heavy vs. light, within subject) design.More specifically, we directly used contrast and dummy coding to analyze our data because (a) our hypotheses were specifics (e.g., simple interaction effects) and (b) main and global interactions effects were of less interest.
More specifically, we tested for seven effects to address our hypotheses.
-Effect 1: effect of gender where there is no information on strength (we expect this effect to be significant to replicate stereotype effect for both heavy and light tasks).-Effect 2: interaction effect between gender and the type of tasks where there is no information on strength (we expect to find no significant interaction effect revealing that Effect 1 is identical for both types of tasks).-Effect 7: effect of gender where strength was explicitly insufficient to achieve the tasks, i.e., a robot with the ability to lift a maximum of 15 kg had to lift a weight far an excess of this (we expected this effect to be significant).
-Effect 3: effect of gender where strength was explicitly sufficient to achieve both tasks (we expected this effect to be non-significant).-Effect 4: interaction effect between gender and the type of tasks where strength was explicitly sufficient, i.e., a robot with the ability to lift 150 kg for heavy and light tasks (we expected to find no significant interaction revealing that Effect 3 is identical for both types of tasks).-Effect 5: effect of gender in the only other condition where strength was explicitly sufficient to achieve the tasks, i.e., a robot with the ability to lift a maximum of 15 kg had to lift a light weight (we expected this effect to be non-significant).-Effect 6: interaction effect between gender and the type of tasks in the condition of the maximum lift of 15 kg, because strength, in one case, is explicitly sufficient to achieve the tasks (see Effect 5) and, in the other case, explicitly insufficient to achieve the tasks (see Effect 7) (we expected this effect to be significant as an indication of the moderation of the stereotype effect by the technical characteristics when known and sufficient to achieve the tasks).
To test these effects, we followed the recommendations of Judd, McClelland, and Ryan [12] for analyzing mixed designs.We first computed standardized differences variables ("W ki " see Equation (1) and Table 2 below) to handle non-independence of observations given our within-subjects independent variable, and we then used specific codes to test for each effect of interest (see Equation (2) and Table 2).To eliminate this non-independence problem, given the linear regression framework of our analysis, we computed, for instance, a single composite score for each participant consisting, broadly speaking, of a difference score between the evaluation of the robot for the heavy tasks and the evaluation of the robot for the light tasks (see W 2i below).In other words, the dependence of observations was no longer a problem since the composite variable represents the difference within each participant.For instance, if using contrast codes of +1 and ´1 respectively for heavy and light tasks in Equation ( 1), a resulting score of 0 indicates no difference; a positive score indicates a higher score for heavy tasks than light tasks, and a negative score indicates a lower score for heavy tasks than light tasks.Testing the main effect of this variable can now be easily achieved using a regression model (for instance, in an intercept-only model with this composite variable as the response variable and no explanatory variable, the result and p-value are strictly identical to: (a) a one sample t-test against a test value of 0; (b) a dependent t-test comparing evaluations between heavy and light tasks; or (c) a repeated measure ANOVA with the type of tasks as within-subject independent variable).It is important to note that we used a somewhat more complicated formula instead, for instance, of a difference score to "standardize" scores in order to keep them in the same metric as Y scores rather than W scores.The numerator part is used to calculate the differences between variables and the denominator to standardize scores.We used the following equation to calculate standardized differences variables: We applied Equation ( 1) and calculated four dependent variables as follows to test for specific effects (see hypothesis testing subsection below).Because we are using a model comparison approach and a different coding scheme to test for our specific effects, we have to compute four different dependent variables to be used in regression models: (a) W 1i to test Effects 1 and 3 with δ 1"heavy " 1 and δ 2"light " 1 with this coding scheme; because δ 1 and δ 2 take the same value, the effect of the type of tasks is no longer of interest (no variation due to the levels of the types of tasks); in other words, the effects of other variables in the model (gender and maximum lift) are tested for both tasks.(b) W 2i to test Effects 2, 4, and 6 with δ 1"heavy " 1 and δ 2"light " ´1 with this coding scheme; because δ 1 and δ 2 are of opposite sign, the scores of W2 are difference scores, so we can take into account the effect of the type of tasks.Using this variable in the model allows one to estimate the effect of the type of tasks and its interactions with other variables.(c) W 3i to test Effect 5 with δ 1"heavy " 1 and δ 2"light " 0 with this coding scheme; because δ 1 takes value of 0, the effect of other variables will only be tested on the light condition, which is similar to analyzing the light condition alone without taking into account the heavy condition (i.e., simple effect).(d) W 4i to test Effect 7 with δ 1"heavy " 0 and δ 2"light " 1 with this coding scheme; because δ 2 takes value of 0, the effect of the other variables will only be tested on the heavy condition; which is similar to analyzing the heavy condition alone without taking into account the light condition (i.e., simple effect).
Linear regression model: To predict our dependent variables, we used a contrast code for the gender of the robot-an independent variable (male = 1 vs. female = ´1)-and codes listed in Table 2 for the two other explanatory variables in Equation (2):
To test Hypothesis 2, we first tested for the gender effect in conditions where strength was explicitly sufficient to achieve both tasks (i.e., 150 kg of strength).We found no effect of gender, b = In this section, we report regression coefficients in the metric of the dependent variable in dividing coefficients by the denominator of W k .We analyzed these data with a robust linear model and found very similar results (the authors will be pleased to provide results of those analyses to the interested reader).Given this similarity and because sphericity was assumed, we were reasonably confident that the results of multiple linear regressions we reported in this results section were not affected by assumption violations and potential outliers.
To summarize, when strength was sufficient to achieve the tasks, the effect of stereotypes did not appear.Moreover, the interaction between gender and the type of tasks in the 15-kg condition was tendentiously significant with a small to medium size, b = 0.29, t(115) = 1.74, p = 0.08, d = 0.35 (Effect 6), but clearly in the expected direction.Indeed, contrary to the condition of 15 kg for light tasks, in the case where strength was explicitly insufficient to achieve the tasks (i.e., 15 kg for heavy tasks), a significant effect of gender, in the expected direction, was found (M female = 3.12, SD female = 1.66, 95% CI = [2.41,3.82] vs. M male = 4.50, SD male = 1.89, 95% CI = [3.65, 5.35]), b = 0.69, t(115) = 2.49, p = 0.01, d = 0.41 (Effect 7).This latter result was consistent with Hypothesis 1.
A visual inspection of means and their confidence intervals revealed an unintended result.In the male robot/heavy tasks condition (see Figure 2), we did not find a significant difference between the 15-kg condition, where strength was insufficient to achieve the tasks (M = 4.50, SD = 1.89), and the 150-kg condition, where strength was sufficient to achieve the tasks (M = 4.63, SD = 1.92).
Discussion
In recent studies, robots have been found to be, like their creators, subject to social stereotypes (e.g., [4]).Does the acceptance and hence economic value of robots need to be grounded in stereotypes?We have given a negative answer to this question in showing that, when the technical characteristics of robots are specified and congruent with success of a task, stereotypical judgment effects are diminished.That is, when technical characteristics are unknown or insufficient to achieve a task, people seem to rely on stereotypical information (i.e., we replicate previous results with quite similar effect sizes); however, when the technical characteristics are sufficient, people rely more on the technical characteristics than on the "gender" of the robot.As suggested by Fiske [9], people also rely on the objective skills of robots.Besides having replicated results of previous studies that have shown that gender stereotypes affects people's judgments of robots, we have also shown that other indicators consistently affect these judgments and are able to extinguish stereotypical ones.These results are in accordance with predictions of the reflection-construction model [10].Nevertheless, the equation is not as simple as it seems, as we also found that stereotypes continue to guide people's judgments of male robots, even if the technical characteristics are known and are insufficient to achieve the tasks.It is well-known that judgments do not obey pure rationality but are often biased and based on heuristics (e.g., [13]); thus, the latter result could be explained by representativeness heuristics [14].Because automotive mechanics is typically a male activity, the male robot could have been judged as representative of the typical car mechanic, so the strong association between the two may have led participants to ignore the actual strength of the robot when making judgments.Further studies are needed to gain a deeper understanding of the asymmetry between judgments based on stereotypes and those based on rationality and, more generally, to test the generalizability of our results.Our study used a stereotyped male task.It might be interesting to design a study using a stereotyped female task (e.g., caring).If one obtains the same pattern of results, with (a) judgments of a robot based on stereotype when objective indicator is absent (for instance, no information on the ability of a robot to react in an empathetic manner) or insufficient to achieve the tasks (for instance, poor ability to react empathetically) and (b) judgments of a robot based on objective indicator when known and sufficient to achieve the tasks (for instance, excellent ability to react empathetically), it should reinforce the generalizability of our conclusions.
Another limitation and possible extension to our work could be to replicate the study using a real robot.Given that the results indicate that interactions with social robots depend on the nature of interaction with the robot (for instance, real robot vs. simulated robot presented on a screen, see [15]), it would be of interest to use physical robots that interact with participants.In [15], it was found that individuals feel more empathy towards a real robot with which they interact in comparison to an interaction with a simulated robot.Consequently, one could hypothesize that stereotype effects can be boosted when there are interactions with a real social robot because its social presence may favor social-based reactions, perhaps including stereotyping.However, given the study of [6], showing that, in a real interaction paradigm, the stereotype effect seems to be slight if existent, one could expect the same result in a replication of our study using a real robot.Only further theoretical and empirical investigations could help to arbitrate between these two predictions (or even other explanations).
Conclusions
In this study we replicated and extended the results found in [4], showing that the effect of human stereotypes on the judgments of robot is not inevitable.We indeed found that participants also rely on technical characteristics when evaluating robots.The effect of gender stereotypes on a robot's ability to succeed in a stereotyped male task was moderated by the strength of the robot.In particular, when available, technical characteristics were used by participants to judge robots with greater accuracy, causing the effect of gender stereotypes to vanish.
Despite the need for future research, we hope that our study will contribute to giving designers of robots the choice between building stereotyped robots and building robots that avoid the perpetuation of human stereotypes, without impacting their potential economic value.
Figure 1 .
Figure 1.Mean of agreement with statements concerning the use of robots in automotive mechanics for light tasks as a function of the robot's gender, and the robot's strength (maximum lift).Error bars represent 95% confidence intervals.
Figure 2 .
Figure 2. Mean of agreement with statements concerning the use of robots in automotive mechanics for heavy tasks as a function of the robot's gender, and the robot's strength (maximum lift).Error bars represent 95% confidence intervals.
Figure 1 .
Figure 1.Mean of agreement with statements concerning the use of robots in automotive mechanics for light tasks as a function of the robot's gender, and the robot's strength (maximum lift).Error bars represent 95% confidence intervals.
Figure 1 .
Figure 1.Mean of agreement with statements concerning the use of robots in automotive mechanics for light tasks as a function of the robot's gender, and the robot's strength (maximum lift).Error bars represent 95% confidence intervals.
Figure 2 .
Figure 2. Mean of agreement with statements concerning the use of robots in automotive mechanics for heavy tasks as a function of the robot's gender, and the robot's strength (maximum lift).Error bars represent 95% confidence intervals.
Figure 2 .
Figure 2. Mean of agreement with statements concerning the use of robots in automotive mechanics for heavy tasks as a function of the robot's gender, and the robot's strength (maximum lift).Error bars represent 95% confidence intervals.
Table 1 .
Predictions drawn from the hypotheses.
Table 2 .
Codes used for Code z explanatory variables in Equation (2) given levels of gender of the robot and the lift independent variables. | 5,213.6 | 2016-06-24T00:00:00.000 | [
"Computer Science",
"Engineering",
"Psychology"
] |
Physiological responses and transcriptome analysis of Hemerocallis citrina Baroni exposed to Thrips palmi feeding stress
Thrips are serious pests of Hemerocallis citrina Baroni (daylily), affecting crop yield and quality. To defend against pests, daylily has evolved a set of sophisticated defense mechanisms. In the present study, induction of systemic resistance in Hemerocallis citrina ‘Datong Huanghua’ by Thrips palmi feeding was investigated at both biochemical and molecular levels. The soluble sugar content of daylily leaves was significantly lower than that in control check (CK) at all time points of feeding by T. palmi, whereas the amino acid and free fatty acid contents started to be significantly lower than those in CK after 7 days. Secondary metabolites such as tannins, flavonoids, and total phenols, which are harmful to the growth and reproduction of T. palmi, were increased significantly. The activities of defense enzymes such as peroxidase (POD), phenylalanine ammonia lyase (PAL), and polyphenol oxidase (PPO) were significantly increased, and the degree of damage to plants was reduced. The significant increase in protease inhibitor (PI) activity may lead to disrupted digestion and slower growth in T. palmi. Using RNA sequencing, 1,894 differentially expressed genes (DEGs) were identified between control and treatment groups at five timepoints. DEGs were mainly enriched in secondary metabolite synthesis, jasmonic acid (JA), salicylic acid (SA), and other defense hormone signal transduction pathways, defense enzyme synthesis, MAPK signaling, cell wall thickening, carbohydrate metabolism, photosynthesis, and other insect resistance pathways. Subsequently, 698 DEGs were predicted to be transcription factors, including bHLH and WRKY members related to biotic stress. WGCNA identified 18 hub genes in four key modules (Purple, Midnight blue, Blue, and Red) including MYB-like DNA-binding domain (TRINITY_DN2391_c0_g1, TRINITY_DN3285_c0_g1), zinc-finger of the FCS-type, C2-C2 (TRINITY_DN21050_c0_g2), and NPR1 (TRINITY_DN13045_c0_g1, TRINITY_DN855_c0_g2). The results indicate that biosynthesis of secondary metabolites, phenylalanine metabolism, PIs, and defense hormones pathways are involved in the induced resistance to T. palmi in daylily.
Introduction
Hemerocallis citrina Baroni (daylily) is a perennial herbaceous plant belonging to the Liliaceae family with edible flowers, medicinal properties, and ornamental functions.Daylilies are naturally distributed in East Asia, with the paramount diversity of species originating in Korea, Japan, and China, and have been cultivated for thousands of years (Matand et al., 2020;Misiukevicǐus et al., 2023).Thrips species such as Frankliniella intonsa, Thrips palmi, and Frankliniella occidentalis are common pests of daylily, causing plant damage.The life cycle of thrips includes five stages: egg, nymph, prepupa, pupa, and adult.Adults lay eggs in young plant tissues; 1st and 2nd instar nymphs are agile, and young plant tissues are their favorite feeding site; 3rd instar nymphs (prepupae) are no longer fed and pupated underground in the uppermost 3−5 cm soil layer; 4th instar nymphs (pupae) do not eat and pass the pupal stage in the soil layer (Cannon et al., 2007).The generational overlap of thrips is extensive, and it takes 15−20 days to complete the first generation, of which the egg duration is 5−7 days, and the adult duration is 7−10 days.The turn of spring and summer is the first peak of thrips infecting daylily (Dhall et al., 2021).The filingsucking mouthparts of thrips damage the young leaves, tender stems, and flower buds of daylily.Thrips-infested plants exhibit slow growth, shortened internodes, and bent flower buds, which diminishes commercial value.When thrips were present in great numbers, the bud dropping rate of daylily was 31.65% higher than in controls, the actual bud dropping rate was as high as 99.62%, and it is the only insect pest that can lead to a completely failed harvest (Gao et al., 2021).In addition, owing to the small size of thrips, the high degree of concealment, the rapid reproduction, and the high incidence of drug resistance, it is difficult to achieve the desired control effect with a single insecticide (Steenbergen et al., 2018).Therefore, the safest and most effective strategy for thrips prevention and control is to utilize the insect resistance of the host plant.To this end, investigation of the physiological mechanisms of thrips resistance in daylily provides a basis for breeding insect-resistant plants.
Host plant damage by phytophagous insects alters plant nutrient content, production of toxic secondary metabolites, the activities of defense proteins and enzymes, and upregulates the expression of various defense-associated genes (Badenes-Peŕez, 2022;Beran and Petschenka, 2022).The redistribution of certain nutrients and rapid synthesis of secondary metabolites in plants after pest infestation affects the feeding, growth, and development of pests, which in turn stimulates insect resistance (Erb and Reymond, 2019; Barbero and Maffei, 2023).Levels of soluble sugars, free amino acids, and soluble proteins in bean leaves decreased with increasing population density and feeding time of F. occidentalis, and were lower than those in control levels (Qian et al., 2018).Pest damage induces the accumulation of flavonoids in plants (Ramaroson et al., 2022); Spodoptera litura feeding stress has been shown to induce Glycine max to synthesize flavonoids (Du et al., 2019).Examples of herbivore-induced defense mechanisms are the accumulation of toxic chemicals such as benzoxazinoids (BXDs; chemical defense), glucosinolates, and alkaloids, which are classes of specialized metabolites that function as deterrents (Batyrshina et al., 2020).Chemical defense by BXDs in wheat showed a complex response at the leaf and phloem level that altered aphid feeding preference, and BXDs act as antifeedants to aphids (Singh et al., 2021a).In response to pest stress, defenserelated enzyme systems in plants are activated.The main defense enzymes include peroxidase (POD), polyphenol oxidase (PPO), and phenylalanine ammonia lyase (PAL).Changes in the activities of these enzymes reflects the insect resistance of host plants to a certain extent (War et al., 2018).PAL is the rate-limiting enzyme in the phenylpropanoid metabolic pathway.Pest damage in plants initiates or upregulates phenylpropanoid metabolism, which increases PAL activity in damaged parts, resulting in a substantial accumulation of lignin in the cell wall and cell wall thickening, which prevents the spread of pests.Simultaneously, the increase in PAL activity increases the content of phytoalexins, which are toxic to phytophagous insects, and thereby prevent and control pests (Pant and Huang, 2022).Thrips feeding causes a significant accumulation of reactive oxygen species (ROS) in plants, leading to cell damage; plant PPO and POD remove excessive H 2 O 2 and superoxide anions to maintain the dynamic balance of ROS, thus protecting plants against damage (Mouden and Leiss, 2021).Protease inhibitors (PIs) competitively and reversibly bind to intestinal proteases of herbivorous insects and allosterically bind to inhibitor-insensitive proteases to reduce protease hydrolysis activity, ultimately leading to slow growth and dysplasia in insects (Divekar et al., 2023).When herbivorous insects feed, plants are exposed to mechanical challenge in the form of tissue injury and chemical challenge caused by insect salivary secretions entering plant tissues.Subsequently, PI genes are induced at the wound site through transmission of signal molecules and amplification of the signal via a cascade, resulting in PI genes being expressed locally at the wound site and throughout the plant (Ferreira et al., 2023).
Transcriptome sequencing technology (RNA-Seq) has been frequently applied to study the interaction mechanisms between pests and hosts, and has become the main approach to explore gene expression.The transcriptome is a fundamental link between genomic and proteomic information associated with biological functions.Regulation of transcription level is the most studied and most important regulation strategy in organisms (Lowe et al., 2017;Paul et al., 2022).In plants exposed to insect feeding stress, defense signaling pathways are initiated, a series of physiological and biochemical reactions are induced, and expression of defense genes is activated (Whiteman and Tarnopol, 2021).The physiological and biochemical metabolism of plants is altered through signal transduction, transcriptional regulation, and gene expression, which improves the resistance of plants to pest stress (Du et al., 2020;Wani et al., 2022).Sitobion avenae feeding induces PAL gene expression in wheat (Van Eck et al., 2010).In cotton, Helicoverpa armigera feeding induces changes in the gene expression of lysyl oxidase (LOX), propylene oxide cyclase, and chalcone synthase, and activates plant defenses against pests at the molecular level (Chen et al., 2020).In response to insect feeding, plants initiate multiple hormone signaling pathways such as jasmonic acid (JA), salicylic acid (SA), and ethylene (ET), causing the accumulation of plant defense compounds, stimulating the expression of defense genes, and triggering the release of volatile substances, which further enhances the resistance of plants to herbivorous insects (Kersch-Becker and Thaler, 2019;Zhao et al., 2021).In maize, Spodoptera litura feeding significantly upregulateds defense-related genes, oxidative stress-related genes, transcriptional regulatory genes, protein synthesis genes, plant hormone-related genes, and genes related to primary and secondary metabolism (Singh et al., 2021b).In tobacco exposed to Bemisia tabaci stress, defense pathways such as ROS, PI synthesis, hormone metabolism, and WRKY were significantly upregulated, and plant resistance was enhanced (Wang et al., 2023b).Transcription factors play a key regulatory role in the battle between plants and herbivorous insects by regulating cellular activities via gene expression.Members of the WRKY, APETALA2/ethylene response factor (AP2/ERF), basic helixloop-helix (bHLH), basic leucine zipper (bZIP), myeloblastosis-related (MYB), and NAC (no apical meristem/Arabidopsis transcription activation factor/cup-shaped cotyledon) families are involved in the regulation of plant disease and insect resistance networks (Tsuda and Somssich, 2015).
Herbivorous insect feeding initiates the inducible defense mechanism of plants, triggering a series of signal transduction and gene expression events, and the generation of defense substances.Inducible defense plays an more important role in the self-protection of plants (Maleck and Dietrich, 1999).At present, there are few reports on the physiological responses and omics differences of daylily in response to thrips feeding.In the present study, H. citrina 'Datong Huanghua' inoculated with T. palmi was used to determine the content of nutrients and secondary metabolites and defense enzyme activities in leaves to elucidate the physiological changes that induce pest defenses.Transcriptome analysis of thrips-infested leaves was performed with healthy leaves as controls.Differentially expressed genes (DEGs) were identified, and the main transcription factors and their expression patterns were analyzed.Key insect resistance genes were identified to elucidate the induced defense mechanism of daylily in response to T. palmi.
Materials
Adults T. palmi individuals naturally occurring in daylily fields at the Horticultural Station of Shanxi Agricultural University were used as the source of test insects.The daylily variety used in the study was Datong Huanghua, which was planted at the Horticultural Station of Shanxi Agricultural University.
Seedling growth
The study was performed from March to June 2023 at the Horticultural Station of Shanxi Agricultural University.To prevent T. palmi and other pests, a 60-mesh insect-proof net was used to set up a net room, similar to a vegetable greenhouse, from west to east in the field to establish the experimental plot.After 45 days of seedling growth, the experiment began.
To establish the treatment group with induction of T. palmi (T.palmi-fed, abbreviated as TF), 1 day before the experiment, sufficient T. palmi were collected in the field and brought to the laboratory in a cage (118.7 × 100 × 100 cm) made of 60-mesh insect-proof net.T. palmi was starved for 12 h prior to the test, to ensure adequate feeding induction on plants.On the day of the experiment, T. palmi were transported to the net room in 50-mL centrifuge tubes, and T. palmi from one tube were released onto 3−4 plants such that there were ~90 individuals per plant; at least 15 plants were treated overall to ensure that three biological replicates could be sampled at each point.T. palmi concentrated on the upper-middle position of young leaves, and each plant had 6−7 such leaves.Datong Huanghua plants in this treatment group (TF) were individually covered with a 60-mesh insect-proof net to prevent T. palmi from escaping.In the control group (control check, abbreviated as CK), no insects were introduced, daylily plants were allowed to grow normally without any treatment in the net room, and each plant was individually covered with insectproof net.Each treatment group included three biological replicates.
Plant leaves were collected at 1, 3, 5, 7, and 9 days after the introduction of T. palmi (named TF1−TF5), and leaves of CK group plants collected at the same time served as controls (named CK1 −CK5).Three replicates were included at each stage, yielding five extractions with 30 samples in total, which were frozen in liquid nitrogen and stored at -80°C until future use.
Determination of plant nutrient content
The content of amino acids, free fatty acids, and soluble sugars in TF1−TF5 and CK1−CK5 was determined.Amino acids content was determined using an amino acids content determination kit (ninhydrin colorimetric method; 50T/48S) and a standard curve obtained using cysteine (Li et al., 2023a).Free fatty acids content was determined using a free fatty acids content determination kit (copper soap colorimetry; 50T/48S) and a standard curve obtained using palmitic acid (Wu and Shen, 2021).Soluble sugars content was determined using a plant soluble sugars content determination kit (anthrone colorimetry; 50T/48S) and a standard curve obtained using anhydrous glucose (Kwon et al., 2015).All kits were purchased from Beijing Solarbio Science & Technology Co., Ltd (Beijing, China).Data were summarized and processed using Microsoft Excel 2010 and statistically analyzed with SPSS software v20.0.The significance of the difference in nutrients between healthy daylily leaves and leaves fed on by T. palmi was tested by Tukey's test (p < 0.05) and graph plotting using SigmaPlot 14.0 software.Data processing for secondary matter content and defense enzyme activities of daylily leaves before and after feeding by T. palmi was done in the same way as data processing for nutrient content determination.
Determination of plant secondary metabolites content
The content of tannins, flavonoids, and total phenols in TF1−TF5 and CK1−CK5 was determined.Tannins content was determined using a Tannins content determination kit (Folin-Ciocalteu colorimetric method; 50T/48S) and a standard curve obtained using tannic acid (Sharma et al., 2021).Flavonoids content was determined using a flavonoids content determination kit (AlCl3 colorimetric method; 50T/48S) and a standard curve obtained using rutin (Liu et al., 2022).Total phenols content was determined using a total phenols content determination kit (Folin-Ciocalteu colorimetric method; 50T/48S) and a standard curve obtained using catechol (Palacios et al., 2021).All kits were purchased from Beijing Solarbio Science & Technology Co., Ltd (Beijing, China).
Transcriptome sequencing and analysis
RNA was extracted from TF1−TF5 and CK1−CK5 samples using the TRIzol method (Wang et al., 2022).RNA integrity was assessed using 1% agarose gel electrophoresis, and the RIN value was determined using an Agilent 2100 bioanalyzer (Agilent Technologies Inc., Santa Clara, CA, USA).After RNA quality determination, the cDNA library was constructed and high-throughput sequencing was performed on an Illumina Hiseq platform (Shanghai Majorbio Bio-Pharm Technology Co., Ltd, Shanghai, China) with three biological replicates.Raw data obtained by sequencing were filtered to remove adapters and low-quality reads, and high-quality clean data was obtained.The base quality score (Q30) of clean data was determined.Trinity software was used to assemble the clean data obtained by sequencing to construct the UniGene library.
Identification and annotation of DEGs
The relative expression levels of each gene were determined using the Transcripts Per Million (TPM) standardization algorithm in FeatureCounts software (Vera Alvarez et al., 2019) and combined with gene transfer format (GTF) files describing genomic features.DESeq 2 (Qi et al., 2023) was used to compare the number of read counts between TF and CK groups, and differential expression analysis was performed on samples between the groups.Genes with p-adjust <0.05 and | log2FC | ≥1 after p-value correction were considered DEGs.DEGs were functionally annotated using the Gene Ontology (GO) database (http://www.geneontology.org/) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (https://www.genome.jp/kegg/).Finally, transcription factors of DEGs were predicted using the Plant Transcription Factor Database (PlantTFDB; http://planttfdb.gaolab.org/prediction.php/).
Identification and functional analysis of key modules for defense enzyme activities and secondary metabolites synthesis
We constructed a transcriptome expression matrix of leaves from the TF1-TF5 samples and screened for genes with TPM values <1.Furthermore, we used the WGCNA package (version 1.6.6) in R software (version 3.4.4) to construct a gene co-expression network.We selected b = 16 as the soft threshold for subsequent analysis and used the 'blockwiseModules' function to construct the gene network, with the following parameter settings: power = 6, TOMType = unsigned, maxBlockSize = 100 000, minModuleSize = 80, mergeCutHeight = 0.25, nThreads = 0; all other parameters were set to default values, and module feature genes for each module were calculated.We used the 'exportNetworkToCytoscape' function in the WGCNA package to export network relationships between genes in relevant modules, and Cytoscape software (version 3.7.1)was used to create graphs.
Quantitative real-time PCR analysis
Seven genes were selected randomly for qRT-PCR validation.RNA was reverse-transcribed using a PrimeScript RT Reagent Kit (Takara, Beijing, China).All procedures were conducted in accordance with the manufacturer's instructions.The resulting cDNAs were quantified by TB Green Premix Ex Taq II (Takara).Each qRT-PCR experiment (15 mL) consisted of 7.5 mL of 2× SG Fast qPCR Master Mix, 0.6 mL of each primer (10 mM), 40 ng of cDNA template, and ddH 2 O to 15 mL.Thermal cycling involved an initial denaturation at 95°C for 3 min, followed by 40 cycles of denaturation at 95°C for 30s, annealing at 56°C for 30s, and extension at 72°C for 40s.Relative expression levels of genes were calculated using the 2 -△△CT method with the actin gene as an internal control, and the experiment was repeated at least three times.Primer sequences are listed in Supplementary Table 1.
Effect of T. palmi feeding on plant nutrient content
Primary metabolites such as amino acids, soluble sugars and free fatty acids play an important role in plant-induced defenses (Prado and Tjallingii, 2007).The amino acid and free fatty acid contents were higher than those in CK at 1 and 3 days, but significantly lower than those in CK at 7 days; they reached the lowest level at 9 days, 0.26 and 0.73 times the content in CK, respectively (Figures 1A, B).The soluble sugars content was significantly lower in the TF group than in the CK group at each timepoint and reached the lowest level at 9 day, 0.69 times that in CK (Figure 1C).
Effect of T. palmi feeding on plant secondary metabolites content
Insect feeding induces the accumulation of various toxic secondary metabolites such as phenols, alkaloids, and terpenoids in plants, and reduces the digestive capacity of insects and the amount of food and eggs, thereby directly or indirectly enhancing insect resistance (Mipeshwaree et al., 2023).Tannins, flavonoids, and total phenols in plants were increased significantly at each timepoint after T. palmi feeding induction.Flavonoids content reached a peak at 3 days, 3.5 times that in CK (Figure 2B).Tannins and total phenols content reached a peak at 5 days, 2 and 1.7 times that in CK, respectively (Figures 2A, C).
3.3 Effect of T. palmi feeding on the activities of plant defense enzymes POD, PAL, PPO, and PI are defense enzymes of plants under biotic stress, and changes in these enzymes activities reflect the insect resistance of host plants to a certain extent (Uemura and Arimura, 2019).The activities of POD, PAL, PPO, and PI in leaves of Datong Huanghua induced by T. palmi feeding were significantly higher than in CK at each timepoint.POD, PAL, and PPO were all initially increased then decreased.POD and PAL activities reached a peak at 5 days, at 5 and 2 times those in CK, respectively (Figures 3A, B).PPO activity reached a peak at 3 days, 1.8 times that in CK (Figure 3C).PI activity reached a peak at 1 day, 1.6 times that in CK, and although it showed a downward trend, it was still 1.2-fold higher than in CK at 9 days (Figure 3D).
Quality assessment of transcriptome sequencing results
To study the changes in transcription levels of daylily under T. palmi stress, using Illumina 2× 150 bp paired-end sequencing, 141.58 Gb of clean data was obtained from 10 samples.Clean data from each sample reached >6.03 Gb, the percentage of Q30 bases was >94.18%.The percentage in brackets in the last column of Table 1 is the comparison rate for clean reads; clean reads comparison efficiency ranged between 78.76% and 82.94%.The results showed that the quality of the sequencing output data was good, and the data could be used for further analysis.
3.5 DEGs in plants in response to T. palmi feeding Transient expression of genes was investigated in the leaves of daylily in response to T. palmi feeding.The five timepoints after induction of T. palm feeding yielded 78,987 DEGs.The highest number of DEGs (20,390) was observed at the TF5 stage, including 13,701 upregulated and 6,689 downregulated genes.The TF1 stage had the lowest number of DEGs (8,010), including 5,956 upregulated and 2,054 downregulated genes.The number of upregulated genes was higher than that of downregulated genes at all stages (Figure 4A).Only 1,894 genes were differentially expressed at all stages (TF1−TF5; Figure 4B).
Pathway analysis of DEGs was performed using the KEGG database to explore the metabolic processes and cell signaling pathways involved in genes associated with resistance to T. palmi.According to the KEGG enrichment analysis results for DEGs at the five stages, amino acid metabolic pathways such as a-linolenic acid metabolism, tryptophan metabolism, and phenylalanine metabolism, and plant insect resistance pathways including glutathione metabolism, flavonoid biosynthesis, anthocyanin biosynthesis, cutin, cork, and wax biosynthesis, and Frontiers in Plant Science frontiersin.orgascorbic acid and aldehyde acid metabolism, were significantly enriched after T. palmi feeding (Figure 4D).
Analysis of gene expression patterns related to T. palmi resistance
Based on the findings from DEGs, and GO enrichment and KEGG pathway analyses, 787 potential candidate genes related to T. palmi resistance were subjected to differential expression analysis (Figure 5).These genes could be divided into two expression patterns, among which Cluster 1 contains 679 genes.Its functions include the synthesis of secondary substances such as flavonoids, alkaloids and diterpenes, the synthesis of defense enzymes such as POD, PAL, PPO, PI, and catalase, the signal transduction of defense hormones such as JA and SA, MAPK signaling pathway-plant, wax synthesis, cell wall thickening, and others, which are mainly upregulated after feeding by T. palmi, and are more significant in the TF2 period.Cluster 2 contains 108 genes whose functions include amino acid metabolism, starch and sucrose metabolism, nitrogen metabolism, photosynthesis, carbohydrate metabolism, and others, which are mainly downregulated after feeding by T. palmi.
Prediction of transcription factors and their expression patterns
Transcription factors play a key role in the transcriptional regulatory network related to plant induced defenses.In order to explore the transcription factors related to T. palmi resistance in daylily, 698 transcription factors were identified from 78,987 DEGs, which clustered into 31 transcription factors families (Figure 6A, Supplementary Table 2).Approximately half of these genes are closely related to biological and non-biotic stress responses, including MYB, bHLH, AP2/ERF, WRKY, bZIP, and NAC.On the basis of their expression patterns, these genes were divided into four clusters (Figures 6B, C).The transcription factors in Cluster 1, Cluster 2, and Cluster 3 were upregulated after feeding by T. palmi.
The transcription factors in Cluster 1 were mainly bHLH and WRKY, and were significantly upregulated at the TF4 stage.The transcription factors in Cluster 2, and Cluster 3 were mainly AP2/ ERF and MYB, Cluster 2 was significantly upregulated at the TF5 stage, and Cluster 3 was significantly upregulated at the TF2 stage.The transcription factors in Cluster 4 were mainly bZIP and NAC, which were downregulated compared with CK.Cluster 3 and Cluster 1 included the highest numbers, with 107 and 104 upregulated transcription factors, respectively, indicating that they play an important role in the resistance of daylily to T. palmi.
Co-expression network identification and key module analysis
WGCNA can be used to identify co-expressed gene modules, explore biological correlations between modules and target traits, and mine core genes in the module network.WGCNA was applied to the transcriptomic data to explore the relationships between genes related to the content of amino acids, free fatty acids, soluble sugars, tannins, flavonoids, and total phenols, and the activities of POD, PAL, PPO, and PI in daylily.The soft threshold b = 16 was determined by calculation (Figure 7A), and 24,665 genes were used to construct a co-expression network with 16 co-expression modules, among which the Turquoise module was the largest with 7,743 genes, whereas the Midnight blue module was the smallest with only 44 genes (Figures 7B, C).The Midnight blue module contained genes strongly linked to flavonoids content, PAL activity, tannins content, PI activity, and PPO activity, with correlation values of 0.572, 0.518, 0.515, 0.443, and 0.407, respectively.The Salmon module included genes strongly linked to soluble sugars content, with a correlation value of 0.647.The Black module contained genes strongly linked to total phenols content, with a correlation value of 0.502.The Blue module included genes strongly linked to POD, with a correlation value of 0.623.The Yellow module contained genes weakly linked to amino acids content and free fatty acids content, with correlation values of 0.221 and 0.205, respectively (Figure 7C).Four key modules (Purple, Midnight blue, Blue, and Red) highly correlated with the 10 phenotypes (amino acids, free fatty acids, soluble sugars, tannins, flavonoids, total phenols, POD, PAL, PPO, and PI) were selected, and key genes in the regulatory network were visualized using Cytoscape 2.0 with weights >0.4 (Figure 7D).A total of 18 network hub genes were identified as key genes and were annotated using Arabidopsis and Asparagus databases.Examples include natural resistance-associated macrophage protein, cytochrome P450, secondary metabolites biosynthesis, jasmonic/salicylic acid mediated signaling pathway, protein serine/threonine kinase activity, dienelactone biosynthetic, brassinosteroid biosynthetic, endonuclease/exonuclease/phosphatase family, glutamyl endopeptidase, haloacid dehalogenase-like hydrolase, and oxylipin biosynthetic process (Table 2).TRINITY_DN6738_c0_g2 plays a major regulatory role in the secondary material synthesis pathway, which influences pest feeding; TRINITY_DN21120_c0_g1 promotes the synthesis of PIs and hinders the digestive function of pests; TRINITY_DN167_c0_g1 regulates nutrient redistribution by plant amino acid metabolism to reduce the nutrients available to pests while ensuring normal plant growth; TRINITY_DN855_c0_g2 regulates defense hormone signaling, such as JA and SA, to activate plant systemic resistance.In addition, three transcription factors were
Verification using quantitative realtime PCR
To confirm the reliability of the transcriptome data, seven genes were selected for qRT-PCR verification.Comparison of transcriptome sequencing data and qRT-PCR data indicated very similar expression trends, with a Pearson correlation coefficient (R 2 ) of 0.838 (Figure 8; Supplementary Figure 1), demonstrating good reliability for the RNA-seq data.
Discussion
The defenses initiated by plants after being attacked by herbivorous insects are induced defenses.The process of inducing insect resistance includes the activation of pest stress signals, transmission of internal pest signals, expression of defense compound-associated genes, and synthesis of defense substances, culminating in insect resistance (Stout and Duffey, 1996).Nutrients, secondary metabolites, and defense enzymes play a vital role in the physiological responses of plants to pest stress (War et al., 2013;Li et al., 2022b).Studies have shown that low levels of soluble sugars, amino acids, and other nutrients reduce the desirability for pests, and plant resistance is stronger (Cao et al., 2018).Insect damage induces plants to produce a large number of terpenoids, phenols, nitrogen-containing compounds, and other secondary metabolites, affecting insect feeding, growth, development, and reproduction (Divekar et al., 2022).Changes in the activities of defense-related enzymes occur during the production of secondary metabolites and other anthelmintic-related substances.Plant defense enzymes are upregulated in response to insect stress; they promote the synthesis of quinones, lignin, phytoalexins, and other insect resistance compounds in plants; hinder insect feeding; and maintain plant metabolic balance (Li et al., 2022a).Plant tissues usually contain a small amount of PIs.However, after being damaged by herbivorous insects, the damage site induces a large number of PIs to be rapidly transported throughput the plant, which blocks the protease activity in the intestine of herbivorous insects, thereby inhibiting pest population expansion and protecting the plant (Zhu-Salzman and Zeng, 2015).In previous studies, transcriptome analysis of plants in response to herbivorous insect feeding shown that DEGs were significantly enriched in hormone synthesis pathways such as biosynthesis of secondary metabolites (e.g., quinones and flavonoids), phenylalanine metabolism, POD activity, a-linolenic acid metabolism, and JA synthesis (Li et al., 2020).Insects feeding on plants induce changes in primary and secondary metabolites, including sugars, amino acids, organic acids, flavonoids, phenols and tannins.In the present study, after feeding by T. palmi, the soluble sugars content in the leaves of daylily was significantly lower than that of the CK group at five timepoints.It may be that carbohydrates (mainly soluble sugars) synthesized in the aboveground part may not only meet the needs of plant growth, development, and defenses, but also be more distributed in the root system to ensure its growth activity, thereby improving the tolerance of the plant.Amino acid and fatty acid contents were higher than those in CK after 1 and 3 days of feeding by T. palmi and significantly lower than those in CK after 7 days.Amino acid and free fatty acid contents increased in the early stage of infestation, which may be a compensatory resilience of the plant to cope with pest infestation; however, when the infestation increased to a certain degree, the plant's own nutrient supply was insufficient, and then there was a successive decrease in the contents of nutrients, finally lower than those in CK.It suggests that plants can become less attractive to pests through changes in nutrient levels in the body, and that nutrients can also be involved in defense responses to increase plant resistance to pests.Reduction of foliage nutritive quality after herbivory could be an adaptation of plants to insect attack, slowing down larval development and affecting negatively impacting insect fitness (Cornelissen and Stiling, 2006).Analysis of five cotton cultivars revealed that aphid and jassid infestation decreased each cultivar's sugar and protein content (Amin et al., 2016).Notably, a previous study reported that the low sugar and protein content in tomato leaves is not conducive to the growth and development of Helicoverpa armigera (Bisht et al., 2022).Insect feeding induction is an important factor triggering the plant defense system.The elicitors in insect oral secretions enable plants to identify harmful signals, then initiate the defense system to induce resistance (Alves-Silva and Del-Claro, 2016).For example, through the catalysis of various defense enzymes such as POD, PAL, PPO, and PI, they induce the accumulation of various toxic secondary metabolites such as phenols, alkaloids, and terpenoids in plants, thereby directly or indirectly improving insect resistance (Appu et al., 2021).Nilaparvata lugens feeding increased the activities of POD, PAL, and PPO in rice plants, which not only reduced the damage induced by pest feeding, but also played an important role in the accumulation of toxic metabolites (Li et al., 2023b).Pieris rapae feeding causes damage to Phaseolus vulgaris L. leaves, which directly induces high expression of PI genes, and plants exhibit induced insect resistance (Xiang et al., 2018).In the present study, the activities of defense enzymes such as POD, PAL, PPO, and PI, and the contents of secondary metabolites such as tannins, flavonoids, and total phenols in leaves of daylily following feeding by T. palmi were significantly higher than those of CK.These findings are consistent with previous reports showing that thrips damage significantly increased the flavonoid, tannin, and lignin content in alfalfa leaves (Wu et al., 2021), and H. armigera feeding significantly increased the phenol content of pigeon pea (Kaur et al., 2014).In our previous study, PI activity was significantly increased in plants exposed to insect damage, resulting in the obstruction of insect digestion and slow growth, and the tannins, flavonoids, and total phenols content in daylily leaves were significantly higher in plants exposed to insect damage, which were not conducive to colonization by T. palmi (unpublished data).
After plants are stressed by insect feeding, defense signaling pathways are initiated, a series of physiological and biochemical reactions are induced, and the expression of defense genes is activated (Wu et al., 2010).In alfalfa damaged by thrips, pathways related to carbohydrate metabolism, lipid metabolism, MAPK signaling, hormone synthesis, and secondary metabolite synthesis are activated to initiate a defense response to thrips damage (Zhang et al., 2021).In the present study, the DEGs identified in daylily exposed to T. palmi infestation were mainly enriched in secondary metabolite synthesis, defense hormones signal transduction, defense enzymes synthesis, MAPK signaling pathway-plant, cell wall thickening, carbohydrate metabolism, photosynthesis, and other insect-resistant pathways.The In conclusion, the present findings elucidate the potential mechanism and hub genes of the resistance of daylily to T. palmi.The synergistic effects of nutrients, secondary metabolites, and defense enzymes increased the resistance of daylily to T. palmi.The mechanisms include reducing the nutrients available to T. palmi, catalyzing defense enzymes to produce secondary metabolites that are toxic to T. palmi, activating JA, SA, and other defense hormones signal transduction pathways, improving the resistance of daylily plants, and reducing the damage caused by T. palmi.The results of this study expand our the understanding of the mechanisms of insect resistance in daylily, and inform the development of effective strategies to control T. palmi by inducing exogenous factors to enhance insect resistance.
FIGURE 1 Determination of plant nutrients.(A) Amino acids content; (B) free fatty acids content; (C) soluble sugars content.Different letters indicate significant differences in nutrient composition between healthy leaves and leaves after feeding by T. palmi (p <0.05).
FIGURE 2 Determination of plant secondary metabolites.(A) Tannins content; (B) flavonoids content; (C) total phenols content.Different letters indicate significant differences in the content of secondary metabolites between healthy leaves and leaves after feeding by T. palmi (p < 0.05).
FIGURE 4 Differentially expressed genes (DEGs) in Datong Huanghua exposed to Thrips palmi feeding.(A) Number of DEGs at different stages; (B) venn diagram analysis of DEGs; (C) gene ontology (GO) enrichment analysis of DEGs; (D) Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis of DEGs.
FIGURE 5
FIGURE 5Analysis of gene expression patterns related to T. palmi resistance.
FIGURE 6 Expression of transcription factors.(A) Number of transcription factors; (B) transcription factors' expression patterns; (C) expression pattern clustering results.
FIGURE 7 Weighted gene co-expression network analysis (WGCNA) of plant defense-related genes.(A) Scale-free network model index under different soft thresholds; (B) gene clustering tree based on the topological dissimilarity matrix; (C) heatmap of correlations between modules and traits; (D) gene co-expression network in the plant defense-related gene module; hub genes are colored pink.
transcription factors identified on the basis of DEGs were clustered into the MYB, bHLH, AP2/ERF, WRKY, bZIP, and NAC families.Among them, MYB, WRKY, bHLH, and AP2/ERF transcription factors were significantly upregulated after feeding by T. palmi, indicating that these four families of transcription factors play an important role in induced resistance to T. palmi defense in daylily.The aphid resistance-related transcription factors in alfalfa were consistent with the thrips resistance-associated transcription factors in daylily, but the MYB, NAC, and AP2/ERF families were dominant in alfalfa responses to aphids(Jacques et al., 2020).Furthermore, WGCNA and DEGs analysis demonstrated that MYB-like DNA-binding domain (TRINITY _ DN2391 _ c0 _ g1, TRINITY _ DN3285 _ c0 _ g1), zinc-finger of the FCS-type C2-C2 (TRINITY _ DN21050 _ c0 _ g2), and regulatory protein NPR1 (TRINITY _ DN13045 _ c0 _ g1, TRINITY _ DN855 _ c0 _ g2) are closely related to the synthesis of anti-stress compounds such as antioxidant enzymes, JA, SA and secondary metabolites.These results suggest that these genes play an important role in the defense responses of daylily to T. palmi.
TABLE 1
Transcriptome sequencing data statistics.
TABLE 2
Hub genes and predicted functions. | 8,083.6 | 2024-05-14T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
Formation of marketing strategies of enterprises in the market of logistics services in the context of world trends
The article considers the problems of formation of marketing strategies of enterprises in the market of logistics services in the context of global trends. It was carried out the analysis of global trends and the market of transport and logistics services of Ukraine. It was determined the contribution of the transport and logistics services sector to the economy of Ukraine. The volumes of the market of logistics services in Ukraine are analyzed and their dependence on tendencies of change of business activity in the industry, a situation in the field of foreign trade that is reflected in indicators of change of volume of the rendered transport services is introduced. It was divided the Ukrainian market of transport and logistics services into segments. It was determined the necessity of transformation of institutional support of its functioning in the direction of harmonization with European norms Drivers of European integration for the Ukrainian logistics sector have been identified, which include the rapid development of e-commerce, the use of production technology "just in time" (JIT – just-in-time manufacturing) and migration. The study analyzes the global trends in the development of logistics services and their impact on the principles of marketing strategies in the market of logistics services. It was substantiated the basic theoretical and methodical approaches to the formation of marketing strategies taking into account the specifics of the logistics services market. It was revealed the aspects and directions of realization of strategic development of logistics activity in the context of formation of marketing strategies in the field of logistics.
Introduction
The using of strategic approaches and methods in the practice of managing domestic enterprises is becoming increasingly common. Strategic management, as an integral part of international standards of quality management, corporate, municipal and public administration, internal control, risk management are implemented in enterprises of various forms of ownership and scale of activities operating in various sectors of the economy.
Of particular importance is the need to substantiate ways to implement strategic management methods in the practice of enterprises operating in the market of logistics services. This is due to the high level of uncertainty and peculiarities of the development of the domestic market of logistics services in Ukraine.
ISSN 2534-9228
(2021) VUZF review, 6(4) The situation of general reduction of the internal market, high level of competition, especially on the part of the international logistics companies encourages the enterprises to search for effective methods and management tools which will provide opportunities for further development. Such methods and tools include strategic, in particular the development and implementation of marketing strategy of the enterprise in the market of logistics services.
For example, Oklander M. (2004) was proposed and substantiated the theoretical and applied provisions of logistics mechanisms of adaptation of enterprises to the external environment and introduced the principles and structure of the logistics system of the enterprise. Krykavsky E. pays special attention to the study of problems of optimization of logistics costs and their evaluation system (Krykavsky E., 2004). In addition, the problems of logistics systems of different levels and areas of activity have been studied by such foreign scientists as: Ronald L.Lewis (Ronald L. Lewis, 1993), Michael C.O`Guin (Michael C.O`Guin, 1991), Robert S. Kaplan and H. Thomas Johnson (Robert S. Kaplan and H. Thomas Johnson, 1987).
To justify the need for the formation and implementation of marketing strategy of enterprises operating in the market of logistics services, based on the research of Vyshnevsky O., who is the founder of the general theory of strategy, emphasizing the value of strategic management in any activity. According to him, «strategic management is a powerful tool for overcoming existential fear at the individual level» (Vyshnevsky O., 2018:5). Of course, the presence of the company's strategy in the market provides an opportunity to predict the future state. Having a strategy, an entity with a certain level of probability can predict a change in its state, coordinate their own actions, reactions to the behavior of other entities. The presence of a strategy, guaranteeing to some extent future income and level of well-being, provides an opportunity to solve the «problem of existence through self-realization» (Vyshnevsky O., 2018:5): for a person -in the professional sphere, for an enterprise -in the market.
Thus, the problem of formation of marketing strategies in the activity of logistics services enterprises has been studied by many scientists. However, the rapid development of international trade, the spread of globalization in the field of transport and logistics services have led to increased competition, business consolidation and redistribution of the logistics services market in a global context. The coronavirus pandemic also affected global trends in the market of transport and logistics services and contributed to the intensification of the introduction of marketing tools in this type of market. That is why the issue of forming marketing strategies of enterprises in the market of logistics services in the context of global trends is becoming relevant.
Material and methods
The study uses economic and statistical research methods in the analysis of global trends in the market of logistics services; forecasting methods -when calculating the forecast when calculating the volume of the world market of logistics services; theoretical and methodological approaches in the formation of marketing strategies of enterprises in the market of logistics services; grouping and classification -when substantiating marketing strategies in the market of logistics services.
Results and discussion
Strategic guidelines for the development of the logistics services market depend on general ISSN 2534-9228 (2021) VUZF review, 6(4) global trends. Transport and logistics services are one of the most dynamically developing areas, which has contributed to the spread of globalization and the revival of international trade. According to the statistics of the World Trade Organization (Official website of the World Trade Organization; Report 2019), more than 100 billion tons of cargo and more than 1 trillion passengers move annually worldwide.
According to experts from Armsrtong & Associates (Market overview of transport and logistics services in the Republic of Belarus, 2020), the global logistics market is estimated at $ 9.6 trillion and accounts for about 12% of world GDP, while the global transport market is $ 6.2 trillion, equivalent to 8% of world GDP. At the same time, the share of the transport and logistics sector in world GDP, as well as in the GDP of the EU countries, is about 20%, and in the GDP of the EAEU countries -about 12%. The transport sector also provides about 8% of global employment.
In 2019, the growth rate of world trade decreased slightly and amounted to 2.9%. The main reason for the decline in world trade was the trade war between the United States and China. In 2020, there is a decrease in international trade by about 13-22% due to the coronavirus pandemic. Today's trends show a further decline in the global market for transport and logistics services and its introduction into a state of recession caused by the crisis in the world economy and the escalation of the COVID-19 pandemic. In general, the market for transport and logistics services is one of the most affected sectors of the economy as a result of sanitary and epidemiological measures implemented during the COVID-19 pandemic.
According to McKinsey & Company experts, in 2020 the volume of world trade decreased by about 13-22%, which occurred in the global demand for transport and logistics services. According to M.A. Research, the most significant drop in demand is observed in the field of international freight (air and sea). Revenues in the freight and forwarding segment decreased by almost 15% compared to 2019. Declining demand in the United States, Europe and other countries has contributed to a reduction in container traffic by more than 6% in 2020. The full recovery of the transport and logistics services market is possible not earlier than the second half of 2023 (Market overview of transport and logistics services in the Republic of Belarus, 2020).
In the context of global trends, the market of transport and logistics services in Ukraine also shows a decline, the beginning of which has been observed since 2012 (Fig. 1). Undoubtedly, the COVID-19 pandemic has contributed to an even more significant reduction in freight traffic in the market of transport and logistics services in the Ukrainian economy. Analysis of the share of value added created in the field of transport, warehousing, postal and courier services -in fact, enterprises operating in the logistics market of Ukraine -in the gross domestic product also shows a slow decline (Fig.2). The market of transport and logistics services added about 6% -8% to the GDP of Ukraine for the period under study. However, starting from 2018, the market of transport and logistics services has been stagnant, which is confirmed by the fact that more than 90% of logistics services in Ukraine are in the transportation sector. This trend intensified during the coronary virus pandemic.
The Ukrainian market of transport and logistics services can be divided into the following segments, which are grouped according to the types of services: − transportation and freight forwarding. This market segment is the most developed in Ukraine and is characterized by a high level of competition from international logistics companies. The most well-known providers of this market segment are the following domestic logistics companies: Ukrainian Freight Couriers, Ukrtrans, Transport Logistics Center, Ost-West Express, and Exim Logistics Ukraine. Competitors of domestic companies from among international logistics operators are «Kuehne and Nagel» (Germany), «Raben» (Netherlands), «FM-Logistic» (France) and others; − professional warehousing services. This type of logistics services is provided by companies with localization in Kyiv, Odesa and Kharkiv agglomerations; − express shipping. This type of service is one of the fastest growing. The companies «Nova Poshta», «Mist Express», «Intime», «Delivery» and others are actively developing their own networks, striving to achieve the highest level of coverage of the territory of Ukraine. The reform of the oldest player in this market segment, UkrPoshta, has given impetus to the growth of the national provider of public postal services, increasing its competitiveness in the logistics services market; − integrated logistics solutions and supply chain management -a relatively new market segment, the main consumers of which are industrial enterprises.
In fig. 3 presents the relationship of the main segments of the market of transport and logistics services and the characteristics of the types of providers operating in the modern market of logistics services. This approach demonstrates the current configuration of the market of transport and logistics services, which was determined in the context of the latest trends in world development. fig. 3 classification of types of logistics service providers covers their aggregates, grouped by the degree of integration with the business that is a consumer of services, the specifics of logistics functions, the level of access to international and regional markets.
European integration processes in the economy, which have been taking place in Ukraine for a long time and intensified in connection with the signing of the Association Agreement in 2014 (Association Agreement between Ukraine, of the one part, and the European Union, the European Atomic Energy Community and their Member States, of the other part, 2014), its entry into force in 2017 contribute to the development of trade relations between Ukraine and the European Union. Export opportunities for domestic producers of various types of products are expanding. According to the
Logistics services market is a set of economic and property relations for the organization and purchase and sale of logistics services
The object of the logistics services market are material, information and service flows Logistics services market segments: − transportation and forwarding of goods; − professional warehousing services; − express delivery; − complex logistics solutions; − supply chain management.
Types of logistics providers: − First Party Logistics (1 PL) is an autonomous enterprise logistics.
− Second Party Logistics (2 PL) -these are companies that provide traditional transportation and warehouse management services. The main activities of logistics intermediaries type 2 PL are: transport logistics, freight forwarding, logistics service, warehousing logistics, customs and documentation of goods and information services.
− Third Party Logistics (3PL) provide traditional and additional services. It is a multimodal transportation operator that provides a full range of warehousing, customs, insurance and information services. The main difference between the 3PL logistics operator and the previous ones is the delegation of logistics outsourcing services to other logistics intermediaries, namely 2PL intermediaries.
− Fourth Party Logistics (4PL), which is the integrator of all companies involved in the supply chain. 4PL provider provides tasks related to the planning, management and control of all logistics processes of the client company, taking into account long-term strategic goals. 4PL -providers operate in the global economy, forming global logistics chains.
− Fifth Party Logistics (5PL) -online logistics. This type of logistics operator is inherent in the conditions of operation in the global economy. Fifth Party Logistics (5PL) -it is a system that is the planning, preparation, management and control of all stages of the logistics process using electronic media. This type of intermediaries, unlike the previous ones, does not deal at all with the physical distribution of goods to the final consumer. He organizes all logistics intermediaries within the logistics chain and manages information and material flows.
Ministry of Economic Development, Trade and
Agriculture of Ukraine, in particular, the volumes of exports to the EU of agro-industrial and food, metallurgical and machine-building products, mineral products, wood and paper pulp, as well as chemical and light industry products have increased (AgroPolit.com., 2020).
As a result of the expansion of the free trade zone and the operation of the EU-Ukraine Association Agreement, domestic producers received the right to export their own products to the territory of the European Union. It has been significantly increased the number of companies exporting products to the EU from among food producers -milk and dairy products, meat and meat products, fish, eggs, honey, snails and delicacies (for example, frog legs). At the same time, opportunities for importing products from 28 EU countries to Ukraine are expanding.
This leads to positive expectations for the growth of the market of logistics services in Ukraine and at the same time necessitates the transformation of institutional support for its functioning in the direction of harmonization with European standards. Thus, according to the Ministry of Infrastructure of Ukraine, in order to fulfill the obligations on the implementation of European norms in the field of transport in Ukraine, amendments to 11 basic laws and bylaws in the field of transport must be adopted. This primarily concerns the regulation of rail and water transport, as well as multimodal transport.
Trends in the market of logistics serviceseconomic relations between consumers and suppliers of services for transportation and forwarding of goods, warehousing services and services for integration and management of supply chains (Shimko O.V., 2011: 427) -largely depend on the dynamics of industrial production and the intensity of foreign trade.
The world market of logistics services is in a stage of moderate growth (Fig. 4). According to forecasts, the growth rate by 2024 will be 6.5 -7% annually and its value will reach $ 236 billion in value (Business Technology Center, 2020). It should be noted that the market share of transport and logistics services in the structure of world GDP is 20%, and the share of employees in the logistics services sector in the global context is 8% (Market overview of transport and logistics services in the Republic of Belarus, 2020).
Ukraine's share in the world logistics market is less than 1%. It should be noted that the local market of logistics services in Ukraine is integrated with the international one, but is at the stage of formation and consolidation. As noted by G.M. Filyuk and M.O. Kuznetsova, «significantly inferior to Western countries» both in quality of services and in their complexity (Filyuk G.M., Kuznetsova M.O., 2015:13).
This is due to a number of features of the domestic market of logistics services, among which experts include the following: − absence of 4-PL and 5-PL providers in the domestic market due to the disorder of the current legislation on the activities of virtual organizations; − a short list of logistics services; − availability of small companies that serve local markets with small volumes of freight traffic; − large organizations oriented to international routes; − duration of the export operation (Ukraine -up to 30 days, Shanghai -up to 20 days, Berlin -6 days, Amsterdam -5 days); − difference from the European and world structure of the market of logistics services with the dominance of the transport segment: up to 90% of the market of logistics services in Ukraine is the The vast majority of providers of logistics services, as well as warehousing infrastructure are located in Kyiv, Odesa and Kharkiv agglomerations. Prior to the conflict in the east of the country, the Donetsk agglomeration was considered one of such regional logistics centers, and its logistics capacity is currently lost. There is a low level of satisfaction of demand for warehouses, accompanied by a high level of rental prices with a simultaneous low vacancy in the rental market of warehouse real estate. The reason for this is the institutional disorder of land and lease relations in the country.
When analyzing the trends of the logistics services market, it is advisable to answer the question of change: volumes and structure of transportations -geographical and by types of transport; the level of wages, meeting the need for qualified personnel; capital investment in the logistics segment of services.
The volume of the market of logistics services in Ukraine depends on the trends of business activity in industry, the situation in the field of foreign trade, which is reflected in the indicators of change in the volume of transport services provided (Table 1). Analysis of the dynamics of industrial production confirms the intensification of domestic producers in 2016-2018, which is largely due to the low comparative base of previous years, when domestic production decreased significantly in response to the socio-political crisis that caused the armed conflict in the east. The volumes of foreign trade and transportation changed synchronously during this period. The stagnation of industrial production has led to a slowdown in the development of the logistics services market in Ukraine, which has significant potential for growth.
Analysis of the structure of the transport market (Table 2), which in the structure of the market of logistics services in Ukraine is almost 90%, indicates the leading role of road transport, the share of which in the total volume of transported goods is 68.6-72.7%. The share of rail and sea transport is declining, which, due to their higher (compared to road) efficiency and environmental friendliness, indicates the presence of untapped opportunities for their development.
Analysis of the geographical structure of foreign trade characterizes the areas of transportation, which determine the features of the provision of logistics services. The geographical structure of foreign trade has changed after the signing of the EU-Ukraine Association Agreement.
The leading direction of foreign trade were the CIS countries in 2013. The exports to the CIS countries amounted to almost 35%, and imports -36.3%. The State Statistics Service of Ukraine stopped publishing data on foreign trade activities with the CIS countries in 2019, but according to trends of previous years, we can say that exports to countries in this group are now less than 15% and imports -less than 23%.
Instead, the share of exports / imports to EU countries in total foreign trade turnover increased. In 2013, the share of exports to EU countries was 26,5%, imports -35,1%. In 2019, these indicators were at the level of 41,5% and 41,2%, respectively.
The analysis of labor relations in the field of logistics services indicates that enterprises-providers of logistics services have a significant unmet need for qualified personnel (Fig. 5). Thus, according to the analytical company Pro-Consalting, 35% of logistics providers note the dependence of future development prospects on the provision of highly qualified employees. The most acute problem is in the field of motor transport -most (up to 75%) vacancies of freight forwarders are concentrated here (Portal of top managers of wholesale and retail trade, 2018).
Experts call the migration of qualified personnel from Ukraine to other countries the reasons for the lack of qualified personnel. Motivation -a high level of wages. For example, the average salary of a freight forwarder in Poland is 1.3 thousand euros, and in Ukraine -about 0.4 thousand euros. The shortage of qualified personnel in the field of logistics according to the State Employment Service is about 20 thousand people, and there is a tendency to increase. Investments in technological innovations are a source of quality development of the logistics services market in the world and in Ukraine. The direction of their implementation is the automation of the logistics cycle -from the automation of the processes of physical movement of goods to the intellectualization of management logistics processes.
In the global context, the main trends in the development of the market of transport and logistics services are clearly defined, which are in the following areas: 1) Digitalization. According to experts, digitalization will have the greatest impact on the logistics business in the coming years, creating conditions for increased revenue through enhanced interaction with customers through digital channels and reduce costs in the customer service process. Digitalization expands opportunities for online marketing, reduces business risks through online payments and reduces the negative effects of lack of qualified professionals.
2) Growth in trade between the EU and China.
3) Today, the Chinese leadership is actively cooperating in the framework of the "One Belt -One Road" initiative with other countries, creating international transport corridors. Within the "One Belt -One Road" project, new transport corridors between the EU and China are being designed. Thereby increasing interest in new areas of business that were not previously used due to high logistics costs. 4) Introduction of the latest technologies "Industry 4.0" and software in the field of logistics Automation and implementation of the latest technologies, such as cloud storage and blockchain use, significantly increase production efficiency, preserve supply chains, reduce the risk of errors or fraud. In modern logistics is becoming a popular way to enable the customer in real time tracking the movement of its goods, which increases the transparency of transportation. Automation and robotics in the field of warehousing logistics has led to the emergence of fully automated warehouses. The use of unmanned aerial vehicles in logistics significantly increases the efficiency of the enterprise and reduces costs. According to experts, the introduction of the latest technologies can reduce freight costs by 10-15% (Market overview of transport and logistics services in the Republic of Belarus, 2020).
These trends determine the directions of further development of the market of transport and logistics services and demonstrate the integration of material, information and service flows. This approach changes the configuration of logistics processes, which allows you to optimize them and make them more efficient.
Global trends in logistics development open up promising areas for their implementation in Ukraine based on the use of Blockchain technologies, 3D-visualization, robots and drones, the use of unmanned vehicles and electric vehicles, 3D printing.
In general, we can note the following trends in the market of logistics services in Ukraine in the context of global trends: − active development of segments of small (up to 30 kg) and groupage cargoes; − reduction of full-time transportation; − increase in the share of manufacturers of furniture, auto parts, textiles; − expansion of the network of domestic providers of logistics services, their entry into the international market; − growth of the trucking segment.
Drivers of growth of the logistics services market in Ukraine in the context of European integration are the rapid development of e-commerce (ecommerce), the use of production technology "just in time" (JIT -just-in-time manufacturing) and migration. The largest is currently observed in the postal segment of logistics services.
All the above aspects and the implementation of the development of logistics activities contributes to the active application of marketing strategies in the field of logistics. This approach contributes to the need to substantiate the theoretical and methodological foundations of the formation of marketing strategies of enterprises in the market of logistics services. We will conduct a theoretical substantiation of approaches to the formation of marketing strategies in the logistics sector. It must be substantiated to achieve this goal, the relationship between marketing and logistics.
The use of marketing and logistics in the production process helps to get the maximum benefits. Marketing is aimed at identifying demand, logistics -to ensure the functioning of material, information flow to the final consumer. Thus, both functions are interconnected in a single and indivisible process, which operates on an ongoing basis to ensure the efficient operation of enterprises.
Within individual enterprises, logistics can be separated as one of the parts of marketing. In most cases, the connection between marketing and logistics can be seen in the process of marketing and distribution of goods or at the stage of supply of raw materials and materials for production. In this case, logistics is responsible for the process of physical placement of finished products and is an important element in the formation of distribution channels. In some enterprises, physical distribution and sales become the main part of marketing activities and the formation of trading strategy.
The functional relationship between marketing and logistics is a necessary element in the production process. However, marketing plays a major role as it provides communication with consumers and logistics increases its efficiency. In the logistics process there is communication with customers, as they are supplied with the necessary goods and services.
Substantiating the marketing strategy should determine its characteristics in the implementation of logistics activities. General approaches to the interpretation of marketing strategy are based on its definition as an element of the company's marketing plan, which determines the long-term direction of its development to achieve the maximum level of profitability using available limited resources.
A specific feature of the marketing strategy in logistics is a systematic analysis of the needs of potential consumers in the market. In the course of research, the analysis of sale of production, the analysis of sales on the basis of volume of the order, the analysis of sale to buyers, the analysis of factors of size of sales is carried out.
Analysis of scientific sources on the problems of formation of marketing strategies, allowed to generalize marketing strategies used in the market of logistics services (Fig. 6).
According to G. Kontsevich, the most common types of marketing strategies used in logistics at the enterprise are the following (Kontsevich G.E., 2019: 203): 1. Concentrated growth: transformation of the sales market or its improvement with the use of technology. This strategy involves the fight against competing companies for a significant market share.
2. Integrated growth: the most important task is to increase the company's divisions through the manufacture of new products. The strategy provides for the management of warehouses of enterprises.
3. Diversified growth: the application of this strategy is possible only if the company cannot focus on development in these conditions. Diversified growth implies that the company will be engaged in the production of new goods and services, through which it will be able to enter the trading market.
Figure6 -Marketing strategies in the logistics services market
Source: compiled on the basis (Gurzhiy N.G.,2017;Frolova L.V., 2002;Filyuk G Noted by Frolova L.V. (Frolova L.V., 2002:203), logistics aims to optimize logistics flows that operate in different systems of micro, meta-, meso-or mega levels, and is based on the combination and coherence of flow processes, achieving a synergistic effect, and this goal is achieved by minimizing total costs, maximizing the profits of participants and ensuring the social effect. Therefore, on the basis of this statement we can distinguish two marketing strategies in the field of logistics activities of the enterprise: 1) The strategy aimed at reducing overall logistics costs in the sales system. This strategy is characterized by the fact that the management of sales activities of the enterprise is aimed at optimizing the current costs of logistics. In particular, transport and warehousing, reduction of production and inventories, the formation of optimal cost and distance schemes for the movement of goods, minimizing investment costs for the formation and maintenance of logistics infrastructure.
Unfortunately, the effect of reducing logistics costs is often offset by a decrease in the quality of logistics services in the sales system of the enterprise and leads to losses due to failure of supply schedules, damage to products on the road, reducing the network of warehouses. This leads to the breakdown of partnerships with customers, reduced demand for products, reduced activity and financial losses.
2) The strategy of expanding logistics services includes the following areas: improving the quality of logistics operations, increasing logistics costs, introduction and improvement of customer and customer service and its logistics, the use of logistics tools to support and update the product life cycle, quality management of logistics services based on national and international standards.
In the process of forming and substantiating the marketing strategy for the provision of logistics services should take into account the functional integration of marketing and logistics elements. The success of such a combination depends on the competitive strategy and tactics within the enterprise, in particular, cooperation, supply and distribution, the movement of information flows, warehousing and more. It is also necessary to form external strategies for market coverage at the corporate and functional levels of sales management. These are market coverage strategies that depend on the product specialization and competitive strategy of the manufacturer: 1) Intensive distribution strategy. It consists in the maximum possible use of the external sales network -wholesale warehouses and retail outlets. Due to the wide coverage, the company achieves competitive advantages even when its products are not specific or unique compared to the products of competitors. The disadvantages of this strategy are the high logistics costs for the formation and delivery of small orders, reduced control over the implementation of the marketing strategy of the enterprise, possible problems with maintaining the image and quality of the brand.
2) Selective distribution strategy. According to this strategy, the number of trade intermediaries is low. It is used if the product has special characteristics. Sales costs are reduced by restricting access to the product. Closer cooperation with intermediaries is ensured. The disadvantage of the strategy is the scarcity of expanding market coverage.
3) Exclusive distribution strategy. There is only one retailer (or network) that distributes the manufacturer's product with a commitment not to sell competing brands. Products are aimed at a narrow segment of consumers, usually with specific activities or high sociocultural status. The advantages are keeping the company's image at a high level. In this case, logistics costs are high due to the remoteness of the intermediary or end user.
To ensure competitiveness, which forces organizations and countries to be different and better than competitors in the market, it is necessary to know how to use logistics platforms as differentiating and, consequently, competitive elements. In this sense, the development of logistics platforms must be based on the needs of markets. Therefore, a consumer-based marketing approach is used that improves the competitiveness of the organization, city, region or country by creating added value in the market. According to Kotler F. and Keller K., «the key to achieving organizational goals is the effectiveness of competition by creating, providing and transmitting added value to the target market» (Kotler F. and Keller K.,2016:18). This mission of every organization, and in particular the goal of logistics companies, is to be able to create value for their customers by reducing costs, procedures and time.
Any marketing strategy should be adopted by logistics companies, provided they know who will be provided with their services, because the conditions and requirements of each consumer are usually different. Logistics platforms need to decide how they will cover different levels of target markets through differentiated, concentrated and individual marketing. However, undifferentiated marketing is not considered strategically, because it makes no sense to try to meet the needs of the target market with a single service, when the requirements of members of the logistics chains to be served are too specific.
Differentiated or segmented markets. It is a market coverage strategy by which a logistics company decides to fill different market segments with different services; for example: warehousing, general services, logistics services.
Concentrated or niche markets. This is a market coverage strategy in which a logistics company prefers to cover a significant chpstki of a particular market in exchange for targeting a small share of a large market, for example, devoting itself to servicing only bulk carriers.
Individual markets. It is a market reach strategy in which marketing services and programs are developed based on the needs and preferences of each client; in this case, the logistics company has to develop and adapt the services it provides to the needs of individual demand.
The specificity of the market of logistics services is that marketing plans are developed by enterprises after determining the overall logistics strategy and indicators of capacity and market coverage. Marketing plans, in this case, are mostly aimed at increasing the profitability of the enterprise.
Logistics companies need to identify opportunities to ensure their market presence in the long run by adapting services to selected markets or creating service innovations for different actors in the logistics chain. Having a system of marketing information to achieve prospects is critical. Such a system is designed to facilitate the logistics platform, monitoring the behavior of agents and market factors, in particular, social, economic, technological, cultural, legal, demographic and political (Kontsevich G.E., 2019: 107).
In terms of customer relations, logistics companies will seek to establish long-term relationships in which cost generation is the main axis of interaction with demand and will allow them to clearly and precisely establish the services to be provided according to the needs of the various logistics chains.
Conclusions
Thus, in the conditions of the specified world trends which occur on logistic activity of the enterprises working in the market of logistic services the problem of formation of marketing strategies is actualized. Global trends in the development of logistics processes and the impact on them of digitalization and technology "Industry 4.0" has exacerbated the formation of strategic directions. Moreover, the corona virus pandemic has negatively affected the development of the logistics services market, while e-commerce has revived. In these conditions, the transformation of logistics processes towards the use of modern dropshipping technologies. Which in turn also affects the current configuration of the market of transport and logistics services. The general trend of development of the logistics market in ISSN 2534-9228 (2021) VUZF review, 6(4) the world and in the Ukrainian economy is to reduce the volume of freight transport, increase China's presence and dominance in the transport segment, increase the share of road transport in transport in the EU and Ukraine. | 8,141 | 2021-12-27T00:00:00.000 | [
"Business",
"Economics"
] |
Emotion recognition for human–computer interaction using high-level descriptors
Recent research has focused extensively on employing Deep Learning (DL) techniques, particularly Convolutional Neural Networks (CNN), for Speech Emotion Recognition (SER). This study addresses the burgeoning interest in leveraging DL for SER, specifically focusing on Punjabi language speakers. The paper presents a novel approach to constructing and preprocessing a labeled speech corpus using diverse social media sources. By utilizing spectrograms as the primary feature representation, the proposed algorithm effectively learns discriminative patterns for emotion recognition. The method is evaluated on a custom dataset derived from various Punjabi media sources, including films and web series. Results demonstrate that the proposed approach achieves an accuracy of 69%, surpassing traditional methods like decision trees, Naïve Bayes, and random forests, which achieved accuracies of 49%, 52%, and 61% respectively. Thus, the proposed method improves accuracy in recognizing emotions from Punjabi speech signals.
www.nature.com/scientificreports/India is a multilinguist country where most Indians live in rural areas.It has been determined that with time, various languages have been demolished and others are endangered.In this study, it has been identified that there are 1652 mother tongues in India including 103 foreign languages.As per the Indian constitution, there are 22 major languages in India out of which Punjabi is one of the widely spoken languages 19 The distribution of Punjabi language around the globe for the top three countries are 48.2%,2.8%, and 1.5% in Pakistan, India, and Canada respectively 20 .Additionally, it has been observed that there is limited work available for speech emotion detection using the Punjabi language.Another shortcoming is the non-availability of public datasets in the Punjabi language.To address this issue a novel dataset is created by the researcher.
It is analyzed that there is no prior standardized multimodal emotion dataset, which contains recordings of speech and text of people who speak native languages in the Punjabi Language.Figure 2 shows an analysis of research works done for some of the Indian languages in the last two decades 21 .
The PEMO dataset 22 is created to solve the issue.This dataset consists of web series and movies from YouTube.Further, every stream of the taken utterance is divided into smaller segments.This helps to cover the sample of speech of one speaker with minimum background noise.These utterances are labeled by three annotators on a 5-point scale namely happy, angry, sad, neutral, and none of the mentioned.These utterances were taken from native Punjabi speakers having no hearing loss or mental issues.The final label selected for the utterance depends on the common label used by all annotators.However, the utterances that do not have common labels were removed from the dataset.It has been identified that a neutral label has the maximum number of utterances, whereas a sad label is labeled with the minimum number of utterances.
The role of the PEMO dataset 22 is to recognize emotions effectively.Python coding is used to convert audio signals to spectrograms.Further, these resulting spectrogram signals are input into (CNN) network to train the model.In the end, the trained model is tested on a 20% dataset that helps to recognize human emotions.
Therefore, the main contribution of this paper is speaker-independent emotion recognition which is summarized below.
(1) The emotion-labeled speech corpus is expanded for the Punjabi language by utilizing freely available resources from multiple social media platforms.The objective involves the systematic collection of a diverse and extensive dataset of spoken Punjabi language that is appropriately labeled.The process entails conducting targeted searches and applying filters and keywords to gather relevant speech data.The collected data will then undergo an annotation process to accurately label the expressed emotions in each speech sample, categorizing them into four fields: happy, sad, angry, and neutral.Vol:.(1234567890)Organization of the research article: The article is structured as follows: Sect."Background" provides a comprehensive review of current methodologies in SER, highlighting their advantages and disadvantages.Section "Proposed methodology" presents the proposed method, outlining the approach taken in this research.In Sect."Experiment & results", the experimental data and results are discussed in detail.Finally, Sect."Conclusion and future work" explores potential future research directions that can be pursued based on the findings of this study.
Background
The typical SER consists of two major parts (1) a unit that is processed in nature and retrieves the best feature vectors from signals of voice, and (2) using its feature vectors, the classifier can detect the hidden emotions in speech.The details of feature extraction and categorization methods are provided in this section.The selection of speech vectors is one of the most prevalent issues faced by systems of SER that permit an easy distinction between different emotions.The variations in various speakers, styles of speaking, or speaking rates, as well as different sentences directly impact the extracted features like the energy contour and pitch [23][24][25][26] .Breaking down voice signals into smaller components is one method known as frames.Global features are extracted from complete utterances whereas local features are extracted from each frame thus global features result in smaller dimensionality features, minimizing the amount of computing required.Furthermore, it's possible that a spoken utterance can be associated with multiple emotions; each emotion is associated with distinct frames.Moreover, the process of identifying boundaries between these frames is difficult due to the expression of certain emotions that vary between speakers as well as cultural differences and changes in the environment.Most of the research in the literature was done in a monolingual emotion classification context, with no consideration for cultural differences across speakers.The method was created to extract features that are advanced by using magnitude spectra and then making a comparison with the features that are hand-crafted.The author used a single contextdependent DNN with several voice tasks to alter a variety of Gaussian mixtures 27 .This author devised a system for analyzing the back-and-forth utterance levels of speech and then predicting the emotion of a speech using the results 28 .A few algorithms have been employed to elicit SER 29,30 .However, research has revealed that every classifier is dependent on the domain while comparing in case of accuracy as well as data quality.An aggregated approach that includes many classifiers is also being researched for enhancing SER accuracy 31 .
As technology rapidly adapts to processes from beginning to conclusion for classifying tasks with the use of algorithms based on deep learning.It is becoming increasingly important to research hierarchical systems to conduct SER for exceedingly difficult data sets.The automated process of extraction of discriminative features, which enables effective categorization of many types of data, is one of the strengths of the learning method which is known as end-to-end.The author pioneered learning based on features in SER by employing CNN to learn certain salient that affect the user 32 .They employed publicly available speech databases containing diverse languages.In terms of speaker variation, variation in language, and noise in the environment they were able to achieve high-quality results using learned features in comparison to other known feature representations.There are many approaches for employing CNNs to recognize emotion, but some use spectrograms to identify emotions in speech, which is the first stage in the process of SER.The spectrogram-based SER approaches have included an additional classifier on fully linked layers to increase the processing capability of the model.In this, the researcher contributes the feature block formed from the feature vector's effect salient is further supplied to an SVM classifier to determine the emotion class of a voice utterance 32 .Another point of reason that differentiates our work from previous studies is that we've implemented augmentation along with early stopping of the model to prevent overfitting of the model.This helps in better training of the model as it solves two problems, one is data scarcity, and the other is overfitting of the network model.In comparison to other architectures, the proposed method is less susceptible to overfitting when using limited data for training.Table 2 shows the summary of the work done by various researchers based on the classifiers used.
It has been observed from the literature that only a few datasets are available in Indian Languages.Some of the Indian languages in which the datasets are available are Hindi 8 , and Urdu 10 .There is no 'spontaneous audio dataset' publically available for the Punjabi Language.So, there is a need to develop a labeled Punjabi Speech Emotion Database to train the machine for emotion recognition.
Proposed methodology
The specified system uses a technique (learning) based on features that are driven by a discriminative CNN and spectrograms for detecting the speaker's state related to emotions.In Fig. 3, the flowchart of the proposed methodology is presented.In the beginning, the audio files are considered as input and pre-processing has been done on the raw data files.These data files are taken from different sources.Once the raw data is collected speech enhancement is done by increasing the speech quality, and intensity using the PRAAT tool.In the next step, unwanted parts of the audio clip have been removed, for example, noise, and silence using noise reduction techniques.To categorize audio clips into four classes, segmentation has been done where the audio clip is divided into smaller parts to get the single emotional voice of a single person.Furthermore, for the count balance of four classes, data augmentation is done that removes the biases of the selected categories.The output of the pre-processing is the labeled audio clips contained in the pre-processed database.Moreover, this pre-processed database is converted into spectrograms so that it can be fed into deep-learning models.In addition, the model extracts the spectral features from the spectrogram that will be used for the training of the machine.The essential components of the proposed framework have been detailed in the following sections.
Retrieving spectrograms from voice
A representation of the volume or strength of a signal across an individual waveform is called the spectrogram.By analyzing the strength of energy in a specific region it is also possible to observe the variations in energy over time.Generally, spectrograms can be brought into use for detecting the frequencies of the continuous signal.It's a 2-D graphical representation in which the horizontal line denotes time, and the vertical axis denotes frequency.The strength of any particular frequency component f over the given moment, that is t inside the spoken signal can be described by the darkness or color of the spot S in the spectrogram (t, f). Figure 4. shows the spectrograms of one audio file taken from each emotion category.The spectrogram for angry emotion is shown in Fig. 4a, whereas for happy, neutral, and sad emotions is shown in Fig. 4b, c, d.
Convolutional neural network
Convolutional neural networks (CNNs) are sophisticated models that have led to breakthroughs in the field of image categorization 26,41,42 .By applying the series of filters to the original data of each pixel in the picture, CNNs learn and extract the most important properties.The model then constructs categorization using these properties.Fully connected layers, extract feature that maps while concurrently categorizing the derived features, and pooling layers, which lower the dimensions of feature maps and thus decrease process time.The layers are normally put in a logical order, with the number of convolutional layers employed first, then pooling layers, and lastly fully connected layers.The term "cluster" is most associated with a CNN, which is a hierarchical neural system composed of a succession of convolutional layers and pooling layers that perform feature extraction by layer-by-layer transforming images (including a spectrogram) to a higher abstraction.The first level comprises basic features like edges and raw pixels, whereas the subsequent layers have local discriminative characteristics, and the final dense (fully joined) layer creates an overall representation of the native convolutional features that is later supplied to the machine learning algorithm.Every convolutional kernel produces a feature map with activation values corresponding to the existence of certain properties.Many feature maps are created at each layer of convolution.The www.nature.com/scientificreports/pooling layer is used to avoid overfitting and reduce computations inside the network.Max pooling is the most common pooling approach that keeps the most valuable value while discarding all other values found in the area.Fully linked layers utilize larger filters to address more intricate characteristics of input layers.The effectiveness of these models is determined by the choice of proper kernel size and shapes, as well as neighborhood pooling.
Proposed model architecture
As shown in Fig. 5, the speech signals that act as input to the model are collected from the various multimedia sites.These signals are processed and segmented using the PRAAT software.The segmentation of the speech signals is done in such a way that each utterance or file consists of a single emotion by a single person.The person may be of any gender.The files taken from various movies, and plays, the utterances taken consist of some background noise.These utterances are labeled with 4 emotions, (1) happy, (2) sad, (3) angry, and (4) neutral.
Out of 70% of selected audio files, 50% are selected from neutral having the highest label whereas 10% is the lowest for sad audio.However, happy, and angry contain 18% and 22% of the dataset respectively.The collected dataset is very unbalanced as several utterances for one emotion are more than another emotion category.To balance the dataset for proper training of the model data augmentation technique is applied.The time masking technique is used to augment the speech signals.
The suggested CNN architecture consists of an input layer, 4 convolutional layers, 1 pooling layer, 3 dense layers, and a dropout layer as shown in the CNN architecture Fig. 6.CNN receives spectrograms that have been derived from emotive sounds.These spectrograms are normalized and only fed to the network.The spectrograms were 16 × 224 pixels in size.After that, they were resized to 224 × 224 pixels for use in CNN.Convolutional kernels are widely used as the input in the early layer levels for extracting feature maps.
The motive is to train and verify models.These spectrograms were split in the ratio of 80:20, whereas 80% was used for training and 20% for confirming the model's performance.In this paper, a fivefold cross-validation procedure was adopted.The training accuracy against the number of epochs for the fivefold validation process is shown in Fig. 7.It depicts that with the increase in epochs, the training accuracy also increases.In addition, when there are a smaller number of epochs the accuracy for the training fold 5 is also less but as the number of epochs increases, the accuracy for the training fold 5 is highest.
In this, the author takes the best model which has the highest accuracy from the 5 folds.The accuracies for all the folds in the fivefold validation are displayed in Fig. 8.By that, the training accuracy is more than the validation accuracy in all five folds.In every fold, the test size is 20 percent of the total dataset, and the rest is used for training and validation purposes.Every time a random set of features is selected and passed to the model for training and validation purposes.The best model with the highest training accuracy (69%) was achieved using 50 Epochs and early stopping criteria so that the model cannot be overtrained.
Experiment & results
The experiment aims to develop a Speech Emotion Recognition (SER) system for the Punjabi language using deep Convolutional Neural Networks (CNNs).First, a dataset was meticulously assembled by manually collecting audio files from multimedia platforms like YouTube.These files, encompassing novel utterances and short clips, were then expertly labeled by Punjabi language experts, ensuring high-quality annotations.Specifically, only those audio files receiving unanimous agreement on emotion labels were included in the dataset, resulting in a selection of 9000 files out of the initial 13,000 sent for expert evaluation.
The distribution of emotions within the dataset was carefully considered, with 50% of the files selected for neutral emotions, 10% for sad, 18% for happy, and 22% for angry before augmentation.After applying augmentation techniques, which expanded the dataset by 70%, the emotional distribution was adjusted to 29.41% for neutral emotions, 23.5% for sad, 21.1% for happy, and 25.88% for angry.Spectrograms, providing visual representations of the frequency spectrum variations over time, were extracted for all utterances using PRAAT software.Each spectrogram corresponded to a single emotion, forming the input data for the subsequent CNN model.
The CNN model was then trained using the spectrogram data and evaluated for various performance metrics, including precision, recall, F1-score, and accuracy.The results, illustrated in Figs. 10, 11, and 12, demonstrate the effectiveness of the proposed algorithm.Notably, the CNN-based approach achieved an accuracy of 69%, outperforming traditional machine learning algorithms such as random forest (61%), decision tree (52%), and Naïve Bayes (49%) as described in Table 3.It has been observed that despite using a deep learning model the achieved accuracy is 69% because of the language model dependency.Punjabi speech poses unique challenges for emotion recognition compared to other languages such as regional accents, dialects, or cultural nuances in expressing emotions could impact the model's performance.
Overall, the experiment successfully demonstrates the efficacy of utilizing deep CNNs for SER in the Punjabi language, showcasing superior performance compared to conventional machine learning techniques.These findings hold promise for applications in sentiment analysis, customer service, and human-computer interaction within Punjabi-speaking communities.
Conclusion and future work
Emotions play an important role in the day-to-day life of humans.To make human-computer interaction a natural process, automatic speech-emotion recognition is very important.Emotions can be recognized from various modalities like speech, text, facial expressions, etc.However, speech is the most common way through which humans communicate.Speech modality is used in the proposed work to recognize emotions.Moreover, the major difficulty in recognizing the features is in classifying the emotions.There are two types of features, namely, low-level features, and high-level features.In the proposed methodology, high-level features are used whereas previous studies worked on low-level features.The CNN network with high-level features is used to get better performance.These features are extracted from the novel dataset created manually using multimedia and sites in the Punjabi language.This dataset contains audio files and is converted into spectrograms.Afterward, the spectrograms are fed into the CNN model, to classify the audio files into four categories namely, sad, happy, angry, and neutral.
To evaluate the proposed model, four parameters namely, precision, recall, F1-score, and accuracy have been selected.It has been concluded that the proposed methodology outperforms decision trees, naive Bayes, and random forests.This is a novel work as very little research has been conducted in this area using the Punjabi language.
Therefore, this research will open new avenues for upcoming researchers.For future work, this area can be expanded in three ways.
• In the proposed work, 9000 audio files were collected to recognize the emotions in Punjabi language.To have a high impact, there is a need to create a large dataset by increasing the count of audio files.This will help to get improved and optimized results.• Only one modality has been implemented in the proposed work.The work can be extended using text, facial expressions, and electroencephalogram (EEG).• In the proposed work, the implementation has been done in one language.In the future, this can be compared with more than one language to check the dependency of one language on another.This will increase the effectiveness of human-machine interaction.
Figure 1 .
Figure 1.(a) Traditional Approach for SER (b) DNN Based Approach for SER.
( 2 )Figure 2 .
Figure 2. Analysis of experiments done on Indian languages for speech emotion recognition in the last two decades.
Figure 3 .
Figure 3. Flowchart for Speech Emotion recognition system.
Table 1 .
Overview of various datasets.
Urdu, Sindhi 1435 utterances Happiness, anger, sadness, disgust, surprise, sarcasm, neutral Acted Audio For Research use available on Request Indian Institute of Technology Kharagpur Simulated Emotion Hindi Speech Corpus (IITKGP-SEHSC) 8 Hindi 12,000 utterances by 10 actors Happy, anger, fear, disgust, surprise, sad, sarcastic, neutral Acted Audio For Research use available on Request robust to noise and variations in recording conditions, allowing for more accurate and reliable feature extraction.Moreover, spectrograms offer valuable visualizations that aid in the interpretation and analysis of emotional cues.By leveraging spectrograms, researchers can develop more effective and robust emotion recognition systems that are capable of accurately recognizing emotions from speech signals in various real-world scenarios.
Table 2 .
Overview of literature review based on classifiers.
Table 3 .
Comparison of three classifiers with the proposed model in terms of accuracy. | 4,670.4 | 2024-05-27T00:00:00.000 | [
"Computer Science",
"Linguistics"
] |
Particle motion: the missing link in underwater acoustic ecology
Sound waves in water have both a pressure and a particle‐motion component, yet few studies of underwater acoustic ecology have measured the particle‐motion component of sound. While mammal hearing is based on detection of sound pressure, fish and invertebrates (i.e. most aquatic animals) primarily sense sound using particle motion. Particle motion can be calculated indirectly from sound pressure measurements under certain conditions, but these conditions are rarely met in the shelf‐sea and shallow‐water habitats that most aquatic organisms inhabit. Direct measurements of particle motion have been hampered by the availability of instrumentation and a lack of guidance on data analysis methods. Here, we provide an introduction to the topic of underwater particle motion, including the physics and physiology of particle‐motion reception. We include a simple computer program for users to determine whether they are working in conditions where measurement of particle motion may be relevant. We discuss instruments that can be used to measure particle motion and the types of analysis appropriate for data collected. A supplemental tutorial and template computer code in matlab will allow users to analyse impulsive, continuous and fluctuating sounds from both pressure and particle‐motion recordings. A growing body of research is investigating the role of sound in the functioning of aquatic ecosystems, and the ways in which sound influences animal behaviour, physiology and development. This work has particular urgency for policymakers and environmental managers, who have a responsibility to assess and mitigate the risks posed by rising levels of anthropogenic noise in aquatic ecosystems. As this paper makes clear, because many aquatic animals senses sound using particle motion, this component of the sound field must be addressed if acoustic habitats are to be managed effectively.
Introduction
Auditory cues are particularly useful in aquatic habitats, as sound travels relatively far and relatively fast in water (Ainslie 2010). For this reason, a large number of aquatic organisms have evolved ways of detecting and producing sound (Song, Collin & Popper 2015) and aquatic bioacoustics has been an active field of study for many decades (Au & Hastings 2008). Audiometric studies have long recognized the significance of particle-motion detection in fishes and invertebrates (e.g. Chapman & Hawkins 1973;Fay 1984;Popper, Salmon & Horch 2001), yet investigations of acoustic phenomena in the ecology of aquatic systems have previously focused on only one component of the sound field: sound pressure (see for exception Banner 1968;Sigray & Andersson 2011).
From an ecological perspective, there are several key reasons why we need to better understand the particle-motion component of underwater sound. First, while aquatic mammals use sound pressure, all fish and many invertebrates (i.e. most acoustically receptive aquatic organisms) detect and use the particle-motion component of sound (Popper, Salmon & Horch 2001;Bleckmann 2004;Kaifu, Akamatsu & Segawa 2008). The role that particle motion plays in the biology and ecology of these species is largely unknown. (Particle motion is also important in terrestrial bioacoustics for invertebrates; however, its measurement is better established (see Morley, legislation surrounding the impacts of anthropogenic noise on fishes and invertebrates, until now the focus has been on sound pressure, even though many (if not most) of these species cannot directly sense this component of sound.
In some cases, particle motion can be calculated from sound pressure. However, sound pressure and particle motion are directly related only under specific conditions, which are not generally met in the shelf seas and shallow waters that most aquatic life inhabit. To characterize particle motion in these habitats, it is therefore necessary to make measurements using a particle-motion sensor. Instruments to measure particle motion have only recently become commercially available, and their use in tank experiments and field studies is still in its infancy (Popper et al. 2014;Merchant et al. 2015;Martin et al. 2016). As the uptake of these novel sensor technologies gathers pace, there is a growing need for user-friendly guidance on the methods, instrumentation, and underlying physics of particle-motion measurement to ensure broad understanding ofand participation inthis research effort. The relevant sectors extend from researchers to consultants to environmental managers, who are beginning to address the rising influence of anthropogenic noise on aquatic ecosystems. It is therefore important that the significance of particle-motion measurement is clearly articulated for non-specialists.
Here, we provide a brief introduction to underwater particle motion in an ecological context. We begin with an accessible overview of the physics of particle motion and the detection of particle motion by fishes and invertebrates. To help inform new studies, we offer practical guidance on instrumentation and data analysis techniques for particlemotion measurement, as well as software in MATLAB (Mathworks, Natick, MA, USA) to analyse particle-motion data, including tutorial materials and example data. Finally, we identify several key knowledge gaps related to particlemotion in aquatic environments, which warrant further research.
Physics of particle motion
Sound is propagated vibratory energy (Gans 1992). Put simply, a sound wave propagates because particles next to a vibrating source are moved backwards and forwards in an oscillatory motion; these particles then move the particles next to them and so on, resulting in the propagation of vibratory energy. The particles of the medium do not travel with the propagating sound wave, but transmit the oscillatory motion to their neighbours. This particle motion contains information about the direction of the propagating wave. Particle motion can be expressed as displacement (m), velocity (m s À1 ) or acceleration (m s À2 ). These three quantities are directly related in a frequency-dependent way (see Box 1).
Sound pressure is the variation in hydrostatic pressure caused by the compression and rarefaction of particles as the sound wave propagates. If a sound can be assumed to be propagating as a plane wave (see below), then there is a simple relationship between sound pressure and particle velocity (Box 2). Particle acceleration and particle displacement can then be derived from the particle velocity if required (Box 1). A plane wave occurs where the wavefront can be considered flat: this is generally far from the sound source and far from boundaries where reflections could influence the shape of the wavefront (the definition of 'far' here depends on the wavelength of sound and the dimensions of the source; see Appendix S1). These conditions are typically not met in coastal and shelf-sea habitats at the low acoustic frequencies commonly used by fish and invertebrates, meaning there is not a reliable way to derive particle motion from sound pressure measurements. Although the relationship between particle motion and sound pressure can, in theory, be derived for more complicated wavefronts (e.g. by assuming an idealized geome-Box 1. Relationships between particle velocity, particle acceleration and particle displacement Particle velocity, acceleration and displacement are always linked by the following equations: Velocity and acceleration: where a = acceleration (m s À2 ), u = particle velocity (m s À1 ) and 2pf = angular frequency (f = frequency in Hz). Velocity and displacement: where ξ = displacement (m), u = particle velocity (m s À1 ) and 2pf = angular frequency (f = frequency in Hz).
Box 2. Calculating particle motion from sound pressure In a plane wave, sound pressure is directly related to particle velocity: u ¼ P qc 0 Eqn 2.1 where u = particle velocity (m s À1 ), P is acoustic pressure (Pa), q = density of the water (kg m À3 ) and c = sound speed (m s À1 ) (qc is also known as characteristic acoustic impedance). This is only applicable in a plane wave or where a plane wave is a suitable approximation (i.e. in the free field). Particle acceleration or displacement can be calculated from velocity using equations in Box 1.
In the near field of a point source, far from any boundaries that could lead to the wave not propagating due to the cut-off frequency, or reflections that could interfere with the propagating wave, the following equation can be used to calculate particle displacement from sound pressure: where ξ = displacement (m), p = pressure (Pa), f = frequency, q = density of the water (kg m À3 ), c = sound speed (m s À1 ) and r = distance to sound source (m). Particle acceleration or velocity can be calculated from displacement using equations in Box 1 (Chapman & Hawkins 1973).
try such as a spherical wavefront), for realistic scenarios direct measurement of particle motion is the only reliable method.
Note that in addition to the distance to the sound source and the proximity of boundaries, whether the plane-wave approximation is valid can be affected by other factors, such as: the size of the sound source; the cut-off frequency, which is related to the water depth (see Fig. 1); the wavelength of sound; and variations in the sound-propagation environment (e.g. sound speed variations in the water column and seabed, determined by temperature, density and salinity). As a rule of thumb, particle-motion measurement should be considered at depths of less than 100 m and frequencies less than 1 kHz, and at distances from the source of less than the Fraunhofer distance (distance where the near field transitions to the far field) or one wavelength (Fig. 2), whichever is greater. The calculator provided in the 'tools' section of Appendix S1 (with instructions in user guide Appendix S1) allows a user to enter frequencies, depths and information about the sound-propagation environment and provides advice about whether particlemotion measurement is necessary, along with a tool for predicting particle-motion levels when measurement is not necessary. In tank measurements, near-field effects, resonant frequencies and reflections will lead to a complex relationship between particle motion and pressure; thus, particle motion should always be measured directly.
Hearing of particle motion
Hearing is the detection of propagated vibratory energy by the ear (Gans 1992). All hearing is based on mechanosensory hair cells transducing vibrations into electrical signals. Particle oscillations can either be detected directly by hair cells that protrude into the medium (air or water), or by the relative motion between the body and a solid structure in the ear to which the hair cells are attached (Gans 1992). The bodies of fish and aquatic invertebrates, being composed mainly of water, are coupled directly to the medium (water). Thus, the whole body vibrates as a sound wave passes through. Denser calcareous structures in the inner ears, such as the otoliths and statocysts, lag behind the vibration of the body due to their impedance difference (being denser). Chordontal organs are also found in the legs of some crabs and allow detection of sounds propagating in the substrate by sensing leg movement (Popper, Salmon & Horch 2001). Hearing in fish and invertebrates seems to be focused in the lower frequencies; although some fish can hear up to over 100 kHz, most have a peak sensitivity under 1Á5 kHz (Popper & Fay 1993;Popper & Hastings 2009;Fay & Popper 2012). The hearing of particle motion in fishes is relatively well understood (see e.g. Fay 1984;Radford et al. 2012), but until recently, the availability of instrumentation for use in the field has hindered our understanding of the ecology of particle motion underwater.
Instrumentation
Although measuring particle motion has been possible for decades, instruments to record particle motion have only recently become available commercially. There are three main methods of measuring particle motion underwater: (i) calculating the pressure gradient between two hydrophones; (ii) measuring with velocity sensors; and (iii) measuring with accelerometers (Martin et al. 2016). To measure particle motion using pressure gradients, it is necessary to calibrate the phase response of the hydrophones accurately . While this method has been applied successfully (e.g. Zeddies et al. 2010), it requires costly hydrophones, Fig. 1. Cut-off frequency as a function of depth, calculated for a coarse silt bottom with a sound speed of 1593 ms À1 and density of 1693 kg m À3 , assuming that sound speed in water is 1500 ms À1 and water density is 1026 kg m À3 . Sounds below the cut-off frequency will not propagate as a plane wave and particle motion cannot be calculated from pressure; thus, it should be measured. Cut-off frequency (f c ) is calculated using the equation: f c = (pÀq sed /q w )/(2psinw c ) (c/H) where q sed = sediment density, q w = water density, w c = arccos (c/c sed ), c = sound speed in water, c sed = speed of sound in the sediment and H = water depth (Ainslie 2010). which can make highly accurate phase measurements, in addition to the necessary expertise for phase calibration. Velocity sensors (geophones) typically have a very low resonance and are only useful up to a few tens of Hertz. While geophones make better sensors for seismic measurements, accelerometers are more appropriate for acoustic measurements. As frequency increases, acceleration magnitude increases in relation to velocity magnitude, meaning the signal-to-noise ratio is better with an acceleration-based sensor. Given the limitations of the geophone and pressure-gradient approaches, the accelerometer will normally be the best option for particle-motion measurements in the frequency ranges relevant to fishes and invertebrates.
Accelerometers work in a similar fashion to fish ears: they measure the relative motion between the body of the device and a denser structure within. Thus, the coupling between the device and the water must be understood for accurate measurements to be made. Ideally, the accelerometer should be neutrally buoyant, meaning that it behaves in the same way as the surrounding water (e.g. Leslie, Kendall & Jones 1956). However, neutrally buoyant devices can be difficult to position and orientate as they drift with water movement. Negatively or positively buoyant devices are more practical as they can be suspended from the surface, the seabed, or some other platform. The effect of gravity can then be filtered out as part of the instrument calibration, although there may still be some effect on the vertical axis (Sigray & Andersson 2011), which needs testing.
The accelerometer functions by transducing changes in proper acceleration ('g-force', i.e. acceleration relative to free fall) in the x, y and z directions into current fluctuations, which are converted to voltages before being recorded by a digital device. The digital recorder must also be calibrated. This can be carried out by recording a signal such as a sine wave (or 'pure tone'), which has a known voltage. The recorded voltage is then compared with the known voltage to establish the effect of the device on the voltage.
Step-by-step instructions for calibrating recorders can be found in Appendix S1 (note that the same method can be used for recorders that are used with hydrophones or microphones). Manufacturers of recorders should provide information on the bandwidth over which a recorder has a flat frequency response. This is the range that a calibration of a single tone will be valid, provided the tone lies within this bandwidth. Alternatively, a frequency-dependent calibration can be carried out by measuring sine waves at several frequencies within the range of interest. It is advisable to calibrate recorders regularly (e.g. once per field season or year), as slight changes can occur with age, climate or travel. It is also advisable to measure the noise floor of the instrument (the selfnoise generated when no sound is present, e.g. in an acoustically isolated chamber) to assess whether measured particle motion levels are due to instrument self-noise.
Data analysis
There are no current standard methods for analysing particlemotion data. We provide a user-friendly tutorial (Appendix S1) and analysis programme (Appendix S2) for each of the steps needed to analyse data recorded from triaxial accelerometers or particle velocity sensors. Here, we present a non-technical outline of the analyses appropriate to recordings of different sound types.
When making recordings from an accelerometer, digitally recorded voltage fluctuations represent changes in particle acceleration that occur as a result of the particle motion in a sound wave. A plot of these fluctuations is called a 'waveform'; values exceed 0 when the wave is 'pushing' away from the source (when the phase of the wave is between 0 and 180°) and are below 0 when the wave is 'pulling' towards the source (when the phase of the wave is between 180 and 360°) (see Fig. 3). Using calibration information, these voltage fluctuations can be converted back to represent particle acceleration. Various analyses can then be applied to waveforms to quantify the sounds they represent, thus allowing us to summarize and compare sounds.
Impulsive and continuous sounds are typically quantified in different ways (Hawkins, Pembroke & Popper 2015). For impulsive sounds, the peak or peak-to-peak amplitude, rise time, crest factor and sound exposure level (SEL) are appropriate measures. For continuous sounds (or sounds that are longer lasting and thus better summarized using approximations to continuous sounds), it is more useful to average amplitudes over time. The simple mean level from the waveform would result in 0; thus, the root mean squared (RMS) is used. Fig. 3. Schematic of a sine wave illustrating phase, wavelength and peak-peak amplitude. Time is on the x-axis. The y-axis could apply to pressure (for sound pressure levels), particle velocity, particle acceleration, or particle position in space (for particle displacement), or voltage (the language of instruments that measure any of the above).
One way to assess the variability in sound over time is to measure consistency; the amount of time that the RMS exceeds a predefined sound level (Gill et al. 2015).
Impulsive sounds can have enough energy that they cause physical injury such as barotrauma in fish (Halvorsen et al. 2012), although this is not always the case (Kane et al. 2010). Sound energy from outside the hearing range of the animal concerned can also contribute to injury. For this reason, energy at all frequencies measured is usually included in impulse measurements when impulses may be loud enough to cause injury. It is thus important to consider the frequency response of equipment used to measure impulses, because conclusions could be compromised if recording equipment does not have a flat frequency response across the range of frequencies encompassing the peak frequencies of the pulse (Merchant et al. 2015).
For sounds that do not have enough energy to cause physical injury, the hearing range of the species of interest affects the frequencies of recorded sounds that are relevant. If the auditory sensitivity of the species of interest is known (rare, even in the pressure domain, but see Casper & Mann 2007;Radford et al. 2012 for exceptions), frequencies outside the range of hearing can be filtered out before calculating impulse metrics or RMS levels. Another useful way to account for the fact that different animals have different auditory abilities is to look at the energy present across the frequency spectrum, for example, at 1 Hz resolution. This information can either be plotted over time in a 3-D spectrogram (Fig. 4), where amplitude is coded by colour, or averaged over time by RMS and plotted on a 2-D power spectral density plot (PSD, Fig. 5). Variability in sound levels over time can be represented on a PSD by percentiles or 'exceedance levels' in addition to the mean.
There are currently no internationally agreed standard units for particle-motion measurement. Here, we use the following units in lieu of such standards (M. Ainslie, pers. comm.): displacement (dB re 1 pm), velocity (dB re 1 nm s À1 ), acceleration (dB re 1 lm s À2 ). From a technical viewpoint, velocity, acceleration and displacement are equally valid representa-tions. All three can be found in the literature (e.g. Banner 1968;Fay & Popper 1974;Radford et al. 2012). We consider that the acceleration is the most relevant, as it is closest to the way that fish and invertebrate auditory systems function (Au & Hastings 2008;Mooney et al. 2010). The analyses outlined above can all be carried out using the software provided in Appendix S2.
Discussion
It has been known for decades that fishes and invertebrates hear particle motion (e.g. Cahn, Siler & Wodinsky 1969). However, although many papers written about sound and fishes and/or invertebrates have acknowledged the importance of particle motion (e.g. Wale, Simpson & Radford 2013;Kunc et al. 2014;Neo et al. 2015;Simpson, Purser & Radford 2015), very few have reported particle motion measurements, particularly in field studies, but see for exceptions (Chapman & Hawkins 1973;Nedelec et al. 2014Nedelec et al. , 2015. Published examples of measurements of ambient underwater particle motion are also rare (see Banner 1968;Lugli & Fine 2007 for exceptions).
The major obstacle to scientific progress in this area has been the availability of appropriate equipment and the expertise to apply it in laboratory and field studies. Here, we have highlighted the recent availability of commercial instruments and their potential to make particle-motion measurement more accessible to researchers. We are optimistic that the analysis tools provided in the supporting information will encourage others to participate in this research effort. We have laid out some priorities for particle-motion measurements in Box 3. particle-motion measurement may play a role in answering important biological and ecological questions relating to fishes and invertebrates. From a methodological perspective, there are several related topics that warrant further attention. Deviations between sound pressure and particle motion can be high in the near field (near sound sources), meaning sound cues such as vocalizations are likely to be detectable at different ranges via particle motion compared with sound pressure. This is also the case for anthropogenic noise sources, such as pile driving and shipping, which may have near-field effects on fishes and invertebrates that scale with particle motion rather than sound pressure. Methods to measure and model the particlemotion field at close ranges are needed to understand better the behavioural and evolutionary implications for acoustic communication, and the potential effects of noise on aquatic fauna. A related subject is the role of directionality in these effects: sound pressure signals do not contain directional information, whereas particle motion is inherently directional, which gives information about source direction. To what extent this information is used by fish and invertebrates, and by what mechanism these animals resolve the 180°ambiguity in source direction are as yet uncertain (Bleckmann 2004). Finally, there is the inclusion of particle motion in remote sensing and modelling of acoustic habitats. Measurements of particle motion could improve eco-hydroacoustic models for environmental impact assessment where fish and invertebrates may be affected by anthropogenic noise (e.g. Rossington et al. 2013;Bruintjes et al. 2014). More generally, the use of remote sensing to monitor and model acoustic habitats is a growing area in relation to sound pressure (Gill et al. 2015;Merchant et al. 2015), and the extension of these techniques to include the particlemotion component of sound would further improve our understanding of natural and human-influenced soundscapes and their interactions with aquatic ecosystems.
Box 3. Priorities for particle motion measurements 1) Comparison of different suspension methods on calibration of accelerometers constrained in the vertical axis.
2) Comparisons between measured sound pressure levels, sound velocity levels, sound acceleration levels and modelled sound velocity and acceleration levels. With varying i Source type ii Source distance iii Water depth iv Bottom type 3) Comparisons of effects of sounds on fish where the pressure is maintained constant, but particle-motion levels are varied (can be achieved by adjusting speaker volume and distance). Conduct with i Species that cannot detect pressure (i.e. do not possess a swim bladder) ii Species that can detect pressure (i.e. possess a swim bladder) 4) Maps of areas where models cannot predict particle motion from pressure. Overlay with species presence or biomass. 5) Mechanisms used for sound source localization i Are fish able to localize sound sources in the far field?
ii How do invertebrates localize sound sources? 6) Does particle motion allow a release from masking for nearby signals in distant noise? This information should be incorporated into models predicting impacts of anthropogenic noise, which are thus far only based on pressure measurements. 7) Effect of accelerometer size on particle-motion measurements in small tanks. | 5,560.8 | 2016-07-01T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Trimethyl 2,2′,2′′-[1,3,5-triazine-2,4,6-triyltris(azanediyl)]triacetate
The title compound, C12H18N6O6, was synthesized via nucleophilic substitution by reacting 2,4,6-trichloro-1,3,5-triazine with glycine methyl ester hydrochloride in reflux (dried toluene) under anhydrous atmosphere. Individual molecules self-assemble via strong N—H⋯O hydrogen bonds into supramolecular double tapes running parallel to the [010] crystallographic direction. The close packing of supramolecular tapes is mediated by geometrical reasons in tandem with a number of weaker N—H⋯O and C—H⋯N hydrogen-bonding interactions.
The title compound, C 12 H 18 N 6 O 6 , was synthesized via nucleophilic substitution by reacting 2,4,6-trichloro-1,3,5triazine with glycine methyl ester hydrochloride in reflux (dried toluene) under anhydrous atmosphere. Individual molecules self-assemble via strong N-HÁ Á ÁO hydrogen bonds into supramolecular double tapes running parallel to the [010] crystallographic direction. The close packing of supramolecular tapes is mediated by geometrical reasons in tandem with a number of weaker N-HÁ Á ÁO and C-HÁ Á ÁN hydrogenbonding interactions.
Related literature
For background to nucleophilic reactions of 1,3,5-triazine, see: Blotny (2006); Giacomelli et al. (2004). For coordination polymers based on N,N 0 ,N 00 -1,3,5-triazine-2,4,6-triyltrisglycine, see: Wang et al. (2007a,b,c). For previous work from our research group on the synthesis of derivatives of 2,4,6trichloro-1,3,5-triazine from reactions with glycine methyl ester hydrochloride, see: Vilela et al. (2009a,b Table 1 Hydrogen-bond geometry (Å , ). Comment 2,4,6-Trichloro-1,3,5-triazine is a versatile organic molecule which can be used for the design and construction of larger entities because the three chlorine atoms are prone to nucleophilic substitution by several functional groups to form amides, nitriles and carboxylic acids, among several others (Blotny, 2006;Giacomelli et al., 2004). Resulting compounds exhibit specific physico-chemical properties which render them of potential academic and industrial interest (e. g., in the textile and pharmaceutical industries). Following our interest on crystal engineering of functional solids we have been using 2,4,6-trichloro-1,3,5-triazine as a molecular canvas for the design and synthesis of novel multipodal organic ligands. For instance, we have recently reported the synthesis and structural characterization of the monosubstituted form of the title compound: methyl 2-(4,6-dichloro-1,3,5-triazin-2-ylamino)acetate (Vilela et al., 2009a). Following the same reaction procedure we were able to isolate the title compound (the trisubstituted derivative) as a pure phase. Noteworthy, the title molecule can be a precursor of N,N',N''-1,3,5-triazine-2,4,6-triyltrisglycine which has been used in the construction of a number of transition metal coordination polymers (Wang et al., 2007a,b,c).
The complete nucleophilic substitution of the chlorine atoms of the chlorotriazine ring by methyl glycinate (Vilela et al., 2009b) led to the isolation in the solid state of the title compound (see Scheme). This novel compound crystallizes in the monoclinic centrosymmetric C2/c space group with one whole molecular unit composing the asymmetric unit as represented in Figure 1. The presence of three pendant substituent groups imposes significant steric impediment around the aromatic ring, ultimately preventing the existence of onset π-π stacking interactions as reported in the crystal packing of the monosubstituted analogue compound (Vilela et al., 2009a). In addition, the spatial arrangement of the pendant groups promotes a minimization of the overall steric repulsion: adjacent pendant moieties are either pointing toward different sides of the ring or, when located on the same side, the N-C bond rotates so the groups are as far away as possible from each other ( Figure 1).
The N-H groups are hydrogen-bonded to two carbonyl groups from adjacent molecular units. O4 acts as a double acceptor of two strong (d D···A in the 2.91-2.99 Å range) and highly directional [<(DHA) angle in the 168-172° range] N-H···O hydrogen bonding interactions ( Figure 2 and Table 1). These interactions lead to the formation of a double tape of molecular units running along the [010] crystallographic direction. As represented in Figure 3, the pendant groups point outwards of the double tape, thus allowing for an effective close packing of tapes in the crystal structure. Besides these pure geometrical reasons, the N6-H6 moieties located in the periphery establish physical connections between adjacent supramolecular tapes via a weaker N-H···O hydrogen bond (not shown; see Table 1 for geometrical details). It is also worth to mention that the presence of several crystallographically independent -CH 2 -and terminal -CH 3 groups in close mmol; Sigma-Aldrich, >99.0%) were added at 273 K to a solution of 2,4,6-trichloro-1,3,5-triazine (100 mg, 0.542 mmol; Sigma-Aldrich, >98,0%) in dried toluene (ca 5 ml). The reaction mixture was kept under magnetic stirring and slowly heated to reflux under anhydrous atmosphere. The progress of the reaction was monitored by TLC and stopped after 24 h. The reaction mixture was then separated by flash column chromatography using as eluent a gradient (from 0 to 5%) of methanol in dichloromethane. The third isolated fraction was identified as the title compound (27% yield). Single crystals suitable for X-ray analysis were isolated from recrystallization of the crude product from a solution of dichloromethane: methanol (ca 1: 1). All employed solvents were of analytical grade and purchased from commercial sources.
Refinement
Hydrogen atoms bound to carbon were located at their idealized positions and were included in the final structural model in riding-motion approximation with C-H distances of 0.99 Å (-CH 2 -groups) or 0.98 Å (terminal -CH 3 groups). The isotropic thermal displacement parameters for these atoms were fixed at 1.2 (-CH 2 -) or 1.5 (-CH 3 moieties) times U eq of the carbon atom to which they are attached.
All hydrogen atoms associated with the NH moieties were directly located from difference Fourier maps and included in the structure with the N-H distances restrained to 0.95 (1) Å and with U iso fixed at 1.5 times U eq of the N atom to which they are attached. Fig. 1. Schematic representation of the molecular unit of the title compound. Non-hydrogen atoms are represented as thermal ellipsoids drawn at the 50% probability level and hydrogen atoms as small spheres with arbitrary radii. The atomic labeling scheme is provided for all non-hydrogen atoms. Fig. 2. Fragment of the crystal structure emphasizing the contacts interconnecting adjacent chemical entities. The C=O4 carbonyl groups act as double acceptors in strong and highly directional N-H···O hydrogen bonds promoting the formation of a supramolecular double tape. For geometric details on the represented hydrogen bonds see Table 1. Symmetry transformations used to generate equivalent atoms have been omitted for clarity. Table 1.
Figures
Trimethyl 2,2',2''-[1,3,5-triazine-2,4,6-triyltris(azanediyl)]triacetate Refinement. Refinement of F 2 against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F 2 , conventional R-factors R are based on F, with F set to zero for negative F 2 . The threshold expression of F 2 > σ(F 2 ) is used only for calculating Rfactors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F 2 are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger. | 1,587.6 | 2010-11-20T00:00:00.000 | [
"Materials Science"
] |
MetaboLights: a resource evolving in response to the needs of its scientific community
Abstract MetaboLights is a database for metabolomics studies, their raw experimental data and associated metadata. The database is cross-species and cross-technique and it covers metabolite structures and their reference spectra as well as their biological roles and locations. MetaboLights is the recommended metabolomics repository for a number of leading journals and ELIXIR, the European infrastructure for life science information. In this article, we describe the significant updates that we have made over the last two years to the resource to respond to the increasing amount and diversity of data being submitted by the metabolomics community. We refreshed the website and most importantly, our submission process was completely overhauled to enable us to deliver a far more user-friendly submission process and to facilitate the growing demand for reproducibility and integration with other ‘omics. Metabolomics resources and data are available under the EMBL-EBI’s Terms of Use via the web at https://www.ebi.ac.uk/metabolights and under Apache 2.0 at Github (https://github.com/EBI-Metabolights/).
INTRODUCTION
Metabolomics is the systematic study of the small molecular metabolites in a cell, tissue, biofluid or cell culture media that are the tangible result of cellular processes or responses to an environmental stress. Collectively, these metabolites and their interactions within a biological system are known as the metabolome. Just as genomics is the study of DNA and genetic information within a cell, and transcriptomics is the study of RNA and differences in mRNA expression; metabolomics is the study of substrates and products of metabolism, which are influenced by both genetic and environmental factors. Metabolomics is a powerful approach because metabolites and their concentrations, unlike other 'omics measures, directly reflect the underlying biochemical activity and state of cells / tissues. Because of this, metabolomics best represents the molecular phenotype. Metabolomics technologies yield many insights into basic biological research in areas such as systems biology and metabolic modelling, pharmaceutical research, nutrition and toxicology.
Our challenge is to capture the growing amount, depth and diversity of metabolomic information and to make it easily available and interpretable to our users and integrated with the wider 'omics community. We describe the significant developments that we have made with a focus on how we are positioning MetaboLights (1) to address its increasing use in biosciences.
Growth of submissions
The MetaboLights repository has continued to see consistent year on year growth since its first release in 2012, with a particularly notable exponential growth in the last year ( Figure 1A). The user base is global with the USA, China and the UK being the top countries for study submissions ( Figure 1B). At the time of writing, the database hosts over 500 publicly available studies with a further 140 awaiting public release due to publication requirements, and ∼250 in preparation. MetaboLights supports publications in a range of journals with a significant proportion not only in specialist metabolomics journals but also in journals from publication groups including Nature, Cell and PLOS. The studies in preparation reflect the fact that the database is evolving into a resource where data can be deposited throughout the experimental progress of a study and in the future, we plan to also provide pre-processing and analysis capabilities.
New online guided submission and editor
The four stages in the MetaboLights submission and curation processes are: (i) Submitted, the user is adding all relevant raw data and information. (ii) In curation, the MetaboLights curation team is editing where required. (iii) In review, the study is ready to be shared with journals. (iv) Public, the study is publicly available. As part of our ongoing efforts to streamline and enhance the study submission and curation process, we have developed a new guided submission process to submit and edit studies online. This new submission tool (https://www.ebi.ac. uk/metabolights/editor/login) fully replaces our traditional desktop-based JAVA application (ISAcreator) (https://isatools.org/software-suite.html) while still using its functionalities relevant to metabolomics. The tool is also integrated with our resumable high-speed data transfer processes to conveniently enable large volumes of data transfer simultaneously while the user curates their study. The online submission tool is context-aware and guides submitters stepby-step through the process of describing the relevant experimental metadata, such as study characteristics, protocols, instrumentation and related factors. This enhances both the submitter experience and the accuracy and completeness of the study. To aid the submitter further, there are short video tutorials available at each stage of the submission process, see Figure 2.
Upon completion of the submission, the submitter is presented with a full study view to review and edit their study further should this be required. Going forward, we plan to extend this feature to facilitate ongoing studies/projects after the initial publication. Once the submitter is finished this initial stage, the MetaboLights curation team will start the final curation and quality control process and will interact with the submitter as appropriate to ensure high quality and comprehensive reporting of each study. Once this process is complete, the submitter will automatically receive instructions on how to safely allow direct read-only access to the study for the journals and their reviewers. The source code for the new submission tool (https://github.com/EBI-Metabolights/metabolights-editor) and the API ( https: //github.com/EBI-Metabolights/MtblsWS-Py) are publicly available on GitHub.
In addition to the online submission process, we are actively working with the metabolomics community to provide solutions for larger scale resumable submission pipelines. A successful example of this is the recent development of a bespoke submission pipeline for metabolon (https://www.metabolon.com). Metabolon submits raw and metadata to MetaboLights on behalf of their clients who then complete the submission process with the online editor. Facilitated by the new MetaboLights RESTful API (https: //www.ebi.ac.uk/metabolights/ws/api/spec.html#!/spec), we are now enabling programmatic submissions for Phenome centres and other large scale laboratories.
Curation and content development
A MetaboLights study's discoverability and reusability is enhanced by mapping the metadata to the most relevant ontology resources. During the development of the online tool, the curators took the opportunity to review the content of all the studies and their related publications submitted since 2012. This highlighted how the metabolomics field is evolving, in both the science and the technology. For example, while there are 3400 species represented in the MetaboLights database, there has been an increasing deposition of human and mouse data which now constitutes 50% of the studies, reflecting the use of metabolomics in clinical applications. The techniques are also diversifying from variations of liquid chromatography (LC) and gas chromatography (GC) based mass spectrometry (MS) and nuclear magnetic resonance (NMR) to more recent developments in Image based MS and magnetic resonance. This review enabled us to align the metadata with the most relevant ontology resources (see Table 1) As a result, submitters are provided with the closest matching ontology linked vocabulary options within the new online editor. Where suitable terms are not available, submitters can make suggestions which are then reviewed by the curators. If the terms fall within the remit of an existing ontology, they are submitted to that resource. For more specialist terms, the curators are working to develop branches in reliable existing ontologies to serve the metabolomics community requirements. One existing and very successful collaboration with the ChEBI ontology team (2) illustrates how this kind of collaboration can evolve. ChEBI provides a uniform reference resource for metabolite nomenclature, associated chemical details and cross-omic referencing. All metabolites reported as part of a MetaboLights study submission are manually curated into the ChEBI ontology. In this way, the MetaboLights team has contributed over 4000 new metabolite entries to ChEBI together with experimental validation of over 20 000 more existing ChEBI entries. Curation of metabolites is a challenging and resource intensive activity and so we are developing a new automated submission pipeline between MetaboLights and ChEBI. This process will use publicly available compound resources like ChEBI, PubChem (3), ChemSpider (https://www.chemspider.com), Cactus (https: //cactus.nci.nih.gove/chemical/structure), and OPSIN (4) to collate existing information including InChiKey, SMILES, InChi, IUPAC name, and synonym data and Classyfire (5) to assign ontological classes required for automatic inclusion in the ChEBI ontology. This will enable curators in both MetaboLights and ChEBI to focus on more expert curation and enable us to capture the complete data sets which benefits the community by enforcing the FAIR principles (https://www.force11.org/group/fairgroup/ fairprinciples). All of this work has been informed by our collaboration with our longstanding colleagues in the ISA team at the University of Oxford (6) and the wider ISA commons community to ensure metabolomics continues to evolve in the wider 'omics field.
Website redesign
Over the last two years, the MetaboLights team has focused on updating the website for an improved user experience and study discoverability. Usability testing (UX) facilitated a more focused and intuitive view of the entire website and the data in the repository, see Figure 3. Newer JavaScript frameworks also ensure compatibility with various, portable, smaller screen sizes for the new website. The development will continue as we expand the functionality and services of the resource.
Outreach and training
The MetaboLights team is dedicated to supporting the growth and development of the metabolomics field. A number of online resources are available through the EMBL-EBI online training portal (https://www.ebi.ac.uk/training/ online/topic/metabolomics) which are very popular with thousands of hits per quarter. They include introductory lectures for metabolomics as well as webinars on specialist topics given by some of the leading experts in the field. The team has hosted an annual course 'Introduction to Metabolomics Analysis' for the last two years which has been very successful and oversubscribed. It teaches data handling, analysis and interpretation for newcomers to the field. The course offers a combination of lectures and hands on experience delivered in collaboration with experts in metabolomics from international laboratories. We are active participants in outreach at relevant international conferences, e.g. we were involved in four workshops at Metabolomics 2019 at the Hague, Netherlands.
CONCLUSION
The MetaboLights team is committed to continually developing our resource and services to become a central hub for metabolomics related data and tools. This is a very exciting time in our field as more data is becoming available and the metadata is continually evolving. By redeveloping our submission processes, we have enabled our user community to make more data available as easily as possible. We will continue to work with the ISA commons community and we plan to develop a working area called Metabo-Lights Labs that will provide pre-processing and analysis capabilities to aid our submitters. We are also collaborating with our international colleagues, Metabolomics Work- bench in the USA (https://www.metabolomicsworkbench. org) and MetaboBank in Japan (soon to be released at http://www.ddbj.nig.ac.jp) to both exchange data and to provide it in a variety of user-friendly ways for re-use and interpretation by the scientific community. Through these collaborations, we hope to deliver a key service and contribute to the further development of the field globally. We greatly value feedback from our user community. Please send your feedback and suggestions to<EMAIL_ADDRESS>or through the contact form at https://www. ebi.ac.uk/metabolights/contact. | 2,539.6 | 2019-11-06T00:00:00.000 | [
"Biology",
"Chemistry",
"Environmental Science"
] |
Contact geometry during indentation of a sphere into an elastoplastic half-space
. The indentation of a sphere into an elastoplastic half-space is considered, which is accompanied by pile-up / sink-in effects, extrusion of material around a sphere (formation of a bulk) and elastic sinking of the material. The evolution of studies of the indicated phenomena is shown. Based on the similarity of the deformation characteristics, expressions are obtained for determining the indentation depth, the depth of the residual crater, and the contact depth depending on the degree of loading. In this case, the influence of the characteristics of the hardened material — the yield strength and the hardening exponent — were taken into account. The radial boundary of the bulk is determined from the volume of the displaced material. Expressions are obtained for describing the profile of a loaded and unloaded crater.
Introduction
The operational performance of the joints of machine parts, including tightness, are determined of the relative contact area and the density of the gaps at the junction of rough surfaces [1,2]. For determining contact characteristics at the junction of rough surfaces, various roughness models are widely used, in which microasperities (hereinafter, asperities) are represented as spherical segments with the same radius. Moreover, the asperities height distribution corresponds to the normal law [3] or the surface bearing curve [1,4]. It should be noted that in [2] the equation of the entire profile bearing curve is used, and not only its initial part as in [1,2,4]. For elastic contact of asperities, Hertz theory is used to determine contact characteristics. When the contact characteristics reach critical values, the stress state will begin to cause fluidity inside the body. In most cases, the contact of metal surfaces is elastoplastic. As indicated by the authors of [5], in the past many analytical, experimental, and numerical studies have been performed for modeling and predicting the properties of an elastoplastic contact, such as the contact radius, the average pressure, and the contact strength. However, because of their complexity, no closed solution was proposed for elastoplastic contacts. An exception is the work [6], which was not considered in [5]. From a review of numerical studies [5], it follows that for simplify the problem, most authors model the contact of a sphere and a half-space. Moreover, contact models can be divided into two main groups: indentation models and flattening models [7]. In flattening models, a plane is considered rigid and the sphere is deformed, while in indentation models, the plane is deformed, and the hemisphere is either rigid or elastic.
The presented results of numerical studies for a separate spherical asperity [5,8] relate the relative contact area and the relative applied force to the relative displacement in the contact. Given the known asperities height distribution, it is possible to determine the relative contact area of the junction of rough surfaces. Difficulties arise in determining the density (or volume) of gaps in the joint. The volume of gaps in the joint is determined by the total volume of gaps per one asperity, which is determined by the geometry of the contactthe indentation depth of the asperity h , the contact depth c h and the profile of the deformable surface. If the problem of determining the density of gaps is solved for an elastic contact of asperities, including under the mutual influence of asperities [2,9,10], then for an elastic-plastic contact this solution is associated with "sink-in / pile-up" effects ( Fig. 1). The aim of this work is to determine the contact geometry when introducing a rigid spherical asperity into an elastoplastic hardened half-space.
«Sink-in/pile-up» effects
Mayer was the first who described the behavior of the material in the elastoplastic region, which related the load P to the imprint diameter a d 2 by a power law [11], which is often represented as: where m is the Mayer index, A is a constant that has a stress dimension.
The expression on the left is the average pressure at the contact area, which is called Mayer hardness. (2) For the first time, attention was paid to the effects of "pile-up / sink-in" (Fig. 1) when measuring hardness according to Brinell Norbury and Samuel [12]. The ratio of h s where h h s c , they are associated with the ability of materials to harden. Given that for a wide range of materials, the true stress under uniaxial deformation is described by a power law where k is the coefficient, is the plastic deformation, n is the exponent of hardening, Matthews [13] proposed the following expression for including also only the exponent of hardening.
Fig. 2. The kinetic indentation diagram
Further development of the study of the effects of "pile-up / sink-in" was associated with the improvement of the method of measuring hardness and elastic modulus, developed in 1992 by Oliver and Farr [15,16] and adapted for characterization at the micro-and nanoscale. The method is based on the expression for contact stiffness obtained by Bulychev et al., who first proposed the kinetic indentation of materials (Fig. 2) in order to determine their mechanical properties.
According to [17], the contact stiffness of the initial part of the unloading curve where A is the contact projection area, E is the contact modulus of elasticity: In the proposed method [15], the hardness and elastic modulus were determined by the expressions: where 05 , 1 is the correction factor. As follows from expressions (13), special attention should be paid to the accuracy of determining the projection area of the print, since neglect of the pile-up effect leads to underestimation of the contact area to 60% [16]. Further refinement of the parameter 2 c occurred for any specific conditions [18,19,20]. In the indicated papers, the parameter 2 c depends only on n , and the influence of the material properties y and E on 2 c is not shown.
In a later paper [21], Taljat and Pharr focused on a more detailed study of the pile-up effect. Using finite element modeling, they studied the effect of elastic deformation depending on the y E ratio, the influence of the relative penetration depth R h h , the exponents of hardening, and the friction coefficient, but they did not present the results in a form convenient for engineering calculations.
In this regard, the papers [22,23] Kim et al. [24] indicated that the contact depth c h can be represented by the addition of where c h is the elastic contact depth, pile h is the depth due to the plastic bulk.
Elastic displacements can be calculated according to [15]: Earlier in [25], it was also suggested that elastic deformations and plastic "bulk" should be described by separate equations or functions, but this was not realized.
Based on the above data and taking into account the work [6] devoted to the interaction of a rigid sphere with an elastoplastic half-space, the following conceptual model for achieving this goal is proposed. The elastic-plastic contact occurs at will be less or more than 1. When unloaded, the surface outside the contact area will always be higher than the original surface. The depth of the residual crater from the initial surface is
Description of the elastic-plastic implantation of the sphere
To describe the indentation of the sphere, we use the kinetic indentation diagram (Fig. 2). The loading curve is described by the expression and the unloading curve where exponents Differentiating expression (13) from expression (6), we obtain an expression describing the process of elastoplastic penetration of a sphere [5] where 5 , 1 0 w w According to [15], Substituting expression (15) into (14) and denoting: we get Using the similarity of deformation characteristics [5], we have (18) and (19) in (17), we obtain the equation the free term, which is characterized by dimensionless quantities: the degree of loading K and the constants for a given material m , , K and the parameter takes into account the characteristics of the hardened material and is determined according to the data [26].
Having the solution k y of equation (20), we find from (16) and (15) Thus, all deep-seated characteristics of the indentation of the sphere into the elasticplastic half-space are determined.
Contact geometry description
Height of unloaded Bulk is According to [27], taking into account (19) and the fact that The elastic displacement r u c of the surface points outside the contact area is described by the expression [27] Then the profile of the noncontacting surface in the loaded state is described by the equation In order to verify the obtained expressions, we considered the profiles for different indentation values )according to [21] As follows from fig. 4, there is a fairly good qualitative and quantitative agreement of the results with the data of finite element modeling.
Conclusion
1. The indentation of a sphere for the entire range of elastic-plasticity is described by a single expression in a dimensionless form depending on the degree of loading K, taking into account the characteristics of the hardened material. 2. The authors hypothesis was confirmed [24,25] that "sink-in / hile-up" effects are separate processes and should be described by separate equations. | 2,161.6 | 2019-01-01T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Fluid-fluid Interaction Study of Different Density and Viscosity Using Smoothed Particle Hydrodynamics Method
In Computational Fluid Dynamics, fluid-fluid interaction is used to model the behaviour of two different fluids as a mixture. The numerical simulations of fluid-fluid interaction were carried out by using Smoothed Particle Hydrodynamics (SPH) method based on Navier-Stokes equation in FORTRAN programming language. Correct assumptions and other suitable conditions are needed to reach best fluid-fluid interaction simulation. The term of viscosity force equation was written to model the multiple-fluid interactions. This research uses two different fluids in density and viscosity within a three-dimensional cube container. In order to get a pure interaction of fluids, other effects and influences like chemical reaction and heat transfer are neglected. The fluids are miscible with each other. The fluid particle collisions to container are regarded as perfectly elastic collisions. The simulations of two different fluids in this research showed different particle movements in each numerical simulation, but similar to its actual behaviour.
Introduction
In Computational Fluid Dynamics (CFD), the simulations can occur between fluid-fluid or solid-fluid interactions. These two ways of interactions will give different results. Fluid-fluid interaction is commonly used to model the behaviour of two different fluids as a mixture. The correct assumption and specific conditions is still researched until today. The one common method used in CFD problems is Smoothed Particle Dynamics (SPH) Method which uses Lagrangian approach.
The SPH method first invented to simulate non-axisymmetric phenomena in astrophysics (Lucy 1977, Gingold & Monaghan 1977 [1]. The SPH method is a simple method and can provide good accuracy. The Lagrangian approach makes the movement of particles which can be modeled well without bounds of grid and particles can move freely. By using this method, modeling of fluid behavior becomes more realistic and applicable for studying fluids with multiple phases.
SPH modeling with varying fluid phases has a few differences where the formulations are affected by the conditions of each phase. Basically, the equations must still satisfy the equation of continuity and momentum equation. In this multi-phase fluid model, the properties of the fluid will not be declared globally, but will be stored in each particle. The characteristic of fluids also affects whether they are miscible or immiscible.
This research is emphasized to analyze the interaction between fluid-fluid. This research uses some simple numerical simulations in a three-dimensional cube container for simulation, with two different 1234567890''"" fluids in density and viscosity. In order to get a pure interaction of fluids, other effects and influences like chemical reaction and heat transfer are neglected. And also, the fluids are miscible with each other.
This study is a further research of SPH to develop the previous numerical program from single-phase fluid to multiple-phase fluids. The previous researches were taken by Marthanty [2] and Lydiana [3] which still used single-phase fluid in their researches. As the development, this research modifies the term of viscosity force equation from single-fluid to multiple-fluid equation used by Müller et al. [4,5] and Kelager [6]. This research was done to make qualitative validation and to study the influence of different density and viscosity to multi-phase fluid behaviour.
Smoothed Particle Hydrodynamics
SPH method was invented to simulate nonaxisymmetric phenomena in astrophysics (Lucy 1977, Gingold & Monaghan 1977) [1]. It is a simple method and can provide good accuracy. Furthermore, this method gives satisfactory results and can be applied to various physics problems.
This method is basically an interpolation method with kernel functions. The integral of any function A(r) is defined by: where r is particle position and W is kernel function (smoothing kernel) with width h (smoothing radius). In numerical form, the function can be written: where m is mass of each particle and ρ is density of each particle.
The SPH method can estimate the properties of fluid and derivation values of a continuous field based on discrete points called smoothed particles. The property of a fluid particle is computed by interpolating the property of surrounding fluid particles in a kernel radius. The derivation value of a fluid's property is calculated by deriving the kernel function W:
SPH Equations for Fluid-Fluid Interaction
The Navier-Stokes equation is used as the governing equation in this method. The Navier-Stokes equation for incompressible and isothermal fluid is: where ρ is density, u is velocity vector, p is pressure, μ is viscosity coefficient, and f is external forces. The total forces consists of internal and external forces, so the equation becomes: Then, the acceleration of particle i can be obtained from the equation:
1234567890''""
International The internal forces consist of forces due to pressure and viscosity effects, while the external forces consist of forces due to gravity, buoyancy, and surface tension [4,5,6]. The equations to compute density, pressure (Tait equation, [1]) and forces are as follows:
Smoothing Kernel
The stability, accuracy, and speed of SPH method are strongly influenced by selection of kernel functions or smoothing kernels. The kernel functions used in this research is based from Müller et al. [4] and Kelager [6]. The equations can be seen on the appendix.
Numerical Parameters
The numerical parameters shown in Table 1 were chosen for all simulations, except for density and viscosity of fluids that are shown in Table 2. Here, each scenario of numerical simulation will consider two types of fluid with different density or viscosity to observe the interactions of different fluid properties. The buoyancy (b) is set to zero because we don't model air particles. Coefficient of restitution (cr) is used to bounce back particles that are move through the boundary, and it is set to 1 to model perfectly elastic collision.
Geometry of Container
The simulation used a three-dimensional cube container as a computational domain with dimension of 0.1 m x 0.1 m x 0.1 m. The model of boundaries (walls and plate) used perfectly elastic collision to prevent all particles leave the container.
Numerical Simulations
In each simulation, two layers of fluid are placed as shown in Figure 1. The combinations of Fluid1-Fluid2 are used for studying the influence of different density of fluid in a container. Then, the combinations of Fluid1-Fluid3 are used for studying the influence of different viscosity of fluid in a container. The two fluids are assumed at rest and the positions of two fluids are interchangeable. The number of fluid particles in a container is 3179 particles with particle spacing equal to 6.25 x 10 -3 m (or 1/16 of dimension of the container). The density was observed by computing the average from all fluid particles. The detail of simulation scenarios is described on Table 3.
Pressure Correction
Some simulations were performed to check the stability of fluid particles, and it is found that the gravity force dominates over the other forces. The gravity force made all particles to move downward and the fluid became compressed. As a result, the density increased to 3.5 times its rest density. And the other forces had only small effects in the simulation, especially pressure force which should be more active to adjust the fluid density. So, the pressure correction is needed by considering the similitude law to adjust the pressure force. The pressure force was compared to the gravity force from the value obtained in simulation and the actual value. The pressure correction coefficient was derived: with ΔpA and ρA are pressure difference and density in actual condition, and ΔpB and ρB are pressure difference and density obtained from the simulation. The pressure correction coefficient makes the pressure force increases and more comparable to the gravity force.
Results
The pressure correction increased the pressure force to achieve the stability of the fluid particles. The pressure force became more dominant than other forces and this pressure force could adjust the particle density to density at rest. Figure 2 shows the average density during simulation time from 0 to 2 s for each simulation. There is some fluctuation happened in density from the start of simulation, because the numerical computation was adjusting the density until reach the stability of computation. After some computation steps, the fluctuation decreased and the density became stable. The difference of density in Figure 2 is proportional to average density in Table 3. Simulation 1A and 1B have higher density than Simulation 2A and 2B that have similar density.
Figure 2. Average density during simulation
Difference of density affects the gravity force which is proportional to the density. The higher gravity force on a particle makes the particle moves downward. As shown in Figure 3 for Simulation 1A, the higher density fluid (lighter color) is positioned above the lower density fluid (darker color) at initial condition of simulation. As the simulation run (t = 0.4s), the higher density particles move through the lower density particles with mixing between the particles. After some time steps (t = 1.2s), most of the higher density particles are on the lower layer and the particles become stable. In the boundary of two layers, some particles mix with the other type particles because of miscible model. In Simulation 1B, the lower density fluid is positioned above the higher density fluid but there is no such position change like the previous one. The different viscosity simulations did not have significant difference for the variables except the viscosity force. Figure 4 shows the average viscosity force for each simulation. Fluid 3 in Simulation 2A and 2B has 1000 times viscosity than Fluid 1, so the effect of viscosity difference is clearly visible in the graph. Higher viscosity forces make effects on particle accelerations and movements. As shown in Figure 5, the effect of higher viscosity makes the particle movements slower and the particles have different behaviour (more viscous) than the other one. The pressure correction factor makes the simulation could achieve the stable condition after some simulation steps. Table 4 shows the magnitude of the pressure correction factor used in each simulation and the simulation time when most of particles reach stable condition. From the results, we can simulate and model two different density or viscosity fluids but still in miscible model where chemical effect and heat transfer are neglected. All simulations in this research are executed on 1.67 GHz processor and simulate 3179 particles on 1000 time steps to reach simulation time t = 2s with average computer time consumption 12457 s.
Conclusion and Future Work
In this research, a numerical program was developed to simulate different density and viscosity of fluid by SPH method. Pressure correction coefficient was determined in this simulation to increase the pressure which makes the particle movements more stable. The results showed the particle movements and behaviour of fluid-fluid interaction similar to their actual behaviour. These results also showed that the qualitative validation had been achieved by applying the kernel functions and internal forces proposed by [4,5,6]. Further work that needs to be done is to improve the void area near the container found in Simulation 2 by using ghost particle method.
The assumptions used in this program are limited to miscible fluid; so in the future, the program will be improved for immiscible fluids with the effects of interface surface tension and more to develop fluid-solid interaction to simulate a sediment transport phenomenon. | 2,601.2 | 2018-11-29T00:00:00.000 | [
"Engineering",
"Physics"
] |
Playing with blocks: Toward re-usable deep learning models for side-channel profiled attacks
This paper introduces a deep learning modular network for side-channel analysis. Our deep learning approach features the capability to exchange part of it (modules) with others networks. We aim to introduce reusable trained modules into side-channel analysis instead of building architectures for each evaluation, reducing the body of work when conducting those. Our experiments demonstrate that our architecture feasibly assesses a side-channel evaluation suggesting that learning transferability is possible with the network we propose in this paper.
Introduction
In the side-channel analysis (SCA) field of research, deep learning models (DL models) are powerful tools to evaluate the implementation of secure algorithms.Unfortunately, despite the significant accomplishments by using deep learning models, many challenges remains.
When evaluating a secure implementation of an IoT device, for example, it is challenging to develop a deep learning classifier that feasibly assesses the resilience of the devices.Electronic noise as countermeasure and desynchronization are specific challenges during the evaluation.Indeed, a noisy signal intrinsically suggests dealing with high-dimensional signals.For instance, targeting a modern System-on-Chip with high clock frequencies requires increasing the sampling resolution; as consequence, the side-channel information required for the evaluation contains leakage traces with several irrelevant features (sample points).Then, noise filters and feature engineering as pre-processing steps are being reconsidered as tools to deal with those challenges [22,19,27,16,13,11].
This paper proposes a new technique to overcome those challenges and introduces a novel approach that uses a deep learning classifier whose part of its architecture allows to be re-used in other models that use the same approach.By featuring the exchangeable modules, we can re-use networks for different SCA evaluations, reducing the body of work of deriving models each time.The suggested architecture comprises coupled modules, and those modules have specific tasks to deal with the challenges of an SCA evaluation.We call this approach DL-SCA modular network.Precisely, an autoencoder and a convolution base classifier are the two modules that we suggest in this paper.
An autoencoder can effectively deal with the problem of high dimensionality and the problem of noise.An autoencoder comprises two parts by itself; the encoder and decoder.The encoder ends to an embedding where high dimensional leakage traces are transformed into lower dimensional version of them.Because of it, autoencoder are learning algorithms used in pre-processing steps i.e feature extraction [15,19,7].
The classifier module serves two objctives; (i) the classification required for the SCA evaluation and (ii) to regularize the autoencoder.As we explain in further sections of this paper, autoencoders might fail to compress the samples taken from the device under test; so penalizing it with a regularization might correct it toward better performance.
Our experiment uses datasets with desyncronization and countermeasures.After proving the effectiveness of the DL-SCA modular network, we perform a second set of experiments where we exchange the modules between modular networks.Our results show that transferability is feasible and applicable to sidechannel analysis.The contributions of this paper are as follow: -We introduce an approach called DL-SCA modular network and deep learning architecture featuring the exchange of modules through models.We provide the implementation details of the architecture, as well as the hyperparameter to take into account in the design to avoid pitfalls.-We present a training strategy based on sharing weight technique and early stopping policy for a seamlessly adoption of our approach in current SCA evaluations.
-We elaborate experiments that demonstrate the effectiveness of re-using modules through modular networks, using different "sharing" protocols based on non-trainable layers.
The rest of the paper is organized as follows: Sect. 2 details theoretical aspects of the topics used for this work.Related works are discussed in Sect.3. Sect. 4 provides information about datasets used in the experiments.Sect. 5 discuss the main contribution of this paper.Sect.6 and Sect.7 discuss the experiments.While, Sect.8 concludes the paper.
Profiled attack
A side-channel attack requires a leakage model to attack the sensitive information contained in a target device.A leakage model refers to a function (δ) that models the leak of sensitive information.Using a leakage model, an adversary can steal the secret key from a device that implements a cryptographic algorithm.The expression (1) is an example of leakage model used to attack a cryptographic implementation of AES 3 .In that leakage model, p is the publicly available data i.e. the plaintext and k * is the secret key.
The adversary measures the power consumption4 when the device inputs the AES algorithm with p random values from the keyspace K = {0, • • • 255}, drawing several leakage traces (a.k.a power traces).With enough leakage traces, the adversary can find a correlation between the power consumption and the inputs p of the leakage model; consequently, he can infer the key k * .
The previous paragraph just describes a traditional non-profiled side-channel attack over a crypto primitive.However, to understand how a profiled sidechannel attack works we have to explain its differences from a non-profiled attack.A profiled type of attack born with the idea of training classifiers to distinguish the outputs of a leakage model; so that the attack splits into two phases; (i) to train a classifier (profiling phase) and (ii) to perform the attack (attack phase).
The first phase comprises applying the corresponding leakage model to a clone device (a.k.a.profile device), collecting from it the leakage traces forming a set of profiling traces (X ) used to train the classifier.During the attack phase, a set of attacks traces from the actual device is collected and used for the trained classifier to compute probabilities, then a key recovery process takes place using an algorithm called guessing entropy that we will explain in brief.
Template attack and machine learning are two techniques to build a classifier to evaluate side-channel attacks [7,12].It is well-known that to come up with a classifier for SCA evaluation is not straightforward.Indeed, to reduce the body work of building classifier anytime an SCA evaluation is required is the motivation for the proposal of this paper.We propose a deep learning-based model whose architecture allows the model to exchange classifiers with other deep learning model aimed to conduct SCA evaluation over a different device; and in the following, we address the necessary aspects that support the theory about this approach.
Guessing entropy (GE)
GE is the average rank of the correct key byte value k * in a key guessing vector g, over all the set K of key candidates k [26].Formally denoted as GE = rank k * (g), where rank k (g) ∈ {0, . . ., |K| − 1}, and the key guessing vector is defined as: is the input vector of probabilities p i,j from a classifier (usually aimed for key recovery task) given a leakage trace t i .After applying the expectation E per multiples experiments of P r , the sort function orders the resultant vector g in decreasing order.The element g 0 ∈ g corresponds to the most likely key candidate, while g |K|−1 ∈ g is the less likely one.
Deep learning base profiled attacks
A deep learning classifier outputs a vector of probabilities fed into the guessing entropy (GE) metric to compute the rank of the key (k * ).We denote a deep learning model C θ δ for profiled attacks as a classifier C with a vector of parameters θ ∈ R n aimed to distinguish leakage traces labeled using a leakage model δ.
Having labeled leakage traces means that our learning approach is supervised learning [10] which represents one of the most feasible ways to leverage the learning of a deep learning classifier.
Despite several deep learning architectures, CNNs based models are the preferred architecture to use in profiled attacks.The convolutional part plays an essential role when leakage traces are desynchronized.The deep learning model we propose uses a specific type of convolution, called dilated convolution [18] for boosting the feature extraction capability of the layer (see sub-section 2.5).
Feature extraction
A feature extraction process applies a transformation (linear or non-linear) to a space of observations resulting in a new space mapped by the transformation.Formally, given profiling set X of N leakage traces and each trace comprises m features (or sample points).Feature extraction applies a function to the profiling set X mapping a new profiling set Y whose elements have fewer dimensions of the corresponding elements in X ; precisely, is an application such as : X −→ Y, and X ∈ R m , Y ∈ R n such that n < m.This transformation aims to derive new features (Y) to leverage the performance of a classifier, for instance.Theoretically, features in Y contain the "transformed" information that best represents the ground truth of X , in the SCA case, it is the leakage of the sensitive information.In simple words, the intensity of the valuable information gets emphasized while the irrelevant information (non-correlated information) has little to no influence in the new space.
However, it is not straightforward to come up with a transformation that indeed emphasized the side-channel information.A transformation that goes wrong discards a lot of useful information, and it happens when cannot keep the variance that distinguishes a leakage trace from another; as consequence, Y is made of several collapsed traces becoming useless for classification purposes.In section 5, we will discuss how our proposed method implements regularization to avoid transformations that collapse the Y space.
Function can be inferred directly from X .For instance, Principal Components Analysis (PCA) [9] or Linear Discriminant Analysis (LDA) [2] are two algorithms to build linear base functions for feature extraction.However, PCA and LDA are highly sensitive to desynchronization because of their "per feature" process, meaning they find a relation by correlating the same positioning feature through samples.So that, when the samples have a spatial disruption, the relation gets reduced, requiring more samples.
Autoencoders
An autoencoder is a learning algorithm useful to infer ; contrary to PCA and LDA, an autoencoder can infer a non-linear transformation due to the nonlinear activation functions in its architecture.Moreover, when the autoencoder architecture comprises convolution layers, it handles the spatial disruption better than PCA and LDA.
An autoencoder consists of two parts; (i) an encoder ϕ and (ii) a decoder ψ.Let us define a leakage trace t i ∈ X , and its dimension being denoted as dim(t i ) = m.The encoder outputs a new trace t i with dim(t i ) < dim(t i ) (see expression (2)).At the other side of the autoencoder, the decoder tries to reconstruct t i but it is able to re-build an approximation ti only; consequently, one can understand that an autoencoder learns by minimizing the difference between t i and ti (as we will see in expression 6).
From a functional perspective, the encoder maps X to an embedding space denoted by Z (i.e.ϕ : X → Z), the embedding Z is usually called latent space, code, latent code or hidden code.Further, Z is the space result of the transformation applied by the encoder.According to the discussion in the previous subsection, Z is the resultant space of a feature extraction process i.e.Y.Likewise, the decoder maps Z to X (i.e.ψ : Z → X ), where t ∈ X .The expressions (3) and (4) formalize these two mappings; Function σ denoted a non-linear activation function.An encoder is parameterized by a weight matrix W enc ∈ R m×n and a bias vector b ∈ R n ; likewise, a decoder is parameterized by a weight matrix W dec ∈ R n×m and a bias vector b ∈ R m (see Fig. 1).Training an autoencoder implies finding a vector of parameters θ = (W enc , W dec , b, b ) that minimize a loss function L such as; As we said, autoencoder learns by minimizing the difference between t i and ti ; so that, the Mean Square Error (MSE) is a loss function commonly used; Fig. 1.Typically autoencoders are symmetrical models, meaning that both parts of the encoder and decoder resemble each other.During training, the encoder trains to code the original signal to a Latent space, ideally this code from the features that better represent the characteristic of the original signal.From there, the decoder re-constructs as much as possible the original signal.
Convolution layer architecture Autoencoders are built using either fully connected layers or convolutional layers.The latter makes the autoencoder inherit the spatial invariant robustness property, which is useful when leakage traces are desynchronized; our autoencoder uses dilated convolution layers.
A convolution layer consists of kernels that essentially are matrices; then, to dilate a convolution layer consists of inserting zeros into its kernels, meaning to separate the matrices' elements using zeros, expanding their receptive field 5 .According to [18] a dilated kernel allows convolutions base classifiers to combine spread features that contain the leakage information, at the same time, avoiding irrelevant features that might lay in between.
Let us consider the expression in (7) showing a regular convolution where a leakage trace t i is multiplied by the kernel q whose length is denoted by l q .If we displace the leakage trace t i from right to left, a single feature t of t i is multiplied l q times.If l q is large, then t might be excessively used during the operation.According to [29], this excessive use of t may decrease the convolution effectiveness.Notice that if l q increases aiming to use further spread features, it also increases the times t is used.By using dilated convolutions, one can avoid this downside.
The expression in (8) shows a dilated kernel with one zero inserted between its elements.Notice that when the convolution is performed, the feature t alternates being multiplied or not by a zero; consequently, it reduces the times the operation uses the feature t.
The hyperparameter dilatation rate (dr) controls the number of zeros inserted.When a kernel is dilated its receptive field is modified by the relation; In this way, the receptive field increases by modifying either the length of the kernel or the dilatation rate, letting the user regularize the convolution operation.
Related work
While few works in SCA discuss an approach of architecture transferability with reusable modules, several works have discussed feature reduction for SCA.Cagli et al. in [6,5,4] discussed application of traditional feature reduction methods using PCA [9], LDA [2], and its kernel base variant Kernel PCA and KDA.Picek et al. [21] published results using same methods as [5].However, authors in [21] used an approach that combined feature extraction and feature selection; precisely, PCA and LDA combined with SOST and SOSD, they called it hybrid feature selection methods.
Intrinsically, any work that uses the same feature reduction techniques aims to downsample the signal by taking it to a new space (latent space).However, these approaches consider only linear base feature reduction disregarding the more powerful non-linear version of it; it is likely, that this situation may be a consequence of advertising CNNs as built-in feature extraction deep learning models.Hence, very few works have addressed non-linear methods for SCA evaluation.One of those few works are, for instance, Paguada et al. [19], and Yang et al. [28]; similar to us, those works used autoencoders toward inferring a nonlinear function to pre-process leakage traces in a fashion that overcome linear methods.
While those two works are the closest one we can relate with, to our best knowledge, there is no previous work on side-channel analysis that suggest a deep learning approach based on modules; featuring to share modules between models.
The dataset has two versions, traces collected with fixed key encryption k f and traces collected with random key encryption k r (plaintext is always random), while the target byte of the secret key in both cases is the third one.We named these versions as ASCAD f and ASCAD r respectively.Due to these key characteristics, ASCAD r is more challenging and more realistic than ASCAD f when conducting an SCA evaluation over them.TABLE 1 contains a summary of main characteristics of these two datasets.
Profiling traces 50 000 Profiling traces 200 000 Attack traces 10 000 Attack traces 100 000 dim(ti) 700 dim(ti) 1 400 Table 1.Cardinalities of the ASCAD datasets.Since their goal is to be used for benchmarking profiled attacks the leakage traces are grouped in profiling traces and the attack traces sets.
Leakage traces in each version are desynchronized according to a threshold value that moves traces around the x-axis, being frequently used threshold values of 0, 50, and 100.Then, to make clear distinctions when exchanging the modules between modular networks, we add to the name the threshold value, for instance, ASCAD r desync50.
DL-SCA modular network architecture
This section explains the details about the architecture of the DL-SCA modular network; further, we describe the strategy to train it.
Since we are using autoencoders; then, our suggested DL-SCA modular network comprises three main modules; an encoder, a decoder, and a classifier (see Fig. 2).Particularly, we will group the encoder and decoder into a single module called a downsampler.The downsampler has two goals; (i) to extract meaningful features by reducing the noise in the leakage traces and (ii) to downsample them.Now, the classifier is in charge of evaluating those extracted features as a classification problem.
It is worth mentioning that once the DL-SCA modular network is trained, we discard the decoder of the downsampler, and we only use the encoder and classifier to perform the SCA evaluation.Due to this, we elaborate a training strategy to monitor only those two parts of the model; we will elaborate this late in this section.
The goal of both modules might be apparent; however, the downsampler has an implicit objective.To achieve compatibility with as many classifiers as possible, we should use a downsampler to fix the classifier input.Precisely, we downsample the leakage traces to a fixed length; then, when we re-use the classifier with another downsampler, this latter fixes its output to match the classifier's input.By doing this, we fulfill the first step of re-usability.We demonstrate this in the experimental section of this paper.
Training a DL-SCA modular network architecture requires a loss function for the decoder and another for the classifier.The decoder's loss function (L M SE ) was discussed in the sub-section 2.5.We introduce the classifier loss function.
Classifier loss function
As we said, a classifier outputs a vector of probabilities used as input for the guessing entropy.So, for the classifier to output this vector, it must be trained using a cross-entropy (CE) loss function.In supervised profiled side-channel attacks the leakage traces are labeled by the output of a leakage model (see expression (1)).Further, the classifier learns by minimizing its error in predicting the label of each trace.
To explain this better, let us consider the expression (10).The space K corresponds to a batch of key candidates or labels, each one of the labels in K represents a trace.For instance, let us take δ i ∈ K as one of those labels, we say that δ i is the ground truth while σ(δ i ) is the output score a neural network computed 7 .
7 Often σ is the softmax activation function for multi-class classification During training, this loss function computes the error in the prediction made by the classifier; consequently, the weights of the classifier are updated toward achieving a prediction with highest accuracy possible.
Clearly, we use a classifier with the same purpose as in a common profiled side-channel evaluation.However, the feature in using a classifier in our approach is to add a regularization term to the downsampler.Precisely, the supervised classifier adds an extra penalization to the downsampler with regard to the featuring space Y the downsampler is building up; leading the whole network toward better performance.It is impossible to feature this with a self-supervised downsampler trained separately as authors did in [19].
The arrangement depicted in Fig. 2 suggests that both the classifier and decoder attach to the encoder.Consequently, when training the modular network, the classifier feed-forwards the downsampled traces from the embedding and back-forwards its loss.Meanwhile, the decoder trains its reconstruction capability that additionally penalizes the encoder.These two losses resemble a double voting system that the encoder uses to leverage learning.Now, notice that because the activation functions are non-linear, the classifier acts as a non-linear regularizer for the embedding space.Consequently, the decoder takes the regularization effect as small perturbations in that space; those perturbations challenge the decoder in reconstructing the original traces as it understands that those are small errors in its reconstruction.Contrary to the approach in [19], training jointly the autoencoder and the classifier produces an embedding likely to learn positive features.Due to the regularization factor, less correlated features are emphasized over highly correlated noisy features.
Analogy with linear regularized autoencoder
Autoencoders aim to be imperfect models; so, when training an autoencoder we must avoid an architecture that ends with a model called "identity function".When this phenomenon happens, the autoencoder will just copy the data from the input to the output.One way to avoid this is by using a undercomplete architecture, which refers to the embedding we discussed early; further, the deepest an autoencoder is the stronger becomes to avoid ending as an identity function.However, doing this carelessly might reduce the network performance as the model becomes supra-complex, so that, we cannot rely on it repeatedly.
Applying a regularizer to the latent space is another alternative.Regularized autoencoder proved overcoming normal autoencoders when leveraging meaningful features in the embedding.A linear regularizer applies to the latent space an extra penalization.The embedding neurons fire the additional penalization to the decoder added to its loss function as small epsilons of error.This latter advocates the disruption by training its neurons to reconstruct the original data; ignoring that it is being fooled by the regularizer, so its learning is actually "imperfect" [10].
A drawback of using linear regularizer is precisely its nature.A linear regularizer applies the penalization linearly to all the embedding neurons, there is no a criterion that controls the magnitude each neuron should receive based on its contribution to the loss function; as eventually, the decoder starts copying the input as it is.
In a DL-SCA modular network, the classifier acts as a regularizer; nonetheless, the regularization is based on non-linearity since the classifier is a non-linear function.The non-linear activation functions used in the classifier receive their input from the embedding neurons; once the classifier does the back-propagation it applies an epsilon value according to their contribution to the classification.Once again the decoder interprets those as small errors, but now facing a more advance regularization.
Both linear and non-linear regularizers require a value to control the intensity of the penalization.For our non-linear regularizer, this value is a parameter γ ∈ ]0, 1] ⊂ R multiplied by the loss function's result.
DL-SCA modular network loss function
Now that we know the two losses required by our architecture as well as the hyperparameter to control their intensity, we have the expression (11) that defines the loss function for a DL-SCA modular network architecture.
Notice that there is an ω parameter for L MSE that works exactly as γ.We fix ω = 1, because our goal is to control the regularization and not the reconstruction.
Training strategy for a DL-SCA modular network
Recently, authors from [20] published an early stopping framework to monitor the state of a deep learning model during its training preventing it from getting overfit/underfit.Overfitting/underfitting is a phenomenon that might happen during training; it represents the state when a deep learning network cannot generalize beyond its training set.
The framework computes the guessing entropy at the end of each epoch basing the stopping criterion on the whole guessing entropy vector, considering when the guessing entropy converges, and how many traces keep the guessing entropy in the state of convergence, proving to overcome existing frameworks (more details can be found in the original paper [20]).We use this early stopping to elaborate a training strategy for our DL-SCA modular network.
Training strategy We know that an early stopping framework stops the training of a deep learning model when it meets conditions established using a metric, e.g., the accuracy of the model.Typically, these frameworks evaluate the entire model.In contrast, we need the framework to consider just the encoder and the classifier as they are the parts used in the SCA evaluation.The framework from [20] is a "typical" framework, so it monitors the whole deep learning model.
We modified the suggested framework to receive a truncated model which comprises just the modules of interest (encoder and classifier).To apply this modification is effortless when using the weight sharing technique [30].In this technique two or more neural networks share the references to some specific layers, all of those networks can update the weights of those layers; however, in our particular case the original networks updates the weights, while the truncate model monitors the state (of those weights) of the encoder and classifier.
Precisely, we set a truncated model (see Fig. 3) to reference those modules and to be only evaluated (not trained) by the early stopping framework.Then, it stops training when the encoder generates features that makes the classifier to achieve an expected performance, in this case an expected guessing entropy convergence.The modification works since the framework uses the truncated model as the predictor, and its output serves as the input to compute the guessing entropy.
In the first experimental results section of this paper, we show the training strategy outcome using surface plots.Notice that we did not stop the network training, so the surface plots correspond to the entire training process, our goal is to show that an SCA-DL modular network does not rely on an early stopping framework.
Experimental results of training modules
In this section, we discuss the results of using our proposed approach over AS-CAD datasets -ASCAD f all desync and ASCAD r all desync.
We organize the experimental results as two different use cases where a DL-SCA modular network analyzes; (i) ASCAD fixed key dataset and (ii) ASCAD random key dataset.We accomplish two goals with these uses cases; (i) to show the feasibility when an SCA evaluation uses our architecture to attack an specific dataset, and (ii) to create a scenario where we demonstrate the feasibility of sharing modules.The strategy is applicable to real evaluations; the derived modular network evaluates a first dataset; consequently, a second modular network could evaluate another dataset borrowing a module from a previous modular network.
In our particular case, our experiments use two datasets that share the same source of data; precisely, both datasets were composed with leakage traces from the same microcontroller (Atmega8515 8-bit).We aim for performing experiments when the source of data is uncommon between both datasets as future works.
Notice, we used the same model for all levels of desynchronization, meaning that additional effort in finding neural network architectures for specific noisy scenarios is not required.
ASCAD f all desync use case
The TABLE 6.1 summarizes the hyperparameters of the modular network architecture to evaluate ASCAD f all desync.
Network's architecture We set the architecture by following the discussion in Sect.2.5; the first convolutional block uses dilated convolutions to avoid any useless features that might reduce the model's performance.We dilate the convolutions at the first convolutional block because it is where we deal with the original version of the trace.Further, we add convolutional blocks to the encoder following the rules applied for VGG [25] base deep learning architectures 8 .
The decoder mirrors the encoder, as our downsampler uses symmetric autoencoders.For the decoder to up-sample, namely to reconstruct the actual length of the trace, it uses transpose convolutions.As known, matrix multiplication is not commutative, and we cannot achieve the same output in respective convolutional blocks.Consequently, we have to tune the hyperparameters in the decoder's convolution layers.For instance, let us take the third encoder's convolutional block that uses a stride value of 5, its corresponding decoder's transpose convolutional block is the first one but it uses stride value of 7. By doing this, we fix the output of the decoder to meet the original trace dimension.
Latent space hyperparameters With regard to latent space units and γ value.We perform a grid search for the best number of units in the latent space, using the values 100, 200, 300, 400, and 560.Further, we know that the parameter γ relates strictly with the number of latent units; consequently, to find the value of γ we create combinations using the latent space values and values of γ as 2. DL-SCA modular network architecture to use in experiments with ASCAD f all desynchronizations levels {1e −3 , 1e −6 , 1e −9 }.It turned out that the best combinations was 300, and 1e −3 for latent space units and the γ parameter, respectively.
Regarding the classifier module, bear in mind that we are only interested in its classification performance and not too much in its ability to filter out unnecessary features of the leakage traces, so we use a shallow architecture since it will deal with already filtered features.
Training strategy and results
As we said, to train a modular network, we use the early stopping framework from [20].To show that our suggested architecture does not rely on the framework, we did not stop the training after the mentioned framework finds the best learning state.Further, we will use this outcome in the next section to discuss the result of the reusing modules experiment.Fig. 4 depicts the training process when our modular network evaluates ASCAD f datasets.As we expected, the training outcome differs according to the level of desynchronization; regardless, our modular network achieved a zero convergent guessing entropy for all desynchronization levels.A view of the attack performance is depicted in Fig. 5.
ASCAD r all desync use case
Network's architecture Regarding this dataset, our strategy was to keep the same modular network as the previous use case to reuse as much as possible an already worked model and see how it performs.After experimenting, we noticed that the downsampler module required an additional convolutional block -identical to the third convolutional block of the decoder-without pooling layer.Consequently, the decoder should also have the corresponding transpose convolutional block.
Latent space hyperparameters, training strategy, and results
We keep the same classifier as in the previous use case because we have the same number of latent units.In our particular case, to keep the same latent units is convenient because we aim for exchanging a trained classifier in the following experiments to evaluate the modular re-usability.Fig. 7 depicts the training process of guessing entropy by epochs for ASCAD r dataset.In this case, we observe that the performance of our modular network slightly decreases, which is expected since the dataset has a higher level of noise than the previous.Even though, we achieve good guessing entropy convergence as depicts Fig. 6.Finally, we compare our experimental results with previously reported results over the same datasets.TABLE 3 gathers this information.
Module re-usability experimental results
This section presents the results of module re-usability.We show that another non-trained DL-SCA modular network can reuse the modules of a DL-SCA modular network.We use the DL-SCA-based module networks trained in the previous section to prove it.
Analyzing transferability
We aim to show how "transferable" is the knowledge of a classifier module.We have six modular network -meaning six classifiers-trained with three different datasets -three on ASCAD f and three on ASCAD r .Further, due to the number of latent units (300) we used, all classifiers are interchangeable without performing additional downsampling operations to fix their inputs.For our experiments, we took the classifier from the DL-SCA modular network of ASCAD f desync50 to share with all the downsampler from ASCAD r .We considered it sufficient for proving our claim about "module re-usability".We chose ASCAD f −→ ASCAD r direction because it represents the complex direction -from fixed key to random key.We inspect the transferability of the ASCAD f desync50 classifier by conducting a similarity analysis using gradient activation operations.In particular, we use heatmaps and gradient visualization to compare how the neurons' of the classifier are activated by the data outputted from the downsamplers.
Original
We perform this analysis by locking specific layers of the classifier to identify how transferable those layers are.Precisely, we choose convolutional block (Conv) layers and fully connected block (FC) layers and lock them by turns to evaluate them separately.A heatmap allows us to inspect the convolutional layers of the classifier, while gradient visualization helps us analyze how both Conv and FC perform with the different datasets.TABLE 4 summarizes the similarity analysis we are going to perform using the classifier ASCAD f desync50, the ASCAD r datasets, and the gradient activation operations.Fig. 8 depicts the first convolutional layer heatmaps from ASCAD f desync50 classifier and ASCAD r all desync classifiers (desync0, desync50, and desync100).
For these particular experiments, all ASCAD r classifiers share similarities with the ASCAD f desync50 classifier in how their convolutional layer neurons' get stimulated.According to our assumptions, it indicates that the weights of those layers might be transferable.This claim is experimentally demonstrated later in the final experiments.
Although the magnitude of the ASCAD f desync50 classifier's heatmap is higher than any other heatmap from ASCAD r classifiers, it does not represent a drawback to the transferability.We could have gotten the same magnitudes if we had normalized the weights applying constraints in the architecture, though a similarity analysis does not need to do this.
We use gradient visualization to inspect the classifiers' fully connected block (FC).The output of that operation indicates which input features are the most meaningful for the classification.The gradient visualization uses the loss function of a trained classifier to conduct backpropagation, collecting the information about those neurons that emphasize the performance.Further, when it reaches the input layer, it points out which features are connected to those neurons, indicating the meaningful features [14,24,1].Fig. 9 depicts the result of gradient visualization operation.
Notice that gradient visualization shows less intuition than heatmap.As a workaround, we apply a Dynamic Time Warping (DTW) [17] to visualize the similarities between gradient visualization signals.
According to this experiment, two phenomena happen; (i) the meaningful features are displaced according to each classifier, or/and (ii) the meaningful features are less intense in magnitude.These phenomena could represent an issue.For instance, let us take the ASCAD r desync0 classifier, notice the displacement because the ASCAD f desync50 interprets that the meaningful features localize differently.Further, those features have an even lower magnitude in contrast to those supposedly being the lowest(see points from 0 to 30 in Fig. 9
top plot).
This analysis gives us the intuition that we will need to retrain the classifier; nevertheless, the reader might remember that the classifier is just a part of a bigger model.The downsampler will leverage its learning according to the limitation imposed by the classifier.
Playing with blocks
Let us suppose we have trained a DL-SCA modular network using a former dataset; then, we have the opportunity to evaluate another dataset.We could use the classifier module of the first network to evaluate it.In this hypothetical scenario, the first dataset is played by ASCAD f and the second one by the ASCAD r dataset.
To experimentally evaluate if we need to re-train some or all parts of the classifier, we perform experiments locking the blocks of the classifier to restrict them from getting trained.In the previous sub-section, we inspected the blocks of the classifier (Conv and FC), and we observed some similarities in its neurons' weights.Now, we are going to evaluate the performance of the whole modular network when its classifier module has the following locks: -Convolutional block -Fully-connected block -Both blocks We will refer to these as "sharing protocols".We find out which could be the best sharing protocol for these particular modular networks by locking the blocks.TABLE 5 summarizes the combination of locks and dataset where the shared classifier will be used.We previously said that the chosen classifier ASCAD f desync50 will tackle a more complex dataset -the ASCAD r desync100.Now, by evaluating the ASCAD r desync0 dataset; then, we will cover the scenario where the shared classifier comes from a more complex dataset.Still, bear in mind that it is in terms of desynchronization because it does not come from a complex dataset in terms of its secret key's nature -from random to fixed key, for example.So, we rate the "experience" of the classifier as medium level of experience.
Due to space constraints, we did not perform an inter-classifier sharing and a no-block lock sharing protocol; furthermore, we claim that the sharings addressed in our experiments represent the difficult one, being enough to prove our contribution.However, we let those experiments and further combinations of sharing protocol for future works.Fig. 10 depicts the training process of all chosen sharing protocols.It is worthy of mentioning that we did not change the loss intensity parameter (γ), reducing the effort in tuning the modular network.
For this experiments, we trained the modular networks using the early stopping framework from [20].Contrary we did in the previous section, we do stop the training when the policy finds out the best learning state.We can now know the number of epochs required to achieve good performance.
Generally, all sharing protocols perform well if we contrast the training process of Fig. 10 and Fig. 5. Nevertheless, the sharing protocols that worked best are the fully-connected block, both blocks, and the convolutional block lock.
Observe that for the fully-connected block lock, ASCAD r desync0 has a convergent guessing entropy after 9 epochs, ASCAD r desync50 at 65 epochs, and ASCAD r desync100 took the whole training process (100 epochs); even thought, it achieves good performance.Both blocks lock cases seem to require more epoch or the convergence is roughly achieved, ASCAD r desync0 and ASCAD r desync50, for instance.Finally, we notice that the convolutional block lock converges after several more epochs than the previous locks.In this case, ASCAD r desync100 did not converge within 1 000 leakage traces.We summarize in Fig. 11 the best guessing entropy from all combination of locks.
Discussion
Using a shared classifier instead of a non-trained modular network, we have reduced the training time and the effort in tuning hyperparameters while evaluating the leakage of a dataset with good results.Since we locked some blocks and the whole classifier, we reduced the number of neurons to train; consequently, the training time is reduced since the number of operations per neuron is less than a non-trained modular network.As we do not have to tune the hyperparameter of a classifier, then we do not spend time in it.Further, we are confident that the classifier has a high probability of working since it already has previous "experience".We demonstrated the latter by actually achieving good results.
Clearly, some initial effort has to be made.For instance, we were tuning the latent space and losses intensity hyperparameters.Coming up with an initial deep learning modular network could be challenging, but it is an equivalent effort in finding several small deep learning models for different datasets.Finally, bear in mind that by saying that a classifier has previous experience, we do not claim that it will work flawlessly.As we said, the experience of a shared classifier represents a neurons' weights initializer.So instead of randomly initializing the weights using any well-known function -he uniform, for instance-; we start from a state leveraged by a previous worked learning.We have proved, experimentally, that it has good results.
Conclusion
We introduced the DL-SCA modular network approach to conducting SCA evaluation reusing modules from previously trained modular networks.A DL-SCA modular network consists of two main modules; a downsampler and a classifier.We demonstrate that modules from a modular network can be detached and attached to other modular networks and conduct an efficient SCA evaluation.The strategy is to use a classifier with good performance and reuse it to conduct another evaluation in a different dataset.
Our experiments demonstrate that it is not mandatory to re-train a classifier module to effectively evaluate the aimed dataset, regardless of whether the source classifier has been trained with a dataset with a lower noise level.We systematically lock the layers of the classifier to restrict them from getting trained, replicating different sharing protocols to evaluate the effectiveness of our approach.
As we said in the paper, we aim to work with more sharing protocols and improve the performance of our modular network in future works by using other types of deep learning architecture for the downsampler.Furthermore, we look for applying methodologies that might help tun the hyperparameters of a modular network.
Fig. 2 .
Fig. 2. DL-SCA modular network architecture illustration.The encoder, with its embedding layer and the classifier ensemble the final model used to perform the attack.
Fig. 3 .
Fig. 3.The truncated model shares weights with the DL-SCA modular network; while the latter is training, the former updates his weights.The early stopping framework uses the truncated model to compute the guessing entropy at the end of each epoch, and it stops the training when it meets the conditions.
Fig. 4 .
Fig. 4. The training process of the modular network for ASCAD f datasets.The surface represents the values of the guessing entropy during a chosen number of epochs.Stopping condition success GE=0.
Fig. 8 .
Fig.8.Comparison between heatmaps of the ASCAD f desync50 classifier and classifiers from all the ASCAD r datasets.Notice how ASCAD f desync50 heatmap resembles all other heatmaps.It indicates that ASCAD f desync50 classifier's convolutional layer fires its neurons according the data received.
Fig. 9 .
Fig. 9. Comparison between gradient activation per sample of the ASCAD f desync50 classifier and classifiers from all the ASCAD r all desync.
Fig. 10 .
Fig. 10.The training results of the knowledge transferability experiments.Through the columns lies the levels of desynctronization [0, 50, 100]; while through the rows, lies the different block lock cases -ConvLock, BothLock, and FCLock.
Table 4 .
Summary of the similarity analysis between ASCAD f desync50 classifier and ASCAD r all desync classifiers
Table 5 .
Combination of sharing protocols used for the ASCAD f desync50 classifier | 9,735.6 | 2022-03-16T00:00:00.000 | [
"Computer Science"
] |
A Semantic Transformation Methodology for the Secondary Use of Observational Healthcare Data in Postmarketing Safety Studies
Background: Utilization of the available observational healthcare datasets is key to complement and strengthen the postmarketing safety studies. Use of common data models (CDM) is the predominant approach in order to enable large scale systematic analyses on disparate data models and vocabularies. Current CDM transformation practices depend on proprietarily developed Extract—Transform—Load (ETL) procedures, which require knowledge both on the semantics and technical characteristics of the source datasets and target CDM. Purpose: In this study, our aim is to develop a modular but coordinated transformation approach in order to separate semantic and technical steps of transformation processes, which do not have a strict separation in traditional ETL approaches. Such an approach would discretize the operations to extract data from source electronic health record systems, alignment of the source, and target models on the semantic level and the operations to populate target common data repositories. Approach: In order to separate the activities that are required to transform heterogeneous data sources to a target CDM, we introduce a semantic transformation approach composed of three steps: (1) transformation of source datasets to Resource Description Framework (RDF) format, (2) application of semantic conversion rules to get the data as instances of ontological model of the target CDM, and (3) population of repositories, which comply with the specifications of the CDM, by processing the RDF instances from step 2. The proposed approach has been implemented on real healthcare settings where Observational Medical Outcomes Partnership (OMOP) CDM has been chosen as the common data model and a comprehensive comparative analysis between the native and transformed data has been conducted. Results: Health records of ~1 million patients have been successfully transformed to an OMOP CDM based database from the source database. Descriptive statistics obtained from the source and target databases present analogous and consistent results. Discussion and Conclusion: Our method goes beyond the traditional ETL approaches by being more declarative and rigorous. Declarative because the use of RDF based mapping rules makes each mapping more transparent and understandable to humans while retaining logic-based computability. Rigorous because the mappings would be based on computer readable semantics which are amenable to validation through logic-based inference methods.
INTRODUCTION
It is a well-accepted fact that drugs may still have serious side effects (Nebeker et al., 2004), even after they are marketed. Since the scope and duration of clinical trials are limited, postmarketing drug surveillance has been a necessity in order to capture Adverse Drug Events (ADEs). Pharmacovigilance is the science focusing on the detection, assessment, and prevention of the ADEs and any other drug-related problems (World Health Organization, 2002). Historically, drug safety surveillance research in pharmacovigilance has depended on the mandatory reports produced by randomized trials of the industry and the case reports that are voluntarily submitted to the regulatory authorities. As a promising alternative, there is a growing interest in the secondary use of observational healthcare datasets for postmarketing surveillance.
Electronic Health Records (EHR) available as healthcare datasets cover extended parts of the patient medical history and include more complete information about the risk factors compared to spontaneous case reports. Despite their drawbacks such as potential bias (Moses, 1995;Kunz and Oxman, 1998), this broad range of clinical information could be highly beneficial for surveillance studies to complement and strengthen the existing postmarketing safety studies (Suling and Pigeot, 2012;Coorevits et al., 2013). In order to increase the effectiveness, studies should be extendible and make use of the data from various sources. However, healthcare datasets are generally stored in different heterogeneous information models by organizing the data in different formats and making use of local, diverse vocabularies, and terminology systems. In order to utilize data from heterogeneous systems, either the analysis should be tailored for each data model and the underlying terminologies, or transformation to a common data model (CDM) should be performed (Overhage et al., 2012). Without a CDM, researchers need to develop custom analytical methods that can run on each of these heterogeneous data models, which is costly. In addition to this, it presents significant limitations since detailed knowledge about the disparate data models and underlying vocabularies would be required (Bright and Nelson, 2002). Another drawback of dealing with heterogeneous data sources without a common model is the challenge of reproducibility of results across sites. A CDM not only facilitates the understanding of the analysts, uniform implementation, and reproducibility of results, but also enables large scale systematic analysis over disparate sources by producing comparable results through standard analysis routines (Reisinger et al., 2010). Such CDM based interoperability approach is also studied in the context of data exchange between EHR systems in order to improve interoperability between EHR providers.
The importance of interoperability between EHR systems is obvious. There exists numerous standards such as HL7/ASTM CCD and IHE PCC templates, FHIR Specification 1 , HITSP C32/C83 23 components, ISO/CEN 13606 Reference Model 4 with the aim of providing common representation format for observational EHR data. Such standards foster automated exchange of medical records between EHR systems, and provide a consistent way to represent and update medical records.
FHIR particularly attaches importance to defining set of data formats and resources which comprise the building blocks of majority of interoperability scenarios. This way medical information available in heterogeneous models can be exchanged in structured way on various levels. Inline with its objectives, FHIR is actively developing a framework with the purpose of executing Clinical Query Language 5 (CQL) over EHR datasets (Jiang et al., 2017). EHR data available in clinical data repositories is manually mapped into common FHIR RDF model, then queries represented in CQL are automatically translated into SPARQL queries to be executed on FHIR RDF graph. However, it is important to note that these models are developed for patient care and are not suitable for research studies (Liao et al., 2015). Initiatives like Observational Medical Outcomes Partnership 6 develop and publish CDMs in order to enable large scale systematic analysis over observational EHR datasets.
Transformation of observational EHR datasets into CDM narrows the gap between clinical care and research domains as it allows utilization of existing medical summaries for postmarketing safety analysis. Carrying out clinical research studies over existing observational data converted into a CDM has potential advantages over randomized clinical trials such as better generalizability due to broader population, lower cost, and longer time span (Benson and Hartz, 2000); and over case reports suffering from differential reporting, underreporting, and uneven quality problems (Piazza-Hepp and Kennedy, 1995;Brewer and Colditz, 1999;Wysowski and Swartz, 2005;Furberg et al., 2006).
In addition, utilization of EHR datasets for postmarketing safety analysis can significantly reduce the time needed to assemble patient cohorts and thus analysis time compared with the years needed to recruit patients and collect case reports (Liao et al., 2015).
It is important to transform the source EHR data into to the CDM easily and accurately as the outcome of the clinical studies heavily depends on the quality of the transformation. ETL is the widespread transformation method where the source and target repositories have different representation formats. An ETL process is composed of three main steps that are dedicated respectively to (i) extract the data from the source repository, (ii) transform the data to the target representation format, and (iii) load the transformed data into the target repository. In order to set up an ETL operation, however, one has to understand the physical data models of both repositories at a detailed level in addition to the domain knowledge required to map the source model to the target model. In accordance with this, it is a wellknown fact that the transformation process requires a significant amount of expertise and time considering the complexity of existing information models in both clinical care and research domains (Reisinger et al., 2010;Overhage et al., 2012;Zhou et al., 2013;Matcho et al., 2014). The level expertise required to extract, transform, and manage the EHR data is known to be among the biggest obstacles to utilize EHR datasets in clinical research (Liao et al., 2015).
Existing work on semantic interoperability between EHR systems mostly rely on ETL operations to transform individual EHR datasets into the common representation format. For instance, Santos et al. (2010) proposes a logical EHR model based on ISO/CEN 13606 Reference Model for an integrated EHR service. Proposed interoperability scenario maintains a centralized EHR warehouse utilizing ISO/CEN 13606 and requires individual EHR systems to implement mechanisms for data exchange with the centralized repository. Each EHR provider needs to setup an ETL operation in such a scenario.
Poor documentation, high number of data sources, and everevolving nature of data models make designing ETL processes harder; which even made researchers to utilize conceptual modeling frameworks like UML and BPMN in order to address the complexity of the process (Tziovara et al., 2007;El Akkaoui et al., 2013). Furthermore, data quality problems driven by the lack of systematic validation and automated unit testing are important dissuasive factors for using ETL (Singh and Singh, 2010). Oliveira and Belo (2012) points out the lack of a simple and rigorous approach for validation of ETL processes before performing the complete transformation job.
The main goal of this study is to introduce an alternative data transformation methodology addressing the limitations of the traditional ETL approach for the secondary use of EHRs in postmarketing surveillance. The proposed method employs modular components for each step of a data transformation process (i.e., extract, transform, and load). While these steps do not have a clear separation in traditional ETL design tools, we define well-defined boundaries and make use of semantic web technologies for all transformation steps: for the representation and execution of transformation rules as well as representation of the source and target data models. To the best of our knowledge, our proposed methodology is the first automated approach that utilizes semantic web technologies both for description and verification of the target data model as well as the transformation process itself.
As a case study, we validate our methodology by transforming data from two different EHR systems to Observational Medical Outcomes Partnership (OMOP) CDM, one of the most adopted and well-known CDMs for postmarketing safety studies. Throughout this case study, we model OMOP CDM as an ontology, define and execute semantic transformation rules, and employ a software to populate a relational database keeping OMOP CDM data instances and terminology systems. Moreover, we apply descriptive analysis routines from OMOP 7 to compare the occurrences of specific records in source and target databases to validate the transformation process. Lastly, we apply Temporal Pattern Discovery, a statistical method to recognize patterns in observational data for understanding the post market effects of drugs (Norén et al., 2013).
MATERIALS AND METHODS
An overview of the proposed semantic transformation methodology is presented in Figure 1. The process is handled in three steps in analogous with the traditional ETL approach; but with clearly separated activities in each step. Initially, EHR data is retrieved in RDF format from the underlying EHR systems. This data is provided as input to the semantic transformation rules which are created by domain experts to express semantics of the data transformation. Reasoning engine generates RDF instances expressed in the ontological representation of the target CDM as a result of the reasoning process. This is an intermediate representation processed by a software module to automatically populate the target database. Once the target database is populated, standard analysis methods designed for the target CDM can be seamlessly executed.
Design and Implementation of the Transformation Methodology
The generic methodology described above is validated by transforming the observational data from two different EHR systems to OMOP CDM. One EHR source, called LISPA, is a regional data warehouse in Italy and the other one, called TUD, is an EHR database of a university hospital in Germany.
Retrieval of EHR Data in RDF Format
We follow different ways for the two EHR sources to get the data in RDF format. TUD case is trivial as the EHR system there provides the data in RDF format already. However, the LISPA system provides medical data represented in HL7/ASTM Continuity of Care Document (CCD) 8 / IHE Patient Care Coordination (PCC) templates 9 . A tool called Ontmalizer is used to convert data, which is received in native XML representations FIGURE 1 | Overview of the semantic transformation methodology-population of a CDM repository from disparate EHR datasets through semantic mapping rules.
of CCD/PCC templates, into RDF instances. The resultant RDF is a one-to-one correspondence of the CCD/PCC templates. Either obtained as it is or via a transformation process, the RDF representation at the end of this step reflects the data sources' own native formats. As depicted in Figure 1, heterogenous RDF representations are transformed into a common format via semantic mappings in order to perform the same set of analytic routines on a unified clinical research database.
XML based EHR data representation standards are widely used e.g., HL7/ASTM CCD and IHE PCC templates, FHIR Specification 10 , HITSP C32/C83 1112 components, ISO/CEN 13606 Reference Model 13 . Although some standards like FHIR are actively working on developing tools to transform existing data in RDF format, XML is still the most dominant representation used by EHR providers. Furthermore, these efforts mostly focus on developing ad-hoc mappings for a single model (Jiang et al., 2017). Therefore, having the capability of converting any XML data compliant with an XML Schema Definition (XSD) makes Ontmalizer a convenient tool to convert EHR data represented in these standards into RDF format. Medical summaries transformed into RDF format can be provided as input to the proposed semantic transformation framework. On the other hand, legacy EHR systems are based widely on relational database management systems (RDBMS). There are also efforts for accessing relational databases as virtual, readonly RDF graphs and creating dumps of the databases in RDF format (Hert et al., 2011;Michel et al., 2014). It is also possible to utilize mapping languages from relational databases to RDF model to view the relational data in the RDF format (Bizer and Seaborne, 2004). Utilization of such tools expedites the adaptation of the proposed semantic transformation approach by means of automated RDF generation.
Transformation of EHR Data in RDF Format to OMOP Ontology Instances
Observational Medical Outcomes Partnership (OMOP) is a public-private partnership funded and managed through the Foundation for the National Institutes of Health (NIH), with the overall aim to improve the safety monitoring of medical products. For this purpose, OMOP conducts comparative evaluation of analytical methods for safety surveillance of longitudinal observational databases across a spectrum of disparate data sources. OMOP methods are based on a model where data from various sources is extracted and transformed to a common structure for further analysis-OMOP CDM. Use of such common data model eliminates the need for tailoring analytical methods for each data source. Methods in OMOP Method Library 14 can be directly applied once the transformation into OMOP CDM is performed.
Unlike the aforementioned EHR data standards that have an XML Schema defining the structure, OMOP CDM is defined through entity-relationship diagrams and SQL scripts. Since semantic conversion rules are based on RDF technology, we have created the ontological representation For each construct in the original OMOP CDM, an OWL construct has been created according to mappings as given in Table 1. Generated OMOP ontology, depicted in Figure 2, is isomorphic to the original OMOP CDM. Similar to the OMOP CDM, OMOP ontology is constructed around the "Person" main class with links to other classes representing various healthcare entities. The complete ontology is available online 16 . The OMOP ontology introduces a level of abstraction and serves as a middle layer in the transformation process of the source EHR data model into the OMOP CDM. Semantic mapping rules are RDF-to-RDF transformation rules such that they take the patient data available in the RDF format and convert them into OMOP ontology instances by hiding the technical details of both the source EHR systems and the OMOP database. Proposed transformation framework hides the technical details of the transformation process and enables user to focus on expressing transformation semantics through abstract mapping rules. Figure 3A shows sample semantic mapping rules that convert semantic EHR data to an OMOP ontology representation. The mapping rules from local domain models to OMOP CDM have been created in co-production among the linked data experts of technical partners, clinical experts of the EHR data sources, and the safety analysts of the pharmacovigilance partners. The rules were finalized after a few iterations and the formalization of the rules were realized by the linked data experts.
The mapping rules are expressed with Notation 3 (N3) 17 logic, a language developed by semantic web community as an alternative non-XML, human readable serialization for RDF models. These mappings rules can be executed by a N3 reasoner, such as Euler yet another proof Engine (EYE) 18 . A mapping rule in Figure 3A (Line 1-3) depicts how gender expression from the source ontology to is mapped to OMOP ontology. This rule simply generates a one-to-one mapping between gender codes of source and target models. A more complex rule presented in Figure 3A (Line 4-9) demonstrates how birthdate value of patient is broken into components and used to populate corresponding fields in OMOP ontology. The complete set of rules is available online 19 .
Through filtering rules, our semantic rule based approach makes the transformation process easier to validate compared to the traditional ETL approach. Filtering rules can easily be represented in N3 logic and enforced by the N3 reasoner during rule execution. For example, OMOP CDM specifies gender and year of birth as mandatory, while month of birth as optional. Figure 3B depicts the representation of such constraints using OPTIONAL construct.
This approach also enables definition of transformation rules in a modular way for data entities in varying granularities e.g., for concept codes, dates, medications, or observations. Individual rules that are defined at concept code level can be extended and combined in a bottom-up manner in order to define mapping rules for more complex entities such as condition or person. Furthermore, these building blocks can easily be re-used across various versions and models. Such an approach does not only make transformation process easier to maintain compared to the traditional ETL approach, it saves domain experts from the tedious job of developing entire set of transformation rules for every data model. In addition, modularity is a critical enabler for validation of the transformation rules individually as well as the entire process holistically. Exploiting this characteristic, we developed unit tests for each rule. Figure 4A shows a very small portion of a person's EHR data; simply the birthdate of the person. On the other side, Figure 4B shows the unit test checking the validity of the rule in Figure 3A. Running this rule on the test data ( Figure 4A) produces the outcome, on which the unit test is applied. The unit test, defined as a CONSTRUCT query, aims to re-generate all the expected data elements from the outcome described above. Carrying out the data transformation with the semantic conversion rules and filtering rules by a N3 reasoner does not only make the transformation process more explicit and verifiable as depicted, but also makes it provable. The EYE reasoning engine generates logic based proofs for its reasoning process. A proof records the actions such as the data extractions and inferences that lead to the conclusions of a reasoning process. The proof is checked to build trust on the reasoning process.
Populating the OMOP CDM Database With OMOP Ontology Instances
OMOP maintains a library of statistical methods that are developed by researchers and data analysts in pharmacovigilance in order to enable researchers to conduct similar analyses on disparate data sources and obtain comparable results. Standardized methods from the OMOP Methods Library are designed for OMOP CDM, which specifies a relational database schema to define the structure as well as the standardized vocabulary to define the content. Therefore, a relational database instance implementing the OMOP CDM should be populated from the RDF based OMOP ontology instances which are produced during the previous step. At this step, first the condition and medication records along with the persons are loaded to the OMOP database.
One of the major issues at this phase is the mapping of code systems used in the source and target domains. In our case, we have three different code systems for conditions; ICD-9-CM, ICD-10-GM used in our data sources LISPA and TUD, and MedDRA used by clinical analysts for the selected analytical methods. During the transformation, we preserve the original code systems of EHR sources i.e., ICD-9-CM and ICD-10-GM. However, the built-in OMOP vocabularies do not include ICD-9-CM and ICD-10-GM. They were pre-installed into the OMOP CDM before the transformation process. Afterwards, once the analyst initiates an analysis with a MedDRA term, equivalent ICD-9-CM or ICD-10-GM terms are extracted from the Terminology Reasoning Service (Yuksel et al., 2016) and used in the rest of the analysis. By means of terminology reasoning service, our semantic transformation approach is able to run analytical methods which employ a different code system than original data source. For medications, all parties use ATC code system in our pilot scenario. On the other hand, we mapped the gender codes during the transformation time from HL7 Administrative code system to OMOP gender codes using the same Terminology Reasoning Service. Note that use of specific code system is orthogonal to transformation approach presented in this paper and our approach can be integrated with any code system through a Terminology Reasoning Service. Our Terminology Reasoning Service employs original terminology systems as formalized ontologies whose hierarchical relationships are represented with "skos:broader" property, and several reliable resources are utilized for the mappings across different code systems, again as semantic relationships. For example, OMOP project provides mapping of a selected subset of ICD-9-CM and ICD-10-CM codes to SNOMED-CT Clinical Findings; these are represented via the "salus:omopMapping" property. IMI PROTECT project created on ontology called OntoADR, which represented mappings between MedDRA and SNOMED-CT codes; these are represented via the "salus:protectCloseMatch" property in our Terminology Reasoning Service. By using such reliable relationships, we apply a series of terminology reasoning rules, again implemented on top of EYE, to deduce all the code mappings that we need in our use case in advance. These inferred code mappings are fed to the Terminology Reasoning Service, so that the OMOP queries with MedDRA codes are expanded with corresponding codes from ICD-9-CM and ICD-10-GM while being executed on top of source EHR data. Further details about our terminology reasoning approach can be found in Yuksel et al. (2016).
The OMOP DB Adapter, the software module dealing with the database-specific details, consumes the OMOP ontology instances and populates the OMOP CDM instance. In addition to the population of the OMOP CDM instance, it is also responsible for the calculation of condition and drug eras which are the chronological periods of occurrence of conditions and exposure of the drugs, respectively. OMOP CDM is designed to combine independent exposures of the same drug or occurrences of the same condition into a single era through a persistence window. Persistence windows represent the allowable timespan between the exposure of the same drug and occurrence of the same condition. As indicated in the OMOP CDM specification (OMOP CDM, 2013), the length of a persistence window is set as 30 days. In case of medications, if there is less than 30 days between consecutive exposures of the same drug, then a drug era is created and both exposures are merged. If there is no other drug exposure in the persistence window, then an era is created from that single exposure. All drug exposures are either merged into an already existing drug era or constitute a drug era itself so that all exposures are associated with drug eras. Similarly, all condition eras are calculated using the condition occurrence records. The era calculation is performed on the fly during the generation of SQL Insert statements from the OMOP Ontology instances. The OMOP DB Adapter produces one-toone correspondence of the OMOP Ontology instances that are produced in the previous step and populates the target OMOP CDM instance in fully automatized manner.
Evaluation Design
The methodology that we introduce in this study has been built within the SALUS interoperability framework. SALUS (Laleci Erturkmen et al., 2012) is a research project that aims to create a semantic interoperability layer by aligning the source and target data models as well as the code systems used to represent healthcare concepts (e.g., drugs and conditions) in order to enable the secondary use of EHR data for clinical research activities. SALUS semantic interoperability layer accepts eligibility queries and returns resultant patient summaries as instances of a RDF based data model, i.e., the Common Information Model (CIM) of SALUS. The process of getting EHR data in RDF format is realized through SALUS interoperability framework (section Retrieval of EHR Data in RDF Format). The OMOP ontology is derived from these SALUS CIM conformant patient summaries by means of semantic mapping rules (section Transformation of EHR Data in RDF Format to OMOP Ontology Instances) and loaded into the target OMOP CDM instance by means of the OMOP DB Adapter (section Populating the OMOP CDM Database With OMOP Ontology Instances). Through realworld deployments of the SALUS in TUD and LISPA, we aim to show that proposed semantic transformation framework can effectively populate an OMOP CDM instance and expedites clinical research activities on heterogeneous data sources.
We perform two main activities for the evaluation of this methodology. The first is to compare descriptive statistics from the source and target databases in order to verify the correctness of the transformation process, which is a common method in literature to assess to quality of the data transformation process FIGURE 6 | Chronograph visualizing the temporal pattern between a prescription of a drug and the occurrence of a medical event. (Santos et al., 2010;Overhage et al., 2012;Zhou et al., 2013;Jiang et al., 2017). The OMOP team created the Observational Source Characteristics Analysis Report 20 (OSCAR) to characterize the transformed data represented in the OMOP CDM format. OSCAR provides a structured output of the descriptive statistics for the data represented in the OMOP Common Data Model. Such systematic data extraction facilitates comparison of source and target databases for quality assessment and plays a key role in validation of transformation into the OMOP Common Data Model. Since OSCAR is not available in pilot settings of SALUS (i.e., TUD & LISPA), we developed set of ad-hoc SQL queries to calculate statistics similar to OSCAR. Those statistics were extracted for two specific patient cohorts in addition to the 20 http://semanticommunity.info/Data_Science/Observational_Medical_ Outcomes_Partnership#OSCAR entire dataset, namely "Patients with acute myocardial infarction (AMI) occurrence" and "Patients with nifedipine exposure." The main objective of this activity is to assess the quality of the transformation process and show that the resultant database preserves the characteristics of the source database. Similar to original OSCAR, extracted statistics do not contain any personal level data and can be shared.
The second activity is to show that clinical researchers are able to detect and evaluate potential signals from the resultant OMOP CDM instance which is populated using the proposed semantic transformation methodology. One such scenario starts with the analysis of the whole OMOP CDM instance for potential, causally related drug-adverse drug event pairs. In this step, the analysts may choose to investigate one or more (possibly all) drugs against one or more (possibly all) ADEs. Main objective is to show that existing analysis methods from literature can directly be applied on resultant database, eliminating the need for tailoring data analysis for each source data model.
Temporal Association Screening (TAS) tool of the SALUS project realizes this scenario by running IC Temporal Pattern Discovery proposed by Noren (Norén et al., 2013) from the OMOP Method Library. First, the analyst specifies drugs and conditions of interest in ATC and MedDRA terminologies, respectively as depicted in Figure 5. TAS tool queries the Terminology Reasoning Service in order to retrieve corresponding codes in the target data model. After selecting the ICD-9-CM or ICD-10-GM correspondences of the conditions coded by the analyst, Temporal Pattern Discovery is carried out with the ICD codes on OMOP CDM. In order to assess the feasibility and usability of the populated OMOP CDM Instances, the TAS tool is used to investigate drug-condition pairs, i.e., atorvastatin-rhabdomyolysis or nifedipine-acute myocardial infarction (AMI). In addition to the descriptive statistics calculated on the entire dataset, we also report the results related to patient cohort with nifedipine or AMI.
The statistical measure derived from the Temporal Association Screening tool is a measure of disproportionality taking some confounders into account. The result of the analysis is presented graphically, via chronographs, to further analyze a specific drug-event pair visually as presented in Figure 6. Analyzing a chronograph gives the safety analyst a visual representation of the empirical basis for a possible association between a drug prescription and an event (Norén et al., 2013). Our semantic transformation framework has been deployed on two different EHR systems of the pilot sites of the SALUS project, namely LISPA and TUD. LISPA database includes anonymized data of ∼16 million patients with over 10 years' longitudinal data on the average whereas TUD EHR System contains ∼1 million patient records. Figure 7 depicts the deployment of the proposed framework on the two SALUS pilot sites.
RESULTS
In this paper, we present the evaluation results obtained from the TUD data warehouse for showing the correctness and accuracy of the proposed transformation methodology.
The number of total patients in the original TUD data warehouse is 893,870 and 95.66% of those patients were accurately translated to OMOP CDM instance. The number of patients in nifedipine cohort accounts for 0.013% of the total TUD patients whereas the number of patients in AMI cohort corresponds to 0.287% of the population. All 113 patients in original nifedipine cohort were successfully preserved during the transformation process and corresponding patient entities were created in the target database. However, six out of the 2,562 patients in the original AMI cohort could not be transformed into the OMOP CDM instance. Table 2 provides the statistics on the number of patients obtained from both source and target databases.
Detailed investigation on the patients who could not be transferred to the OMOP CDM instance reveals that those patients were left out by the OMOP DB Adapter intentionally for two reasons. First, the patients without a birthdate were not imported since the year of birth is a mandatory field in OMOP CDM (see Figure 4B), which shows the effectiveness of our filtering rules in preserving data integrity. Second, the TUD data warehouse contains several medical cases without any start date; patients associated with those could not be transferred either. All 855101 patients with valid a birthdate and valid medical conditions were successfully transformed into OMOP CDM and loaded into the target database.
Demographic characteristics of the patient population were preserved to a great extent during the transformation. Average age statistics of both gender groups and the whole population are presented in Table 3. Figure 8 presents the gender distribution of the selected cohorts in both original TUD (source) and populated OMOP databases (target). Table 4 compares the nifedipine exposures and AMI occurrences in those databases. In line with the previous findings, both results point out the substantial similarity between two databases; hence the quality of data transformation.
99.93% of all medication records in the drug exposure table were transformed to the OMOP CDM instance. Top 100 most frequent medications accounts for 52.55% of the drug exposure data and 96.87% of those were correctly reflected in the populated OMOP CDM instance. Investigation on the unmapped drug exposures points out the previously discussed filtering conditions. In Figure 9, 50 most frequent medications with their incidence rates in source TUD database and target OMOP CDM instance is presented. It shows that incidence rates of medication in both databases are well-aligned. 99.59% of the condition occurrences were successfully transformed. Top 100 most occurring conditions made up 29.96% of all conditions data and 94.55% of these occurrences were mapped to OMOP CDM instance successfully. Incidence rates of the 50 most occurring conditions in both databases are presented in Figure 10. It also presents a linear trend line as medication incidence rates except a few conditions. The cause of these abnormalities is the differences between the vocabularies of the source and target databases. In TUD, the German modification of ICD10, which is not part of the standardized OMOP Vocabulary, is used to annotate conditions. Therefore, any conflict between these code-lists were handled by the OMOP DB Adapter during the transformation. For instance, the number of conditions which are not in the top 100 in original data were mapped to "Essentielle (primäre) Hypertonie"-"I.10-" and created inconsistency between the original and transformed data. However, this problem effects a statistically negligible portion of the condition records as described previously.
DISCUSSION
This research proposes a semantic conversion approach over the prevalent ETL approach to use observational health records in postmarketing safety studies, and the convenience of the proposed approach for clinical researchers is shown through a real-world deployment. The proposed semantic transformation methodology enables data sources to be transformed into a target common data model by only providing abstract mapping rules and eliminates the effort spent to address database specific details. We implement the proposed methodology based on the OMOP common data model due to rich set of standardized analysis methods and its prevalence in pharmacovigilance research. Unlike the traditional ETL approach, the use of abstract mapping rules enables researchers to focus on the semantics of the data models rather than designing the entire transformation pipeline. Furthermore, abstraction of the semantic and technical steps fosters modularity and re-use, making the proposed framework a scalable, verifiable, and provable approach.
Semantically mediating all the patient data, terminology systems, and the mappings among terminology systems in formalized representations in a knowledge base allows us to improve the quality of data and extend the capabilities of our end-user tools via introduction of new rules easily. The patient data in the EHR systems is not perfect most of the time; it can be incomplete or erroneous. With our approach, for example we are able to insert a new rule to infer diabetes diagnosis by checking the existence of specific medications (e.g., metformin) and laboratory test results (e.g., high glycosylated hemoglobin [HbA1c] result) when diabetes is not explicitly recorded in the list of diagnoses of a patient due to incomplete records. Another advantage of our semantic transformation approach is that, terminology code mapping is decoupled from the content model transformation and all the original codes from the local data (i.e., the source of truth) are preserved in the transformation process so that no meaning is lost in translation. The code mapping is handled dynamically at run time by query expansion enabled by our Terminology Reasoning Service, which is continuously and independently being enriched with validated / invalidated code mappings. In standard ETL approach though, code mapping is hardwired in the ETL scripts and hence the source of truth is lost in the target data. Even when a single new code mapping is introduced by the terminology experts, the ETL scripts have to be updated and re-run on the whole source data.
Use of semantic conversion and filtering rules based on N3 logic enables reasoning engine to generate proof records, which in turn can be automatically checked and verified by the same engine in order to build trust on the transformation process. Thus, correctness of the entire transformation process can be verified in bottom-up manner by combining the proof records generated for each conversion and filtering rule. In addition, storing mapping rules in a machine-processable format combined with robust versioning makes maintenance easier in case of changes in data sources and facilitates adaptation of new models by increasing re-use of existing rules.
On one hand the methodology, specifically the transformation process, is easier to understand as the mapping rules are welldocumented and transparent to the user; easier to maintain (thus easier to generalize and re-use) as the rules are linked to the relevant data entities and stored in a machine-processable format; on the other hand, it is more rigorous thanks to fine-grained unit testing and automated verification mechanisms on the filtering and conversion rules.
Recently, standard developing organizations investigate how semantic technologies can be adopted in order to create a formal, machine-processable representation for existing clinical data models. Yosemite Manifesto (2016) positions RDF as "the best available candidate for a universal healthcare exchange language" and asserts that healthcare information should either already be represented in RDF or have mappings to it. HL7 and CDISC groups are in an effort to publish their existing models in RDF. It is an important move toward enabling semantic interoperability in clinical research, inline with our objectives.
FHIR adapted RDF as the third instance representation format in addition to JSON and XML. In addition, FHIR developed Shape Expressions (ShEx) language to formally define data constraints in FHIR RDF Graphs, which serves as a formal language to describe and define instance graphs . Our proposed approach can complement the FHIR RDF and ShEx and makes it possible to extend this RDF based interoperability framework into clinical research domain. One can define abstract mapping rules from FHIR RDF into OMOP Ontology so that observational data represented in FHIR RDF graphs can be directly used by OMOP methods. Combined with ShEx, our N3 logic based reasoner can verify the correctness and the integrity of both source data and the transformation process itself.
With the increasing adoption of these RDF based standards, the proposed semantic transformation framework will possibly be applied on more domains and substitute the onerous ETL based approaches. From a practical point of view, tools like Ontmalizer are convenient to utilize widely used, XML-based EHR standards such as HL7/ASTM CCD and IHE PCC templates into RDF format. Also, technologies to access relational databases as virtual, read-only RDF graphs are another modality to utilize EHR in RDF format (Michel et al., 2014).
Although it is orthogonal to the transformation approach proposed in this study, terminology mapping is an important task which has to be considered during the transformation of EHRs to a particular CDM. Major sources like BioPortal (Salvadores et al., 2013) already publish some mappings between different terminology systems backed by Linked Data principles. Similarly, logical EHR model proposed by Santos et al. proposes a terminology service based on semantic web technologies where existing terminologies are encoded in RDF and OWL languages (Santos et al., 2010). In the proposed semantic transformation approach, these mappings can be exploited directly through a Terminology Reasoning Service and several manual processing tasks including retrieval and preparation of the mappings are ruled out; enabling a seamless transformation process.
CONCLUSIONS
We have developed a semantic transformation framework in order to transform available EHR data represented in proprietary or standard content models into an OMOP CDM database instance to be utilized in drug surveillance studies. The proposed framework addresses the limitations of traditional ETL based approaches and introduces an easy-to-use, modular, and verifiable approach through clear separation of technical and semantic steps of the transformation pipeline. The framework consists of a set of semantic conversion rules expressing the semantics of the transformation and transforming the EHR data to the OMOP ontology in bottom-up manner; the OMOP ontology as the intermediate semantic representation into which EHR data is transformed; the OMOP Database Adapter as the software component realizing the transformation in syntactic level and populating the relational OMOP database based on the OMOP ontology instances. In order to show that both source data and the target instance have identical descriptive characteristics, we executed a systematic descriptive analysis between the source EHR database and the target OMOP database. As a result, we observed that source data is accurately transformed into the target model and the target database instance exhibits the characteristics of the original patient population. More importantly, we executed the Temporal Pattern Discovery Methods from the OMOP Methods Library on the database populated through the proposed framework. In addition to the validation of our methodology on a EHR database, applying on two very different EHR sources indicate a promising result that EHR data residing in different systems conforming to different data models can be unified in the same database through the same methodology and by using common components such as the semantic conversion rules and the OMOP Database Adapter.
AUTHOR CONTRIBUTIONS
AS and GL: initiated the study design; AP, SG, GL, MY, and AS: realized the implementation of the proposed framework; AP: planned the comparative analyses and conducted the analyses with the help of SG, GL, and AS; authors wrote the first draft of the manuscript together; AP: rewrote new drafts based on input from co-authors. All authors contributed to refinement of the manuscript and approved the final manuscript. | 9,536.8 | 2018-04-30T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
Zero Range Process and Multi-Dimensional Random Walks
The special limit of the totally asymmetric zero range process of the low-dimensional non-equilibrium statistical mechanics described by the non-Hermitian Hamiltonian is considered. The calculation of the conditional probabilities of the model are based on the algebraic Bethe ansatz approach. We demonstrate that the conditional probabilities may be considered as the generating functions of the random multi-dimensional lattice walks bounded by a hyperplane. This type of walks we call the walks over the multi-dimensional simplicial lattices. The answers for the conditional probability and for the number of random walks in the multi-dimensional simplicial lattice are expressed through the symmetric functions.
Introduction
The zero-range process is a stochastic lattice gas where the particles hop randomly with an on-site interaction that makes the jump rate dependent only on the local particle number. The zero range processes (ZRPs) belong to a class of minimal statistical-mechanics models of the low-dimensional non-equilibrium physics [12,30]. Being exactly solvable the model and its other variations are intensively studied both by mathematicians and physicists [9,15,16,21,25,26].
In this paper we consider the totally asymmetric simple zero range process (TASZRP) [13,17], which describes a system of indistinguishable particles placed on a one-dimensional lattice, moving randomly in one direction from right to left with the equal hopping rate on a periodic ring. The dynamical variables of the model are the phase operators [11] which can be regarded as a special limit of q-bosons [5,19]. The application of the quantum inverse method (QIM) [13,17,20] allows to calculate the scalar products and form-factors of the model and represent them in the determinantal form [3,8]. The relation of the considered model and the totaly asymmetric simple exclusion process (TASEP) was discussed in [8,27].
Certain quantum integrable models solvable by the QIM demonstrate close relationship [2,6,7] with the different objects of the enumerative combinatorics [31,32] and the theory of the symmetric functions [22]. It appeared that the correlation functions of some integrable models may be regarded as the generating functions of plane partitions and random walks.
This paper is a contribution to the Special Issue on Recent Advances in Quantum Integrable Systems. The full collection is available at http://www.emis.de/journals/SIGMA/RAQIS2016.html In this paper we shall calculate the conditional probability of the model and reveal its connection with the generating function of the lattice paths on multi-dimensional oriented lattices bounded by a hyperplane.
The layout of the paper is as follows. In the introductory Section 2 we give the definition of the TASZRP. The solution of the model by QIM is presented in Section 3. In Section 4 conditional probability is calculated. The random walks over the M -dimensional oriented simplicial lattice with retaining boundary conditions are introduced and the connection of their generating function with the conditional probability is established.
2 Totally asymmetric simple zero range hopping model Consider a system with N particles on a periodic one-dimensional lattice of length M + 1, i.e., on a ring, where sites m and m + M + 1 are identical. Each site of a lattice contains arbitrary number of particles. The particles evolve with the following dynamics rule: during the time interval [t, t + dt] a particle on a site i jumps with probability dt to the neighbouring site i + 1. There are no restrictions on the number of particles on a lattice site. A configuration C of the system is characterized by the list of all possible arrangements of N particles amongst the M + 1 available sites. The total number of configurations is, therefore, The probability P t (C) of finding the system in configuration C at time t satisfies the master equation where M is the Markov matrix of the size Ω×Ω. For C = C the entry M(C , C) is the transition rate from C to C . It is equal to unity, if the transition is allowed, and is zero otherwise. The diagonal entry −M(C, C) is equal to the number of occupied sites in the configuration C. The elements of the columns and rows of M add up to zero and total probability is conserved The process described is stochastic, the unique stationary state of this system being the one, in which all Ω different configurations C have equal weight.
A configuration C is represented by the sequence (n 0 , n 1 , . . . , n M ) of occupation numbers n j , satisfying the condition 0 ≤ n 0 , n 1 , . . . , n M ≤ N , n 0 + n 1 + · · · + n M = N . We can rewrite the master equation (2.2) for the TAZRP in the form where K is the number of occupied sites in the configuration (n 0 , n 1 , . . . , n M ), i.e., the number of n j = 0. This equation has to be supplemented by the condition that P t (n 0 , . . . , n M ) = 0, if at least one of the n j < 0.
To formulate the dynamic of the system in terms of a quantum mechanical model we denote a particle configuration as a Fock vector |n 0 , . . . , n M and define a probability vector Here, the phase operators φ n , φ † n were introduced. They satisfy commutation relations whereN j is the number operator and π i = 1 − φ † i φ i is vacuum projector: φ j π j = π j φ † j = 0. This algebra has a representation on the Fock space: The evolution of the quantum system |P t = e tH |P 0 is governed by the imaginary time Schrödinger equation which is equivalent to the master equation (2.3) for the probabilities P t (n 0 , . . . , n M ) equal to the matrix elements The initial (t = 0) probability distribution defines the state |P 0 φ j π j−1 · · · π 1 N is a left eigenvector of the model, which obeys Correspondingly, H has one right eigenvector with eigenvalue zero, which is associated with the state |Ω = 1 Ω |S : H|Ω = 0, where Ω is (2.1), S|Ω = 1. The vector S| does not evolve in time and, therefore, corresponds up to normalization factor to a steady state distribution of the system S|P t = S|P 0 = 1.
In this paper we shall calculate the conditional probability which is equal to probability, that in a time t the system will be in a pure state defined by the occupation numbers (n 0 , . . . , n M ) provided that initially the system was prepared in a pure state |P 0 = |m 0 , . . . , m M .
Solution of the TASZRP
To apply the scheme of the QIM to the solution of the Hamiltonian (2.4) we define L-operator [8] which is 2 × 2 matrix with the operator-valued entries acting on the Fock states according to (2.5): where u ∈ C is a parameter. This L-operator satisfies the intertwining relation The monodromy matrix is the matrix product of L-operators The commutation relations of the matrix elements of the monodromy matrix are given by the same R-matrix (3.2) The transfer matrix τ (u) is the trace of the monodromy matrix in the auxiliary space The relation (3.5) means that [τ (u), τ (v)] = 0 for arbitrary u, v ∈ C. From the definitions (3.1) and (3.4) one finds by direct calculation that the entries of the monodromy matrix are polynomials in u 2 . For A(u) and D(u) one has where the dots stand for the terms not important for further consideration. We also find that The representation (3.7) allows to express the Hamiltonian (2.4) through the transfer matrix (3.6) Since the Hamiltonian (2.4) is non-Hermitian we have to distinguish between its right and left eigenvectors. The N -particle right state-vectors are taken in the form where B(u) is defined in (3.8), and u implies a collection of arbitrary complex parameters u j ∈ C: u = (u 0 , u 1 , . . . , u N ). The left state-vectors are equal to where C(u) is given by (3.9). The vacuum state (2.6) is an eigenvector of A(u) and D(u), with the eigen-values where f are the elements of the R-matrix (3.3). In the explicit form the Bethe equations are given by (3.14) There are Ω equation (2.1) sets of solutions of these equations. The eigenvalues Θ N (v, u) of the transfer matrix (3.6) in the general form are equal to
For the model under consideration
Here, the generating function of complete symmetric functions h Equation (3.10) enables to obtain the spectrum of the Hamiltonian (2.4). The N -particle eigenenergies The steady state (2.10) corresponds to a special solution of Bethe equations (3.14) when all u j = ∞.
The calculation of conditional probability
For the models associated with the R-matrix (3.2) the scalar product of the state-vectors (3.11) and (3.12) is given by the formula [1]: where V N (x) is the Vandermonde determinant, and the matrix Q is characterized by the entries Q jk , 1 ≤ j, k ≤ N , with α(u) and δ(u) given by (3.13).
The norm of the state-vector N 2 (u) ≡ Ψ N (u)|Ψ N (u) is defined by the scalar product (4.1) when the arguments v and u satisfy the Bethe equations (3.14). For the present case of the generalized phase model we substitute v k = u k , ∀ k, respecting the Bethe equations (3.14) into the entries of the matrix Q. The resulting matrix is denoted as Q, and its entries at j = k are equal to where U 2 is given by (3.14). L'Hôspital rule gives the diagonal entries of Q As a result, the squared norm N 2 (u) on the Bethe solution takes the form The state-vectors belonging to the different sets of solutions of the Bethe equations (3.14) are orthogonal. The eigenvectors (3.11) and (3.12) provide the resolution of the identity operator where the summation {u} is over all independent solutions of the Bethe equations (3.14). Inserting the resolution of the identity operator (4.5) into (2.11), one obtains the general answer for the conditional probability For the simplicity let us consider the initial state equal to |N, 0, . . . , 0 and the final one respectively to 0, 0, . . . , N |. The conditional probability (2.11) of this process is specified as follows where equation (2.7) has been used. Inserting the resolution of the identity operator (4.5) into (4.7), we obtain where the summation is over all independent solutions of equations (3.14). The decomposition (3.9) for B(u) and C(u) gives that and eventually the answer is where N 2 (u) is given by (4.3), (4.4).
To obtain the explicit answer for the conditional probability in the general case (4.6) we shall express state vectors (3.11) and (3.12) in the coordinate form. The state-vector (3.11) has the representation where the symmetric function χ R λ is equal, up to a multiplicative pre-factor, to Here λ denotes the partition (λ 1 , . . . , λ N ) of non-increasing non-negative integers, There is a one-to-one correspondence between a sequence of the occupation numbers (n 0 , n 1 , . . . , n M ), n 0 +n 1 +· · ·+n M = N , and the partition where each number S appears n S times (see Fig. 2). The sum in equation (4.8) is taken over all partitions λ into at most N parts with N ≤ M . Acting by the Hamiltonian (2.4) on the state-vector (4.8), we find that the wave function (4.9) satisfies the equation together with the exclusion condition The energy E N is given by (3.15). The state-vector (4.8) is the eigenvector of the Hamiltonian (2.4) with the periodic boundary conditions if the parameters u j satisfy the Bethe equations (3.14). The relations (4.8), (4.10) and (4.11) can be viewed as an implementation of the coordinate Bethe ansatz [17], which is an alternative to the approach of the algebraic Bethe ansatz considered in Section 3. Although the model is solved by the algebraic Bethe ansatz, representations of the type of (4.8) are especially useful in discussing the combinatorial properties of the quantum integrable models [2,6,7].
The set Simp (N ) Z M +1 is compact M -dimensional, and we shall call it simplicial lattice. A twodimensional triangular simplicial lattice is presented in Fig. 3. A sequence of K + 1 points in Z M +1 is called lattice path of K steps [18]. Directed random walks on M -dimensional oriented simplicial lattice are defined by a step set Γ M = (k 0 , k 1 , . . . , k M ) so that k i = −1, k i+1 = 1 for all pairs (i, i + 1) with i ∈ M and M + 1 = 0 (mod 2), and k j = 0 for all j ∈ M\{i, i + 1}. It may occur that some points on the boundary of the simplicial lattice also belong to a random walk trajectory. Therefore, the walker's movements should be supplied with appropriate boundary conditions. The boundary of the simplicial lattice consists of M + 1 faces of highest dimensionality M − 1. Under the retaining boundary conditions the walker comes to a node of the boundary, and either continues to move in accordance with the elements of Γ M , or keeps staying in the node. An oriented two-dimensional simplicial lattice with the retaining boundary conditions is presented in Fig. 4.
To establish the connection of the exponential generation function of lattice paths and the conditional probability (2.11) we shall interpret the coordinates n j of a walker n = (n 0 , n 1 , . . . , n M ) ∈ Z M +1 in a simplicial lattice Simp (N ) Z M +1 as the occupation numbers of (M +1)-component Fock space and describe the steps of a walker with the help of the Fock state-vectors |n ≡ |n 0 , n 1 , . . . , n M . Operator φ j shifts the value of the j th coordinate of the walker downwards n j → a b n j −1, while φ † j upwards n j → n j +1. If the walker arrived at an arbitrary node of s th component of the boundary, it can make either allowed step or stay on it due the action of operator π s (2.9). The number operator N j acts as the coordinate operator N j |n 0 , n 1 , . . . , n j , . . . , n D = n j |n 0 , n 1 , . . . , n j , . . . , n D . Since the occupation numbers are non-negative integers and their sum is conserved (2.8), we can regard operator where k ≥ 1, and it is natural to impose the condition G 0 (l | j) = M l=0 δ l l j l .
The exponential generating function of lattice paths in Simp (N ) (Z M +1 ) is defined as a formal series Due to definition (5.2) it may be expressed as Taking into account the connection (5.2) we obtain the desired relation of the conditional probability (2.11) and generation function of lattice paths (5.3): From the expression (4.13) it follows that the number of lattice paths made by a walker in the M -dimensional simplicial lattice in k steps with the ending nodes (j 0 , j 1 , . . . , j M ) and (l 0 , l 1 , . . . , l M ) is equal to where the function h 1 u −2 is introduced in (3.15). | 3,774.4 | 2017-03-22T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Widely Distributed Radar Imaging: Unmediated ADMM Based Approach
In this paper, we present a novel approach to reconstruct a unique image of an observed scene with widely distributed radar sensors. The problem is posed as a constrained optimization problem in which the global image which represents the aggregate view of the sensors is a decision variable. While the problem is designed to promote a sparse solution for the global image, it is constrained such that a relationship with local images that can be reconstructed using the measurements at each sensor is respected. Two problem formulations are introduced by stipulating two different establishments of that relationship. The proposed formulations are designed according to consensus ADMM (CADMM) and sharing ADMM (SADMM), and their solutions are provided accordingly as iterative algorithms. We drive the explicit variable updates for each algorithm in addition to the recommended scheme for hybrid parallel implementation on the distributed sensors and a central processing unit. Our algorithms are validated and their performance is evaluated exploiting Civilian Vehicles Dome data-set to realize different scenarios of practical relevance. Experimental results show the effectiveness of the proposed algorithms, especially in cases with limited measurements.
I. INTRODUCTION
Widely distributed radar systems are robust and faulttolerant systems that provide high angular resolution and permit the exploitation of spatial diversity and occlusions avoidance [1]. With applications in surveillance, assisted living, and health monitoring, radar systems with widely distributed antennas are expected to play a vital role in emerging sensing paradigms [2], [3]. Under this architecture, the observed targets feature an aspect dependent scattering behavior restricting the employment of conventional imaging methods. The main impediment arises due to the adoption of the isotropic point scattering model of targets, thereby preventing algorithms like back-projection (BP) to provide an adequate imaging performance [4].
The problem of radar imaging with widely distributed sensors has not received enough attention in the literature. The works [5]- [7] considered radar imaging with distributed antennas and the related issues due to ambiguity in antenna positions and clocks synchronization. In these works, modelbased optimization algorithms are utilized to jointly achieve the imaging task and resolve such issues. However, in these works, an isotropic scattering model, suitable when antennas are closely spaced, is assumed; this is clearly not suitable when widely separated antennas are considered. On the other hand, in wide-angle synthetic aperture radars (WSAR), which bear a close resemblance to a widely distributed architecture, two approaches exist for imaging [8]. The first one is based on parametric modeling that characterizes the canonical scattering behavior of scatterers [9]- [12]. Correspondingly, the scene image is reconstructed through joint processing of the measurements from the whole aperture exploiting the model. Nevertheless, the imaging involves a dictionary search process that is computationally cumbersome [13]. The other approach is composite imaging [14]- [17] in which the full aperture is divided into sub-apertures within which the point scattering model holds. Respectively, images of each sub-aperture are formed through regularized optimization exploiting specific features such as sparsity. As a final step, individual images are fused to constitute an aggregate image of the scene through simple techniques such as the generalized likelihood ratio test (GLRT). This approach does not fully exploit the information from different aspects where the final image of the scene is only a fused version of the images reconstructed with subaperture data.
While in this paper we propose a sub-aperture method, unlike composite imaging, we propose to solve the problem of widely distributed radar imaging by directly reconstructing a global image that is introduced as an aggregate view of the scene. Besides, the prior information is only imposed on the global image rather than the local images of individual sensors. Concurrently, the correspondence between the local images and the global one is defined as a constraint to the optimization problem. Our approach allows for better data exploitation by including the global image as a decision variable in the optimization problem. We then provide a solution based on Alternating direction method of multipliers (ADMM) framework [18]. ADMM is a powerful distributed optimization regime suitable for systems that incorporate collection of measurements through a distributed architecture. In [19] and [20], ADMM has been introduced as a fast reconstruction method for generic imaging inverse problems. Further, in [21] it is applied to reconstruct complex SAR images with enhanced features in particular, and to perform imaging with undersampled measurements in the presence of phase errors in [22].
While in these works ADMM has been mainly utilized to facilitate the solution of a non-constrained optimization problem by the virtue of variable splitting, we employ its constrained formulation directly in the interest of exploiting the system architecture and implementing parallelizable image reconstruction algorithms. Accordingly, we establish two problem formulations inspired by consensus ADMM (CADMM) and sharing ADMM (SADMM). The first formulation comes as a generalization of our previous work [23] in which CADMM is utilized to mitigate the layover artifacts in widely distributed radar imaging by considering sub-aperture measurements from different elevations. In this work, however, we present CADMM formulation to introduce the association between sub-aperture images and the global image, and generally reconstruct the image of the scene without restriction on data viewing angles. Moreover, by stipulating the more relaxed sharing association in the constraints, we introduce the second problem formulation based on SADMM. The different association introduced by SADMM formulation enables another exploitation of the relationship between the data collected by the sub-apertures. Additionally, it provides an alternative realization of the system architecture through the ensuing unalike solution. We provide the solutions as iterative algorithms with a recommendation of a parallel implementation paradigm. Finally, Civilian Vehicles Dome data-set [24] is used to realize three experiments which comprise different practical use cases. Through them, we validate our algorithms and show the performance of CADMM and SADMM, where the latter is found to provide an enhanced imaging performance in most of the scenarios. Our proposed approach can be regarded as a general framework suitable to be implemented on various architectures including WSAR and radar systems with collocated antennas.
II. SIGNAL MODEL AND BACKGROUND
In this section, we provide the signal model we adopted for our distributed architecture and provide background about the imaging problem formulation in the state of the art.
Throughout this paper, vectors are denoted by lower case bold font, while matrices are in uppercase bold. I L is the identity matrix of size L × L and 1 N is a vector of all ones of size N × 1. The superscripts . T and . H denote respectively the transpose and the complex conjugate transpose of a vector or a matrix. On the other hand, superscripts in parenthesis denote the iteration count. The symbol ⊗ is used for the Kronecker product.
Considering the system geometry illustrated in Fig. 1, a group of red crossed circles constitute a cluster of antenna phase centers (APCs). The figure shows the case we consider in our paper where the Q sensors would each form a single cluster are at identical elevation angles. We consider a monostatic configuration where each sensor receives the reflections due its own illumination of the scene and does not process the reflections induced by transmissions from others. Accordingly, our proposed algorithms can be applied for architectures that be formed either by a real or a synthetic aperture. At each cluster, the isotropic scattering model of the targets in the scene is assumed. This way, the problem of aspect-dependant scattering behavior can be relaxed and Q local images can be formed by processing the measurements of individual clusters. Since our goal is to form a reflectivity image of the scene, we adopt the 2D tomographic radar imaging framework [25]. Accordingly, the signal received at the q th cluster after dechirping is where w = 1, . . . , W is the index of the sampled fast time frequency, m = 1, . . . , M is the index of an APC within the cluster,x q (x, y) indicates the complex reflectivity coefficient of a ground target at coordinates (x, y) with respect to the q th cluster, f w denotes the beat linear frequency, θ m is the azimuth angle of the m th element, and ϕ q is the elevation angle of the q th cluster. Approximating the scene with a uniform grid of N = N x × N y pixels and stacking the M vectors containing the frequency domain samples received by the APCs in the q th sensor, the phase history measurements can be written in a matrix form as where A q ∈ C W M ×N is the system model based forward operator,x q ∈ C N ×1 is the vector containing the complex scattering coefficients of the entire scene with respect to the q th cluster, and n q ∈ C W M ×1 summarizes all errors including receiver and measurement noise as well as model imperfections.
Composite imaging algorithms obtain local images utilizing the signal received at each cluster and subsequently fuse them into a global image. The scene size is usually much larger than the number of measurements N >> W M and the imaging task is the inverse problem of (2) which, consequently, becomes ill-posed. Compressed sensing methods are commonly used to solve this inverse problem. Particularly, local images are obtained by solving Q regularized least square optimization problems for each cluster of the form wherex q is the estimated local image using the measurements y q for q = 1, . . . , Q and h(·) is a regularization function that imposes apriori information about local images. Different choices of regularization function h(·) exist to enhance some image features such as sparsity and smoothness, among others. when h(·) is a separable function (e.g. l 1 -norm), the Q problems can be represented as a single optimization problem in Q variables since the least squares term is naturally separable. Explicitly, the problem can be written as (4) is an unconstrained regularized optimization problem which has been tackled through different optimization techniques in the literature. Finally, the image of the scene is obtained through a fusion step of the Q reconstructed images which can be as simple as a pixel-wise maximization among the Q local images.
As mentioned in the introduction, we alternatively reconstruct the global image of the scene by introducing its variable in the objective function and imposing the l 1 -norm on it directly for a sparsity-driven solution. Simultaneously, the relationship between the global image and local images is defined as a constraint for our optimization problem. In the next section, based on ADMM framework, we provide two alternative problem formulations along with their solutions.
III. ADMM FRAMEWORK FOR DISTRIBUTED RADAR IMAGING
ADMM is a powerful framework that renders itself amenable for optimization problems of distributed nature. It is a suitable tool to be utilized in a distributed radar system especially when the component sensors are equipped with some computation power capabilities. Although this computation power might be limited, it can be exploited to process some information in order to reduce the communication overhead and the computational burden. It also reduces latency as certain operations can already be performed in parallel at the nodes. Here, we first give a brief introduction of general ADMM formulation followed by our proposed reformulations of the problem in (4) according to ADMM framework.
Consider the following constrained optimization problem with linear constraints over two separable functions in two variables u and z arg min where G, H, and c are the matrices and vector of appropriate dimensions that establish the constraints on the variables u and z. The augmented Lagrangian function of the above problem becomes where σ is the dual variable, β is the augmented Lagrangian parameter, and ·, · denotes the inner product of vectors. The ADMM solution to the above problem is obtained by iteratively minimizing the augmented Lagrangian function with respect to both the variables u and z in an alternating fashion in addition to updating the dual variable each iteration. Accordingly, after the k th iteration, the ADMM variable updates consist of [18] Embracing ADMM framework, we propose two different formulations for (4). By introducing a new variable x G ∈ R N ×1 representing the magnitude of the global image, both the formulations will have the same objective function of minimizing the sum of the least square terms with respect to local images, in addition to minimizing the l 1 norm of the global image. The formulations differ in the constraints which define the relation between the global and local images. We consider the magnitude of the images as our optimization variables assuming that the phases are estimated in a previous step. Specifically, we assume that we have estimated Θ q ∈ C N ×N , the diagonal matrix containing the phase of all pixels of local image over its diagonal such thatx q = Θ q x q . For ease of notation, from now on we will consider the matrix Θ q included in the measurement matrix A q . The details regarding the estimation of Θ q will be discussed in the next section. Accordingly, with reference to (5), our first variable is x ∈ R QN ×1 containing the magnitude of all local images x = {x q } Q q=1 , and the second variable represents the magnitude of the global image x G . Consequently, our objective function will be f ( In the sequel, we will provide our proposed aforementioned formulations and their solutions in terms of variable updates according to (7).
A. Consensus ADMM (CADMM)
As the name suggests, by posing the problem according to this formulation, we pursue a solution which, at optimum, provides a global image on which all clusters reach a consensus. Consequently, the constraints, in this case, are defined to impose this relationship between the global and local images. Additionally, as mentioned earlier and by following our paper [23], we impose the l 1 -norm function to promote sparse global image solution. The problem becomes arg min where λ and µ are positive hyperparameters set to penalize less sparse global image solutions and trade-off the data fidelity term, respectively. Note that the Q constraints in (8) can be written in the form of the constraint in (5) by having G = As indicated in (7), the solution of (8) can be obtained by alternately minimizing its associated augmented Lagrangian with respect to x, x G , and the dual variable σ. The augmented Lagrangian is The resulting updates of each variable according to CADMM formulation are provided hereinafter in details.
1) Update of x (Local Images): Let x G
(k) and σ (k) denote the values of x G and σ after the k th iteration. Since (9) is decomposable with respect to x q , the updated x (k+1) can be obtained by updating all local images x (k+1) q for q = 1, . . . , Q in parallel as The problem in (10) is differentiable with respect to x q and the (k+1) st update can be obtained in a closed-form by letting ∇ xq L = 0 resulting in Note that the inverse in (11) is possible since µA H q A q + βI N is a positive definite matrix.
2) Update of x G (Global Image): For the global image update, following the ADMM framework, we consider This objective function above involves information from all Q clusters and is not decomposable with respect to x G . It further involves a non-differentiable function x G 1 . Thus, it can neither be parallelized nor solved in a closed form like (10). As a result, it is more suitable for the global image update to be carried out in a central processor after collecting local updates calculated at the distributed clusters. Moreover, for the subsequent update of local images, global image needs to be broadcast to all the clusters. Alternatively, if the global image update were to be carried out at distributed clusters, a fully meshed communication network would be needed to exchange all local updates among the Q clusters. Later in III-C, we will show how to solve (12) in the central node.
3) Update of σ (Dual Variable): After updating the global image, the dual variable can be updated by The Kronecker product is used to replicate the global image to the same size of the vectors x and σ. Since (13) is decomposable, it can be carried out in parallel as well as local images updates. Instead, it is more convenient for the dual variable to be updated in the central node subsequent to that of global images and broadcast to the distributed clusters for the next update of the local image.
B. Sharing ADMM (SADMM)
Under this formulation, we impose a different constraint in the optimization problem to explore a different relationship between local images and global image. The constraint is set such as the reconstructed global image is the average of all local images. Accordingly, the problem becomes arg min x q contains the sum of magnitudes of local images. Note that the size of constraints is reduced to the size of a single image instead of Q images in the consensus formulation. The nomenclature stems from the constraint above since the global image is considered a shared combination of all local images. We can again write the constraint of (14) in the form of the constraint in (5) The augmented Lagrangian of (14) can then be written as Note that since the number of constraints are reduced, the dual variable σ has a size N × 1 instead of QN × 1 as in CADMM. Next, we provide the variable updates due to SADMM formulation.
1) Update of x (Local Images): Unlike the consensus case, the augmented Lagrangian function (15) is not directly decomposable into Q terms because of the sum variablē x inside the augmented quadratic term. However, we show here that it is still possible to solve for each local image variable x q in parallel. Similar to (10), we use the values of x (k) G and σ (k) q in order to solve for x q at the (k + 1) st iteration. However, since we have alsox in (15), we fix also all other local image variables to x q . Consequently, the q th local image update can be obtained by Now similar to (10), the problem in (16) is fully differentiable with respect to x q and the k + 1 st update can be obtained in the closed-form From (17), we can observe that in SADMM, the q th local image update requires the previous state x q . This suggests the need for extra memory with respect to CADMM to track the previous state at each cluster. Additionally, the distributed clusters will need to receive each other updates. This can be broadcast by the central node subsequent to the update of global image and dual variables. The central node will have such values any way since they are needed for the global image update. Although the exchanged information between the central node and the distributed clusters in SADMM seems to be more than CADMM, the size of those variables to be exchanged is still less than CADMM by a factor of (Q + 2)/3 due to the reduced size of the dual variable in SADMM. This provides a significant reduction in communication bandwidth requirements between the central node and the sensors especially for a large number of distributed sensors.
2) Update of x G (Global Image): Similar to the global image update in Section III-A, after collecting the local images updates from the distributed sensors, the global image update for the sharing formulation is obtained by minimizing the augmented Lagrangian with respect to x G as follows Again, we here assume that both the global image and the dual variable is calculated at the central node. As a result, the sum of the local imagesx (k) is calculated directly at the central node following the local updates collection needed for the global image update. The solution of (18) will be detailed later in the section.
3) Update of σ (Dual variable): The dual variable update then is a straight forward step of the ADMM algorithm To summarize, despite the performance of each formulation in terms of image quality which will be examined in the next section, SADMM provides an alternative processing architecture with respect to CADMM in which: 1) extra memory is needed at the distributed clusters for local image updates, 2) communication overhead between the distributed clusters and the central node is reduced by a factor of (Q + 2)/3 due to the size difference of the constraints. In both CADMM and SADMM, local images can be updated in parallel at each cluster node and communicated back to the central node. The central node in turns updates both the global image and the dual variable. Subsequently, both the global image and dual variable updates are broadcast back to the distributed clusters in the case of CADMM in addition to the sum of the previous local images in case of SADMM in order to calculate the next local image updates. Other comparisons and performance metrics such as image reconstruction quality and convergence rate will be provided later in Section IV.
C. Solution Techniques
Considering the above formulations, in this section, we provide the techniques used to solve the sub-problems for local and global images updates. Additionally, we provide the stopping criteria adopted to terminate both algorithms. Lastly, we show how to obtain the phases of complex-valued local images prior to ADMM iterations.
1) Variable Updates:
The update of local images employs a matrix inversion in a closed-form solution. However, due to the large size of the problem, it needs to be solved iteratively using a numerical procedure. In particular, we carry out the inversion in the local update using conjugate gradient (CG) method [26]. Being a numerical method, the output of CG is a complex-valued imaged since both the measurements y q and the forward model A q are complex. However, the optimization is carried out over the real-valued magnitude of the images where the phase of the images is included in the measurement matrix. Therefore, subsequent to the update of the local image using CG, a projection of the resulting complex image on the real positive orthant is applied to obtain the magnitude of the local image. This projection implicitly states that the phase of the complex-valued output of CG is regarded as a numerical phase error.
The global image on the other hand, requires solving a LASSO like optimization problem which can be solved using a proximal gradient method. In our numerical experiments, we used the accelerated proximal gradient [27] to calculate the global image updates.
2) Stopping Criteria: Variable updates are repeated until termination which is decided upon comparing the values of the primal and dual residuals with their corresponding feasibility tolerances pri and dual respectively. Following the definitions of the residuals and the stopping criteria brought up in [18], let η pri and η dual denote the primal and dual residuals respectively. The dual residual is defined over the successive updates of global image variable. Hence, it is the same for both CADMM and SADMM. Accordingly, at the k th iteration, the dual residual for both formulations is given by On the other hand, the primal residual measures the constraint satisfaction and takes a different form in the two formulations. For CADMM, the primal residual is given by Similarly, in SADMM the primal residual is The feasibility tolerances can be chosen based on an absolute tolerance abs and a relative tolerance rel . Similar to the primal and dual residuals, the feasibility tolerances indicate non-identical definitions depending on the formulation due to the different constraints. In CADMM, they are given by (23) Likewise, for SADMM, feasibility tolerances are Lastly, for both CADMM and SADMM, the algorithm is terminated either when a defined maximum number of iterations is reached or when both the ensuing inequalities are satisfied where the variables in the above inequalities are calculated according to the definitions of the corresponding formulation.
3) Phase Matrix Θ: As mentioned earlier, we assume that the phase of local images is already provided prior to carrying out the optimization algorithms. Our proposed imaging methods can be considered partially non-coherent imaging method since the phases are only used within the data-fidelity term in the objective function. Thus, a coarse estimated phase of local images is sufficient for our algorithms to perform satisfactorily. Therefore, we use the phase of the images obtained by backprojection for each cluster as an estimate of the phase of local images. Accordingly, for each cluster q, the diagonal matrix containing the phase of all pixels of its local image is constructed as
IV. PERFORMANCE EVALUATION
In this section, we validate and evaluate the performance of the algorithms proposed in Section III to reconstruct radar images using distributed sensor clusters. To achieve this goal, we use the publicly available civilian vehicle dome (CV domes) data-set which offers simulated scattering data of civilian vehicle facet models. Although the data-set is originally intended to simulate circular synthetic aperture radars, a particular configuration of WSAR, it can also be used to simulate a mono-static distributed radar sensors system.
First, we give a brief introduction of the data-set and its parameters. Consequently, we define the performance metrics used in our evaluation to compare both algorithms. Finally, we evaluate our algorithms on three different scenarios of practical relevance for several applications. The scenarios are realized by different combinations of full/limited views and full/limited bandwidth measurements as we will show later in this section.
A. Data-set Introduction
CV Domes data-set contains simulated high-frequency scattering data of ten civilian vehicles. For each model, an Xband electromagnetic mono-static scattering is simulated in a far-field scenario. Scattered waves are simulated with full polarization over an azimuth extent of 360°where 16 viewing angles per degree of azimuth are considered. Similarly, data are simulated over the range of elevation angles from 30°to 60°. For each azimuth and elevation viewing angle tuple, 512 frequency samples of complex-valued scattering coefficients centered at 9.6 GHz and spanning a bandwidth of approximately 5.35 GHz are provided. The range information of those frequency measurements is compressed already resulting in what is usually referred to as phase history.
B. Performance Metrics
As discussed in the previous section, the difference in problem formulation between CADMM and SADMM has induced slightly different system implementation features in terms of memory requirements and communication bandwidth. Additionally, to compare the performance of the proposed algorithms, the following aspects are considered. 1) Convergence rate: it can be assessed by the number of iterations needed to reach the stopping criteria that is defined identically for both algorithms. 2) Computational complexity: the main computational burden of both algorithms lies in the local image updates (11) and (17), where the matrix inversion term is present. Due to their large size, the measurement matrices A q are realized through matrix operators based on twodimensional non-uniform Fast Fourier transform (2D NuFFT) [28]. Moreover, the inversion step is carried out numerically using CG as mentioned earlier. Thus, while the complexity of both algorithms seems to be equivalent, the convergence of CG highly depends on the other variables in the update formulas (11) and (17). As a result, the comparison solely in terms of the number of iterations is not indicative since a single iteration in each of the algorithms may realize a different cost. Accordingly, computational complexity can be measured by calculating the total processing time spent until termination. 3) Image reconstruction quality: the data-set does not contain a reference image with which a comparison can be made in order to evaluate the quality of reconstructed images. Correspondingly, we use the image entropy as a quantitative metric to assess the image quality being a measure of its sharpness or constituent randomness. The smaller the image entropy the sharper is the reconstructed image and vice versa. The entropy of an image is calculated in bits after intensity saturation for the values beyond a desired dynamic range in the dB scale followed by image translation to the grayscale. Consequently, a randomly generated image would have an entropy equal or close to 8 bits. Additionally, as a subjective measure of images quality, the images reconstructed utilizing full aperture and full bandwidth measurement could act as a visual reference for the other scenarios when the aperture and/or the bandwidth measurements are reduced.
C. Experiments
In this subsection, we illustrate the reconstructed images by CADMM and SADMM using the simulated data of two different vehicle models differing in type and geometry. The first is of a Jeep Cherokee (SUV) 'Jeep99', while the second is of a Toyota Tacoma (Pick-up) 'Tacoma'. We consider image reconstruction utilizing the data-set according to the following scenarios: 1) Full aperture views measurements: for a general validation of both algorithms, the full 360°aperture measurements of the entire available bandwidth is considered. 2) Full views and limited bandwidth measurements: limited frequency samples of the full aperture measurements are considered for image reconstruction realizing a typical use case of WSAR imaging. 3) Limited views and limited bandwidth measurements: assuming a distributed system of radar sensors illuminating the scene according to a time division multiplexing (TDM) scheme, limited frequency samples of limited aperture measurements are considered for images reconstruction. For all experiments, measurements are taken at a fixed elevation angle of 30°and with 'HH' polarization. Moreover, all measurements are impaired with a white Gaussian noise realizing a signal to noise ratio (SNR) of 15 dB. A fine grid of 256 cells in both range and cross-range directions (7 meterlong each) is used resulting in a total number of N = 65536 pixels. Additionally, images are reconstructed considering an elevated image plane at 1 meter from the ground level. This renders the projection of layover-ed elements to be mostly contained within vehicles outlines and permits a better visual interpretation.
The choice of µ, β, and λ for both CADMM and SADMM is made through a parameter sweep guided by normalized image sparsity and image entropy as performance metrics. The normalized sparsity considered is the percentage of the non-zero pixels in the image. Hyperparameters used to reconstruct the illustrated images throughout this section are those that guarantee a similar degree of sparsity for both CADMM and SADMM images at a lower entropy value. Also, we tried to pick the parameters where β is as close as possible in both methods for a fair convergence comparison. Given the considered scene size and typical dimensions of a vehicle, a sparsity level around the range of 5% − 10% is considered in our experiments based on the scenario and the vehicle. Needless to say, parameter sweep analysis is conducted separately for each data-set and each scenario. It is worth mentioning that, for a given β, the ratio λ/µ can be automatically selected given the desired sparsity range following our method proposed in [29]. However, since for both CADMM and SADMM an empirical search for β is needed and at almost exact sparsity levels, a parameter sweep would facilitate finding more accurate parameters for the sake of comparison. An example of image sparsity and image entropy versus different parameters is shown for the first scenario. For later experiments, such analysis will be omitted for brevity. CADMM and SADMM are run for a maximum number of 100 iterations while the feasibility tolerances abs and rel are both set to 10 −2 . Scenario-specific parameters and reconstructed images of all the aforementioned experiments are provided and discussed in the sequel.
1) Full Views -Full Bandwidth (FVFB):
Using the full 360°azimuth extent and the full bandwidth of the dataset (approximately equal to 5.35 GHz), we reconstruct the images of the two vehicles to validate our proposed methods and show their superior reconstruction quality with respect to the conventional BP method. The reconstructed images in this experiment can also be used as a reference for the subsequent scenarios when images are reconstructed using limited views and/or limited bandwidth measurements. As mentioned previously, to avoid anisotropic scattering over the large angular azimuth extent, it is divided into sub-apertures (clusters) in which point isotropic scattering assumption is valid. The angular width of a cluster has a double-faced effect on the imaging performance trading-off between angular resolution and homogeneous targets scattering. The wider the cluster the better the resolution while the scattering becomes anisotropic. Therefore, in this experiment, the cluster width is chosen to maximize the imaging performance in a balanced manner. We use a cluster width of 5°which is a good tradeoff since it allows for a cross range resolution approximately equivalent to the range resolution provided by the bandwidth while maintaining a minimum degree of scattering anisotropicity of the targets.
As anticipated earlier, we perform a hyperparameter sweep over different values of β and λ/µ to obtain the values that provide good reconstruction quality given image sparsity level. The sparsity level is decided based on the assumed dimensions of the observed targets relative to the scene dimensions. They are set separately for each scenario due to the dissimilarities of aperture size and signal bandwidth utilized, hence images exhibit a different resolution in each scenario. Fig. 2 shows the normalized sparsity level and entropy of the reconstructed image of the 'Jeep99' data-set versus the varying parameters of β and λ/µ for both SADMM and CADMM. The desired sparsity level range is highlighted with a light green color in the figure. As expected, the higher the ratio λ/µ, the more sparse are the reconstructed images for both the algorithms and naturally the lower the entropy. This is true except for some values of β for which SADMM optimization does not converge to a sparse solution. Such divergence for those parameters can be seen from the entropy values which indicate reconstruction of random images. Moreover, beyond those values of β, reconstructed images using the same λ/µ ratio exhibit similar sparsity levels and attain close values for entropy as shown in the magnified part of Fig. 2.
To first show the performance of both methods, CADMM and SADMM images reconstructed considering two different sparsity levels are shown for the 'Jeep99' data-set in Fig. 3. The left column images have a normalized sparsity degree of 12% versus 5% for the images in the right column. For lower sparsity images, CADMM show a sharp high intensity reconstruction of the strong features of the vehicle such as edges and ceiling structure, and a lower intensity reconstruction of the weaker features such as the projection of the tire wheels. On the other hand, SADMM images manifest an averaged intensity of the different parts of the vehicle resulting in less sharp images. While the behavior is maintained for the images at higher sparsity, the weaker features are further suppressed in the images of both methods. Moreover, the images of both methods at a similar sparsity level have similar entropy values. This behavior is again confirmed by the reconstructed images of 'Tocoma' vehicle. At a sparsity level of 10%, the Fig .4 in addition to the images reconstructed through conventional back-projection averaged over all clusters. The red dots on the images show the angular aperture views which cover 360°i n this experiment. Note that due to the abundance of the available bandwidth and the full aperture measurements, both CADMM and SADMM images have very high resolution and show the detailed structure of the imaged objects.
Additionally, the processing time until termination in SADMM is slightly higher than CADMM. The numerical values of the parameters used in image reconstruction and the corresponding metrics are reported in Table I while the ratio between SADMM and CADMM processing time and number of iterations are reported in Table II. Thus, analyzing the performance of our proposed algorithms in this case is of high interest and is a relevant use case in WSAR imaging. To realize a limited bandwidth measurement scenario, we utilized an equivalent of 600 MHz samples around the center frequency from the samples of phase history. In this experiment as well, SADMM images exhibit the property of higher averaging than that of concentrated intensity resulting from CADMM images as shown in Fig. 5. This peculiarity makes it capable of capturing the true structure of the vehicles even with low bandwidth measurements when referring to the images in Fig. 4 of FVFB experiment. On the other hand, images reconstructed using CADMM have higher intensity around the strong reflectors and weaker or no intensity of the poorly reflective components of the vehicles; this is evident in the reconstructed images of both vehicles in Fig. 5. For example, the crossing of the beams in the rear part of 'Jeep99' data-set is localized and well identified with strong intensity in SADMM image, while in CADMM image, the same area exhibits only a strong intensity in a wider region. Similarly, in 'Tacoma' images, CADMM fails to capture the outline of the vehicle at this sparsity level and the image is dominated by the strong trunk while in SADMM image the vehicle outline makes an appearance and even the trunk is better identified. The depicted images have a sparsity level approximately equals to 10%. Note that by considering lower sparsity, images of both algorithms will have an increase of the background intensity around the strong features without capturing the general structure of the object differently. An example is shown in Fig. 6 where the images of 'Tacoma' vehicle are reconstructed at lower normalized sparsity level of about 15%. In summary, although CADMM images have similar entropy values as their SADMM counterparts at the same sparsity level, the latter provides higher capability of capturing the structure of the observed targets given relatively low bandwidth measurements. The exact values of sparsity and entropy for each image are reported in Table I.
The superior performance of SADMM comes at the cost of increased computational complexity to reach convergence. This complexity is manifested through the higher number of iterations and the longer processing time required for SADMM to reconstruct the shown images. The ratio of these two quantities for SADMM and CADMM is depicted in Table II.
3) Limited views -Limited Bandwidth (LVLB):
A monostatic distributed sensing scenario can be realized by considering far-field illumination with limited-narrow views of the full aperture measurements. In this experiment, we consider using the data-set to realize a system of distributed radar sensors in which TDM scheme is used to illuminate the scene of interest where a single cluster transmits at a time. Consequently, in addition to the limited bandwidth measurements of 600 MHz introduced in the previous experiment, we further consider limited aperture measurements representing the views of distributed sensors. In particular, 16 clusters of 1°width each are uniformly distributed around the scene and considered the viewing angles of the sensors. This number of clusters makes a realistic choice for the number of sensors where they cover only a span of about 4.4% of the full aperture measurements.
For this experiment, the reconstructed images of the two vehicles are shown in Fig. 7. Similar to the previous exper- Tacoma (right column) iment, SADMM captures both vehicles' structure better than CADMM. However, due to the limited aperture measurements, the artifacts present in the images of both methods are stronger. Increasing the sparsity would eliminate the artifacts but further limits the reconstruction of the entire outline of the vehicles. Of course, reconstructed images are views dependent. However, the performance of both methods is the same when different orientation of the sensors is considered. For example, the images reconstructed using another random orientation of views are shown in Fig. 8. The images confirm the capability of SADMM to retain the original structure of the imaged target while CADMM has a better ability to diminish the artifacts. On another note, processing times of both algorithms are roughly similar in this experiment given a limited amount of measurements. Finally, the parameters used to reconstruct the images in Fig. 7 and the corresponding values of entropy and sparsity are reported in Table I while processing time and Table II. To summarize, in the first experiment where a plenitude of measurements in both aperture and bandwidth is available, both CADMM and SADMM are capable of reconstructing detailed and super-resolution images of the observed targets far surpassing the conventional methods. On the other hand, in the latter experiments where measurements are limited in aperture and/or bandwidth, SADMM exhibits superior performance over CADMM in terms of capturing the structure of the target and reconstructing smoother images. Although they have similar entropy in all cases, the depicted images reconstructed by both algorithms show a clear visual advantage of SADMM when compared with the images of full measurements. Such higher quality comes at the expense of computational cost. Surprisingly, in terms of convergence and complexity, SADMM fell behind the most in the second experiment where the full aperture measurements with limited bandwidth are considered. This can be owed to the fact that CADMM features a high convergence rate requiring less than a third of SADMM iterations to reach such concentrated intensity images and by having less degrees of freedom.
V. CONCLUSION
In this paper, a novel approach for widely distributed radar imaging based on ADMM framework is proposed. Sparsity prior has been imposed on a defined global image assumed to represent an aggregate view of the scene. Then, developing on top of our previous work, the problem formulation is tailored to this approach and a new formulation has been introduced. The two formulations named CADMM and SADMM are designed to mathematically stipulate the relationship between the images of individual sensors and the global image. The solutions to the proposed formulations have been provided as iterative algorithms that are flexible and amenable to be implemented on a distributed architecture. We have demonstrated the performance of our proposed algorithms through several experiments and showed their significant edge over conventional methods in terms of reconstructed images quality. Moreover, we showed that SADMM outperforms CADMM by reconstructing images of high resolution that better exhibit the structure and the shape of the observed objects, especially when the measurements are limited in bandwidth and/or sparse in aperture. As we illustrated in our experiments, our proposed algorithms are applicable in many scenarios of distributed radar systems and WSAR imaging. Following our approach, various formulations can be further studied and developed either by imposing different prior on the global image and/or by imposing alternative associations with the images of the individual sensors. | 9,948.4 | 2022-03-10T00:00:00.000 | [
"Computer Science"
] |
Nucleon Tomography and Generalized Parton Distribution at Physical Pion Mass from Lattice QCD
We present the first lattice calculation of the nucleon isovector unpolarized generalized parton distribution (GPD) at the physical pion mass using a lattice ensemble with 2+1+1 flavors of highly improved staggered quarks (HISQ) generated by MILC Collaboration, with lattice spacing $a\approx 0.09$~fm and volume $64^3\times 96$. We use momentum-smeared sources to improve the signal at nucleon boost momentum $P_z \approx 2.2$ GeV, and report results at nonzero momentum transfers in $[0.2,1.0]\text{ GeV}^2$. Nonperturbative renormalization in RI/MOM scheme is used to obtain the quasi-distribution before matching to the lightcone GPDs. The three-dimensional distributions $H(x,Q^2)$ and $E(x,Q^2)$ at $\xi=0$ are presented, along with the three-dimensional nucleon tomography and impact-parameter--dependent distribution for selected Bjorken $x$ at $\mu=3$ GeV in $\overline{\text{MS}}$ scheme.
We present the first lattice calculation of the nucleon isovector unpolarized generalized parton distribution (GPD) at the physical pion mass using a lattice ensemble with 2+1+1 flavors of highly improved staggered quarks (HISQ) generated by MILC Collaboration, with lattice spacing a ≈ 0.09 fm and volume 64 3 × 96. We use momentum-smeared sources to improve the signal at nucleon boost momentum Pz ≈ 2.2 GeV, and report results at nonzero momentum transfers in [0.2, 1.0] GeV 2 . Nonperturbative renormalization in RI/MOM scheme is used to obtain the quasi-distribution before matching to the lightcone GPDs. The three-dimensional distributions H(x, Q 2 ) and E(x, Q 2 ) at ξ = 0 are presented, along with the three-dimensional nucleon tomography and impact-parameterdependent distribution for selected Bjorken x at µ = 3 GeV in MS scheme. Nucleons (that is, protons and neutrons) are the building blocks of all ordinary matter, and the study of nucleon structure is a central goal of many worldwide experimental efforts. Gluons and quarks are the underlying degrees of freedom that explain the properties of nucleons, and fully understanding how they contribute to the properties of nucleons (such as their mass or spin structure) helps to decode the Standard Model. In quantum chromodynamics (QCD), gluons strongly interact with themselves and with quarks, binding both nucleons and nuclei. However, due to their confinement within these bound states, we cannot single out individual constituents to study them. More than half a century since the discovery of nucleon structure, our understanding has improved greatly; however, there is still a long way to go in unveiling the nucleon's detailed structure, which is characterized by functions such as the generalized parton distributions (GPDs) [1][2][3]. GPDs can be viewed as a hybrid of parton distributions (PDFs), form factors and distribution amplitudes. They play an important role in providing a three-dimensional spatial picture of the nucleon [4] and in revealing its spin structure [2]. Experimentally, GPDs can be accessed in exclusive processes such as deeply virtual Compton scattering [5] or meson production [6]. Experimental collaborations and facilities worldwide have been devoted to searching for these last unknowns of the nucleon, including HERMES at DESY, COMPASS at CERN, GSI in Europe, BELLE and JPAC in Japan, Halls A, B and C at Jefferson Laboratory, and PHENIX and STAR at RHIC at Brookhaven National Laboratory in the US. There are also plans for future facilities: a US electron-ion collider (EIC) [7] at Brookhaven National Laboratory, an EIC in China (EicC) [8,9], and the Large Hadron-Electron Collider (LHeC) in Europe [10,11].
Although interest in GPDs has grown enormously, we still need fresh theoretical and phenomenological ideas, including reliable model-independent techniques.
Most QCD models have issues associated with threedimensional structure that are not yet fully understood, so a reliable framework for extracting three-dimensional parton distributions and fragmentation functions from experimental observables does not yet exist. Theoretically, there are factorization issues in hadron production from hadronic reactions, and theoretical efforts are striving to answer key questions that lie along the path to a precise mapping of three-dimensional nucleon structure from experiment. It has become common understanding that we need to develop a program in both theory and experiment that will allow an accurate flavor decomposition of the nucleon GPDs, including flavor differences in the quark structure, the gluon structure and the nucleon sea-quark GPDs. Most current theoretical issues are associated with nonperturbative features of QCD, that is, where the strong coupling is too large for analytic perturbative methods to be valid. Using a nonperturbative theoretical method that starts from the quark and gluon degrees of freedom, lattice QCD (LQCD), allows us to compute these properties on supercomputers.
Probing hadron structure with lattice QCD was for many years limited to the first few moments, due to complications arising from the breaking of rotational symmetry by the discretized Euclidean spacetime. The breakthrough for LQCD came in 2013, when a technique was proposed to connect quantities calculable on the lattice to those on the lightcone. Large-momentum effective theory (LaMET), also known as the "quasi-PDF method" [12][13][14], allows us to calculate the full Bjorken-x dependence of distributions for the first time. Much progress has been made since the first LaMET paper . Most work has been done using only one lattice ensemble, but there has been some progress in determining the size of lattice systematic uncertainties. For example, finite-volume systematics were studied in Refs. [31,86]. Machinelearning algorithms have been applied to the inverse arXiv:2008.12474v2 [hep-ph] 23 Jul 2021 problem [84,105] and to making predictions for larger boost momentum and larger Wilson-displacement [106]. On the lattice discretization errors, a N f = 2 + 1 + 1 superfine (a ≈ 0.042 fm) lattice at 310-MeV pion mass was used to study nucleon PDFs in Ref. [91], and results using multiple lattice spacings were reported in Refs. [89,92,105]. The first attempt to obtain strange and charm distributions of the nucleon was recently reported [90]. However, beyond one-dimensional hadron structure, there is little work available. Last spring, the first lattice study of GPDs was made for pions [32]. Dur-ing the completion of this work, ETM Collaboration reported their findings on both unpolarized and polarized nucleon GPDs with largest boost momentum 1.67 GeV at pion mass M π ≈ 260 MeV [107,108]. In this work, we present the first lattice-QCD calculation of the nucleon GPD at the physical pion mass using the LaMET method and study the three-dimensional structure of the unpolarized nucleon GPDs.
The unpolarized GPDs H(x, ξ, t) and E(x, ξ, t) are defined in terms of the matrix elements where L(−z/2, z/2) is the gauge link along the lightcone and In the limit ξ, t → 0, H reduces to the usual unpolarized parton distributions while the information encoded in E cannot be accessed, since they are multiplied by the fourmomentum transfer ∆. Only in exclusive processes with a nonzero momentum transfer can E be probed. The one-loop matching [35,144] for the GPD H and E turns out to be similar to that for the parton distribution.
In this work, we focus on the nucleon isovector unpolarized GPDs and their quasi-GPD counterparts defined in terms of spacelike correlations calculated in Breit frame. We use clover valence fermions on an ensemble with lattice spacing a ≈ 0.09 fm, spatial (temporal) extent around 5.8 (8.6) fm, and with the physical pion mass M π ≈ 135 MeV and N f = 2+1+1 (degenerate up/down, strange and charm) flavors of highly improved staggered dynamical quarks (HISQ) [109] generated by MILC Collaboration [110]. The gauge links are one-step hypercubic (HYP) smeared [111] to suppress discretization effects. The clover parameters are tuned to recover the lowest sea pion mass of the HISQ quarks. The "mixed-action" approach is commonly used, and there has been promising agreement between the calculated lattice nucleon charges, moments and form factors and the experimental data when applicable [112][113][114][115][116][117][118][119][120][121][122][123][124]. Gaussian momentum smearing [125] is used on the quark field to improve the overlap with ground-state nucleons of the desired boost momentum, allowing us to reach higher boost momentum for the nucleon states. We calculate the matrix elements of the form . We also use high-statistics measurements, 501,760 total over 1960 configurations, to drive down the increased statistical noise at high boost momenta, P z = | We solve a set of linear equations to obtain H and E (similar to form-factor extraction) with all |q| at fixed Q 2 . Technical details (such as renormalization) and more information on how the matrix elements are extracted can be found in the supplement and our previous work [24,27,29,126].
The nonperturbatively renormalized matrix elements are then Fourier transformed into quasi-GPDs through two different approaches. Following the recent work [24,27,29], we take the matrix elements z ∈ [−12, 12] and apply the simple but effective "derivative" method, R /x, to obtain the quasi-GPDs. Alternatively, we adopt the extrapolation formulation suggested by Ref. [127] by fitting |z| ∈ {10, 15} using the formula c 1 (−izP z ) −d1 +c 2 e izPz (izP z ) −d2 , inspired by the Regge behavior, to extrapolate the matrix elements into the region beyond the lattice calculation and suppress Fourier-transformation artifacts. Then, both quasi-GPDs are matched to the physical GPDs by applying the matching condition [24,27,41,67]. Examples of the GPDs at momentum transfer Q 2 ≈ 0.4 GeV 2 are shown in Fig. 1. Figure 1 compares the H and E GPDs at Q 2 ≈ 0.4 GeV 2 with the quasi-distribution and matched distribution using P z ≈ 2.2 GeV. The matching lowers the positive mid-x to large-x distribution, as expected; as one approaches lightcone limit, the probability of a parton carrying a larger fraction of its parent nucleon's momentum should become smaller. We also find that the derivative and Regge-inspired extrapolation agree in the mid-to large-x regions, but their difference grows as x approaches zero in both the quark and antiquark distribution. This is expected, since they differ mainly in the treatment of the large-z matrix elements in the quasi-GPD Fourier transformation, which contributes more significantly to the small-x distribution. By repeating a similar analysis for each available Q 2 in this calculation, we can construct the full three-dimensional shape of H and E as functions of x and Q 2 , as shown in Fig. 2. Due to the limited reliable zP z reach of the lattice calculation, we find that the small-x region is unreliable, due to lack of precision lattice data to constrain it. Thus, due to charge conservation, the antiquark (negative-x) distribution can also be sensitive to P z . It has been found in past work [18,24,27,29] that higher boost momenta are needed to improve the antiquark region. Therefore, for the rest of the work, we will mainly focus on the x > 0.05 region. For convenience, we will focus on showing the GPD results from derivative method, and use the Reggeinspired extrapolation to estimate the uncertainties in the small-x region by reconstructing GPD moments from our x-dependent GPD functions.
Since this is the first lattice calculation with full threedimensional x and Q 2 dependence of the H and E GPD functions, we would like to check how the new results using LaMET approach compare with the previous moment approaches to the generalized form factors. In the ξ → 0 limit, the H and E GPDs decrease monotonically as x or Q 2 increases. We take Mellin moments of the GPDs to compare with previous lattice calculations done using local matrix elements through the operator product expansion (OPE). Taking the x-moments of H and E [128,129]: where the generalized form factors (GFFs) A ni (Q 2 ), B ni (Q 2 ) and C ni (Q 2 ) in the ξ-expansion on the righthand side are real functions. When n = 1, we get the Dirac and Pauli electromagnetic form factors F 1 (Q 2 ) = A 10 (Q 2 ) and F 2 (Q 2 ) = B 10 (Q 2 ). To compare with other lattice results, we plot the Sachs electric and magnetic form factors using F 1,2 as G E (Q 2 ) = F 1 (Q 2 ) + [18], while the pink band corresponds to the matched GPD using quasi-GPD from the extrapolation formulation suggested by Ref. [127]. We find both methods give reasonable agreement in the x-dependent behavior, except in the small-x region, which is dominated by the large-z matrix elements that rely on the extrapolation. Fig. 3 with selected results from near-physical pion masses. PACS has the largest volume among these calculations and is able to probe the smallest Q 2 . Overall, our results are not only consistent within errors with the earlier PNDME study using the same ensemble (but which used local operators) but are also in good agreement with other lattice collaborations. When n = 2, we obtained GFFs of A 20 (Q 2 ) and B 20 (Q 2 ) so that we can compare our moment results with past lattice calculations using the OPE, as shown in Fig. 3. We compare our moment results with those obtained from simulations at the physical point by ETMC using three ensembles [130] and the near-physical calculation of RQCD [131]. We note that even with the same OPE approach by the same collaboration, the two data sets for A 20 in the ETMC calculation exhibit some tension. This is an indication that the systematic uncertainties are more complicated for these GFFs. Given that the blue points correspond to finer lattice spacing, larger volume and larger m π L, we expect that the blue points have suppressed systematic uncer- tainties. Our moment result A 20 (Q 2 ) is in better agreement with those obtained using the OPE approach at small momentum transfer Q 2 , while B 20 (Q 2 ) is in better agreement with OPE approaches at large Q 2 . The comparison between the N f = 2 ETMC data and N f = 2 RQCD data reveals agreement for A 20 and B 20 . However, the RQCD data have a different slope than the ETMC data, which is attributed to the different analysis methods and systematic uncertainties. Both our results and ETMC's are done using a single ensemble; future studies to include other lattice artifacts, such as lattice-spacing dependence, are important to account for the difference in the results.
Note that the error bands in Fig. 3 include the systematics from the following: 1) The systematics due to the negative-and small-x regions of the current GPD extraction. We create pseudo-lattice data using input of CT18NNLO PDF [138] with the same lattice parameters (such as z and P z ) used in this calculation and apply the same analysis procedure described above. We take the upper limit of the reconstructed and original CT18 moments as an estimate of the systematics introduced by the analysis procedure (e.g. by Fourier truncation); 2) We vary the maximum Wilson-line length z within 2 lattice units and take half the difference as an estimate of the systematic due to finite z; 3) We estimate 1/P z systematics due to higher-twist effects by comparing our LHPC17 2+1f PACS18 2+1f ETMC18 2+1+1f PNDME19 2+1+1f 0.09fm PNDME19 2+1+1f 0.06fm (top) The nucleon isovector electric and magnetic form factor results obtained from this work (labeled as "MSULat20 2+1+1") as functions of transferred momentum Q 2 , and comparison with other lattice works calculated near physical pion mass: N f = 2 ETMC18 [132] N f = 2 + 1 LHPC14 [133], LHPC17 [134], PACS18 [135]; N f = 2 + 1 + 1 ETMC18 [136], PNDME19 [137] with 2 lattice spacings of 0.06 and 0.09 fm. (bottom) The unpolarized nucleon isovector GFFs obtained from this work, compared with other lattice results calculated near physical pion mass as functions of transfer momentum Q 2 : ETMC19 [130], and RQCD19 [131]. Q 2 = 0 PDFs and to those in the previous works with 3 boost momenta [24,27]. The final errors are summed in quadrature to create the final error bands shown in Fig. 3.
The Fourier transform of the non-spin-flip GPD H(x, ξ = 0, Q 2 ) gives the impact-parameter-dependent distribution q(x, b) [139] q where b is the transverse distance from the center of momentum. Figure 4 shows the first results of impactparameter-dependent distribution from lattice QCD: a three-dimensional distribution as function of x and b, and two-dimensional distributions at x = 0.3, 0.5 and 0.7. The impact-parameter-dependent distribution describes the probability density for a parton with momentum fraction x at distance b in the transverse plane. Figure 4 shows that the probability decreases quickly as x increases. Using Eq. 4 and the H(x, ξ = 0, t = −q 2 ) obtained from the lattice calculation at the physical pion mass, we can take a snapshot of the nucleon in the transverse plane to perform x-dependent nucleon tomography using lattice QCD for the first time.
In this work, we compute the isovector nucleon unpolarized GPDs at physical pion mass using boost momentum 2.2 GeV with nonzero momentum transfers in [0.2, 1.0] GeV 2 . We are able to map out the first threedimensional GPD structures using lattice QCD in the special limit ξ = 0. There are residual lattice systematics are not yet included in the current calculation: In our past studies, we found the finite-volume effects to be negligible for isovector nucleon quasi-distributions calculated within the range M val π L ∈ {3.3, 5.5}. We anticipate such systematics should be small compared to the statistical errors. The lattice discretization has been studied by MSULat collaboration in Refs. [89,105] with multiple lattice spacings in the LaMET study of pion and kaon distribution amplitudes and PDFs; similarly, a comparison of nucleon isovector PDFs with 0.045 and 0.12 fm lattice spacing is shown in supplementary materials. There was mild lattice-spacing dependence for a majority of the Wilson-link displacements studied with similar largest boost momenta with same valence/sea lattice setup. EMTC also report LaMET isovector nucleon PDFs in Ref. [140] using twisted-mass fermion actions and reports different findings. Future work will investigate ensembles with smaller lattice spacing to reach even higher boost momentum (either directly or with the aid of machine learning [106]) so that we can push toward reliable determination of the smaller-x and antiquark regions.
We thank the MILC Collaboration for sharing the lattices used to perform this study. The LQCD calculations were performed using the Chroma software suite [141]. In the LaMET (or "quasi-PDF") approach, we calculate time-independent spatially displaced matrix elements that can be connected to the parton distributions. The operator choice is not unique at finite nucleon momentum; a convenient choice for leading-twist PDFs is to take the average of the hadron momentum (0, 0, P z ) and the quark-antiquark separation to be along the z direction where |N (P i,f ) denotes a nucleon state with momenta P µ i = (E i , −q x /2, −q y /2, P z ) and P µ f = (E f , q x /2, q y /2, P z ), ψ (ψ) are the (anti-)quark fields, n µ is a unit vector and W (λẑ, 0) is the Wilson link along the z direction. See Fig. 5 for an illustration. There are multiple choices of operator in this framework that will recover the same lightcone PDFs when the largemomentum limit is taken. For example, Γ can be γ z or γ t [33,57,81,142]; both will give the unpolarized PDFs in the infinite-momentum frame. In this work, we use Γ = γ t for the unpolarized GPDs calculations. Since the systematics of the LaMET methods, such as the higher- , decrease as the momentum increases, we use P z ≈ 2.2 GeV in this calculation. (A study of the P z convergence of the LaMET approach on the same ensembles can be found in Refs. [24,27,29].) This also allows us to reach smaller-x regions of the GPDs functions.
We use Gaussian momentum smearing [125] ψ(x) + α j U j (x)e ikêj ψ(x +ê j ), where k is the input momentum-smearing parameter, U j (x) are the gauge links in the j direction, and α is a tunable parameter as in traditional Gaussian smearing. We vary the α values in this study so that we can have multiple ways of removing the excited-state contributions from matrix elements later on. To better extract the boostedmomentum ground-state nucleon, we apply the variational method [143] to extract the principal correlators corresponding to pure energy eigenstates from a matrix of correlators. We use a 3×3 smeared-smeared two-point correlation matrix, which can be decomposed as with eigenvalues λ n (t, t r ) = e −(t−tr)En by solving the generalized eigensystem problem C ij where V is the matrix of eigenvectors and t r is a reference time slice. The resulting 3 eigenvalues (principal correlators) λ n (t, t r ) are then further analyzed to extract the energy levels E n . Since they have been projected onto pure eigenstates of the Hamiltonian, the principal correlator should be fit well by a single exponential and double checked for the consistency of the obtained energies. The leading contamination due to higher-lying states is another exponential having higher energy; we use a two-exponential fit to help remove this contamination. The overlap factors (A n ) between the interpolating operators and the n th state are derived from the eigenvectors obtained in the variational method.
To calculate the GPD matrix elements at nonzero momentum transfer, first calculate the matrix element χ N ( P f )|O µ |χ N ( P i ) , where χ N is the nucleon spin-1/2 interpolating field, abc [q a (x)Cγ 5 q b (x)]q c (x). O µ = ψγ µ W (z)ψ is the LaMET Wilson-line displacement operator with ψ being either an up or down quark field, and P {i,f } are the initial and final nucleon momenta. We integrate out the spatial dependence and project the baryonic spin, using projection operators P ρ = 1+γt 2 (1+iγ 5 γ ρ ) with ρ ∈ {x, y, z}, leaving a time-dependent three-point correlator, C 3pt .
where f n,n (P f , P i , E n , E n , t, t i , t f ) contains kinematic factors involving the energies E n and overlap factors A n obtained in the two-point variational method, n and n are the indices of different energy states and Z O is the operator renormalization constant (which is determined nonperturbatively). We also use high-statistics measure-ments, 501,760 total over 1960 configurations, to drive down the increased statistical noise at high boost momenta, P z = Figure 6 shows an example of the ground-state matrix elements (shown as the gray band) from the variational analysis along with the two-state fitted results with P f = {1, 0, 10} 2π L and P i = {−1, 0, 10} 2π L with projection operator 1+γt 2 (1 + iγ 5 γ z ). The ground-state matrix elements can be extracted from v T 0 C ij 3pt (t)v 0 using the eigenvectors from the two-point variational-method analysis (shown in the left panel of Fig. 6), simultaneous two-state fitted results using source-sink separation of t sep = [8,12] lattice units, two-state fitted results as functions of t min sep . The consistency of these methods demonstrates that our extracted ground-state matrix elements are stably determined. For more details on the lattice study of nucleon matrix elements comparing the variational and two-state fit methods, we refer readers to Ref. [126], which contains a very detailed discussion.
The overdetermined system of linear equations (using multiple q and projection operators) allows for solution of h H (P z , Q 2 , z) and h E (P z , Q 2 , z), similar to vector form factor calculations with our chosen projection operator and momentum configuration: The ground-state matrix elements are proportional to with / p = iE N ( p )γ 4 + p · γ. We obtain a linear system of equations by using different projection operators P ρ , momenta P f and P i to solve for H and E in coordinate space, h {H,E} (P z , Q 2 , z). Selected Q 2 values of h {H,E} (P z , Q 2 , z) normalized by h {H,E} (P z , Q 2 = 0, z = 0) are shown in Fig. 7. The real matrix elements decrease quickly to zero due to the large boost momentum used in this calculation. This helps us to use smaller-displacement data to avoid large contributions from higher-twist effects in the larger-z region.
To obtain the quasi-GPDs, we first apply nonperturbative renormalization (NPR) in RI/MOM scheme to the bare matrix elements, using the NPR done in previous work using the same lattice ensembles [24,27,29], itself following the same strategy described in Refs. [22,41]. The RI/MOM renormalization constant Z is calculated nonperturbatively on the lattice by imposing the following momentum-subtraction condition on the matrix ele-ment: where On the lattice, ps|O t (z)|ps is calculated from the amputated Green function of O t with Euclidean external momentum. In this work, we use the same renormalization scales as used in the previous work [24,27,29]: µ R = 3.8 GeV, p R z = 2.2 GeV, µ MS = 3 GeV. The renormalization-scale dependence was studied in Ref. [67]. We also vary the values of P R z , and the results are shown in Fig. 8, focusing on the ξ = 0 GPDs, where the matching formula is the same as that in the PDFs, as discussed in Ref. [144]. We normalize all matrix elements by h H (P z , Q 2 = 0, z = 0), as in our previous PDF work [24,27,29]. Using matrixelement ratios reduces the lattice systematic error, since in the continuum limit h H (P z , Q 2 = 0, z = 0), the vector charge, goes to 1.
FIG. 6: An example of the ground-state matrix element determination using variational method using rotated two-and three-point correlators (left), two-state fit to multiple source-sink separation (middle), along with the ratio plots data and reconstructed fit to the ratio points. The rightmost plot shows how much the ground-state matrix elements change as we reduce the inputs to the fits; we see some fluctuations and increase of errors due to the reduction of the available data but overall steady determination of the ground-state matrix element. The example used here is from P f = {1, 0, 10} 2π L and Pi = {−1, 0, 10} 2π L with projection operator 1+γ t 2 (1 + iγ5γz). We also perform checks of the z max input in the Fourier transformation and lattice-spacing dependence. The ef-fects are documented in Fig. 8. | 6,218.8 | 2020-08-28T00:00:00.000 | [
"Physics"
] |
Active Galactic Nucleus Feedback in an Elliptical Galaxy with the Most Updated AGN Physics: Parameter Explorations
In a previous work, we have proposed a sub-grid model of active galactic nucleus (AGN) feedback by taking into account the state-of-the-art AGN physics, and used that model to study the effect of AGN feedback on the evolution of an isolated elliptical galaxy by performing two dimensional high-resolution (i.e., the Bondi radius is well resolved) simulations. In that work, typical values of model parameters were adopted. In the present work, we extend that study by exploring the effects of uncertainties of parameter values. Such a study is also useful for us to understand the respective roles of various components of the model. These parameters include the mass flux and velocity of AGN wind and radiative efficiency in both the hot and cold feedback modes, and the initial black hole (BH) mass. We find that the velocity of AGN wind in the hot mode is the most important quantity to control the typical accretion rate and luminosity of AGN, and the mass growth of the BH. The effect of the wind on star formation is less sensitive. Within the limited parameter range explored in the current work, a stronger AGN wind suppresses star formation within ~100 pc but enhances star formation beyond this radius, while the star formation integrated over the evolution time and the whole galaxy roughly remain unchanged. AGN radiation suppresses the BH accretion in a mild way, but dust is not considered here. Finally, a smaller initial BH mass results in a more violent evolution of the BH accretion rate. The corresponding AGN spends more time in the high-luminosity state and the percentage of BH mass growth is higher. Our results indicate the robustness of AGN feedback in keeping the galaxy quenched.
INTRODUCTION
In the centre of every massive galaxy with a bulge there exists a supermassive BH (see, e.g., Kormendy & Ho 2013 for a review). Observations have found tight correlations between the mass of the BH and the properties of the classical bulge, including its stellar mass (Magorrian et al. 1998;Häring & Rix 2004;Kormendy & Ho 2013), luminosity (Marconi & Hunt 2003;Gültekin et al. 2009), and stellar velocity dispersion (Gebhardt et al. 2000;Ferrarese & Merritt 2000;Tremaine et al. 2002). These correlations suggest the coevolution of the central BH and its host galaxy, and AGN feedback might play a major role in this coevolution.
The literatures regarding AGN feedback has increased greatly in the past twenty years (Harrison et al. 2018). Due to the complexities of this relatively young topic, AGN feedback is studied mainly ★ E-mail<EMAIL_ADDRESS>† E-mail<EMAIL_ADDRESS>by numerical simulations. The gap of the physical scales between the BH and the galaxy can be as large as nine orders of magnitude, indicating all the current simulations require sub-grid assumptions and approximations. Di and Springel et al. (2005) first studied AGN feedback by using hydrodynamical cosmological simulations. Although the AGN feedback was simply employed through a free parameter (the feedback efficiency f , which determines the fraction of the AGN bolometric luminosity that couples to the gas near the BH), they found a tight correlation between the BH mass and the stellar velocity dispersion. On the other hand, by including AGN feedback, both semi-analytic modeling (Croton et al. 2006;Bower et al. 2006; Monaco et al. 2007) and hydrodynamical cosmological simulations (Vogelsberger et al. 2014;Dubois et al. 2014;Khandai et al. 2015;Schaye et al. 2015;Pillepich et al. 2018) obtained much better fittings to observations, e.g., the galaxy luminosity function at the massive end of the galaxy mass, the "downsizing problem", and the "cooling flow problem". Therefore, it is generally believed that AGN feedback does play an important role in the galaxy evolution. One of the most important quantities for the study of AGN feedback is the mass accretion rate of the BH, because it determines the power of AGN. However, in cosmological simulations, the scales relevant to the BH accretion and the ejection of AGN winds are not directly resolved due to resolution limitation. The BH accretion rate usually is calculated by the gas ∼ 1 kpc from the central BH, which could overestimate or underestimate the real BH accretion rate with the uncertainty as large as ∼ 300 (Negri & Volonteri 2017). Moreover, the AGN outputs such as radiation and matter outflows are often not speficied and the interactions between these AGN outputs and the gas in the galaxy are usually treated in a phenomenological parameterized approach.
Since AGN feedback occurs in a single galaxy rather than cosmological scale, to investigate the details of how AGN feedback works, a perhaps better approach is zooming in on a galaxy with high resolution which can resolve the Bondi radius B = 2 BH / 2 s,∞ , which is roughly ten pc for the BH mass of 10 9 and gas temperature of 10 8 K (Ciotti & Ostriker 1997Ciotti et al. 2009;Shin et al. 2010;Ciotti et al. 2010;Novak et al. 2011;Gan et al. 2014;Ciotti et al. 2017;Yuan et al. 2018, hereafter Paper I;Zeilig-Hess et al. 2019). Within the Bondi radius, the gravity is dominated by the BH, so this is the regime of accretion flow.
Various types of accretion disks have been well investigated by the accretion disk community, including cold accretion mode at high accretion rate regime (e.g. Frank et al. 2002) and hot accretion mode in the low accretion rate regime (e.g. Yuan & Narayan 2014). The cold accretion mode correspond to radiative or quasar feedback mode, while the hot accretion mode corresponds to maintenance or radio or kinetic feedback mode. Some of these names are confusing or even misleading so in Paper I we suggest to call them cold and hot feedback modes respectively. Since the Bondi radius is commonly regarded as the outer boundary of accretion flow, once it is resolved as it is in our simulations, we can precisely calculate rather than estimate the accretion rate. Given the BH accretion rate, we can then adopt the accretion knowledge to calculate the outputs, including radiation and matter outflow, and further calculate their interaction with the gas in the host galaxy.
Taking an isolated elliptical galaxy as an example, Paper I investigated the effects of AGN feedback by establishing a sub-grid model of AGN feedback with state-of-the-art accretion physics. The inner radius of the simulation domain is several times smaller than the Bondi radius. The mass accretion rate is thus precisely calculated and the accretion (and feedback) mode can be determined. Both radiation and momentum-driven wind were considered in the two modes. The properties of the wind in the cold mode were adopted from observations, while in the hot mode they were described based on the 3D GRMHD simulation results of Yuan et al. (2015) due to the rarity of observational data. The jet in the hot mode was temporarily omitted.
In Paper I, they focus on the case of low angular momentum of the galaxy. They examined the respective roles of radiation and wind feedback, and found that both can suppress star formation and cause the variability of the AGN, but the wind was by momentum interaction while radiation was by radiative heating. Wind was believed to play a more important role in suppressing star formation and the BH accretion rate. In the second paper of this series, Yoon et al. (2018) (Paper II) extended this model to the high angular momentum case of the elliptical galaxy. They found that while some results were qualitatively similar to those in Paper I, other results, such as star formation and black hole growth, showed a significant difference due to the mass concentration in the galactic disk as a consequence of galactic rotation. More recently, Yoon et al. (2019) specifically focused on the role of hot mode feedback. They find that although the AGN power in the hot mode is much lower compared to the cold mode, the hot mode feedback still plays an important role, because most time of the AGN stays in this mode.
One remaining question in Paper I is that, since the parameters of the sub-grid model have some uncertainties, to what extent do these uncertainties effect the galaxy evolution. We will accomplish this goal in this paper. The paper is structured as follows. In Section 2, we briefly introduce the framework of our models, including the galaxy setups, stellar feedback, star formation, and AGN feedback. In Section 3 we introduce the parameters explored in this paper. Then in Section 4, we show our results. Finally, in Section 5, we summarize our results and compare with the previous works.
MODELS
In this section, we briefly introduce the main physical processes included in our models. The simulation begins with a massive elliptical galaxy at the age of 2 Gyr. As in Paper I we adopt the sub-grid physics that divides the accretion and feedback into two modes depending on the accretion rate at the inner boundary. In each mode we consider both wind and radiation, which are then injected into the simulation region through inner radial boundary. We simulate the interactions of wind and radiation with the interstellar medium (ISM) by considering simplified radiative transfer. Stellar processes such as star formation, stellar winds, supernovae (SNe) Ia and II are taken into account as well. Readers are referred to Paper I for more information.
Galaxy Model
We focus on the secular evolution of a massive isolated elliptical galaxy. The gravitational potential we adopt is dominated by the dark matter halo beyond 10 kpc, by stars from 0.05 to 10 kpc, and by the central BH within 50 pc, respectively.
Both the dark matter and stellar components are modeled through static potentials, and the potential from newly formed stars can be neglected since they are minor compared to other components. We adopt the Jaffe stellar distribution (Jaffe 1983) embedded in the dark matter halo so that the total density satisfies the isothermal sphere assumption and decreases as −2 (Ciotti et al. 2009). The Jaffe stellar distribution is described by where ★ and ★ are the total stellar mass and the scale-length of the galaxy, respectively. ★ is set to be 3 × 10 11 and ★ is the scale length that corresponds to the effective radius e = 0.7447 ★ . The total density profile is given by where 0 = 260 km s −1 is the central projected velocity dispersion. From the Faber-Jackson relation and the Fundamental Plane, we can derive the total B-band luminosity B = 5 × 10 10 B, and the effective radius e = 6.9 kpc. The initial mass of the central BH is determined according to the empirical correlation BH /(10 9 ) = 0.49 ( ★ /10 11 ) 1.17 in Kormendy & Ho (2013), thus yielding the BH mass BH = 1.8 × 10 9 for ★ = 3 × 10 11 . The initial gas fraction is negligible, and most of the gas is provided by stellar evolution in our work. The galaxy is assumed to be slowly rotating, which is supported by observations that suggest slow rotators start appearing when ★ > 2 × 10 11 (Cappellari et al. 2013;Graham et al. 2018). Since the stars are slowly rotating and the stellar wind is the only mass source in our simulations, the angular momentum of the gas is therefore low and we do not need to handle the angular momentum transfer (for the case of high angular momentum, readers are referred to Paper II; Gan et al. (2019)). To focus on the effect of model parameters, the gaseous halo and cosmological inflow are not considered in this paper, consistent with Paper I.
Star formation and Stellar Feedback
We implement star formation by subtracting mass, momentum, and energy from the grid. We note that stellar populations and their dynamics are not explicitly tracked in the simulation. Different from Paper I, the star formation is triggered only if the temperature is lower than 4 × 10 4 K and the number density is higher than 1 cm −3 concurrently. The aim is to mimic the surface density threshold of ∼ 10 M pc −2 for star formation revealed by observations (Kennicutt 1989(Kennicutt , 1998Martin & Kennicutt 2001;Bigiel et al. 2008). The star formation rate per unit volume is given by the Kennicutt-Schmidt prescription ( Here the star formation efficiency is SF = 0.01, which is an order of magnitude smaller than in Paper I. This modification is driven by the observations on surface density relationships of galaxies (Kennicutt 1998), and local giant molecular clouds (Krumholz & Tan 2007; for a comprehensive discussion see the review by Krumholz et al. 2019). The star formation timescale is The cooling and the dynamical timescales are given by Here K ( ), , and are the Keplerian velocity at radius , internal energy density, and net cooling rate per unit volume, respectively. We adopt the formulae in Sazonov et al. (2005) to compute cooling, which includes the bremsstrahlung cooling, Compton cooling, line and recombination continuum cooling. We note that, as in many similar numerical simulation works of AGN feedback, our calculation of the star formation rate has big uncertainties. It is technically difficult to simulate the process of star formation from first principle in such large-scale simulations. These uncertainties and difficulties, including our neglect of the self-gravity of the ISM, can be absorbed in some degree in our simplified and parameterized treatment of our calculation of star formation rate.
The evolving stars inject mass and energy to the ISM, mainly during the asymptotic giant brach (AGB) phase. At the end of their lives, drastic outbursts called supernova feedback produce a large amount of energy and return most of their mass to the ISM. Based on the population of stars, SNe Ia and II are distinguished. All of these stellar feedback processes are considered in our simulations and the detailed descriptions can be found in Ciotti & Ostriker (2012). The chemical evolution and dust absorption are ignored for simplicity.
AGN feedback
The AGN feedback is divided into two modes depending on the accretion rate at the innermost grid radius. The boundary of the accretion rate can be inferred from the observations on the state transition of black hole X-ray binaries, which occurs under the critical luminosity C ∼ 2% Edd , or the equivalent critical accretion rate C ∼ 2% Edd . In principle, this critical accretion rate applies to the accretion rate at the BH horizon BH . Yet in practice, we simply rely on the accretion rate at the inner boundary ( in ) to judge the feedback mode, which is calculated by Both radiation and wind are taken into account in each mode. The radiative transport is considered in a approximated way by assuming the flow is optically thin Ciotti & Ostriker (2012). The heating terms include the Compton heating and photoionization heating driven by the central AGN. The radiation pressure is included by considering both the electron scattering and the absorption of photons by atomic lines. Wind is input via momentum, which was found to be more powerful than the thermally driven wind and better consistent with observations by previous works (Choi et al. 2012(Choi et al. , 2015. In the hot mode, the radiative efficiency used when calculating the radiation flux of AGN is a function of accretion rate (Xie & Yuan 2012), Here cold /0.057 accounts for the spin of the BH and Edd = Edd /( cold 2 ) is the Eddington accretion rate. The values of 0 and depend on BH and , which denotes the fraction of the viscously dissipated energy that directly heats electrons. Assuming = 0.1, 0 and are given by (0.12, 0.59), BH / Edd 9.4 × 10 −5 (0.026, 0.27), 9.4 × 10 −5 BH / Edd 5 × 10 −3 (0.5, 4.53), 5 × 10 −3 BH / Edd 6.6 × 10 −3 (0.057, 0). 6.6 × 10 −3 BH / Edd 2 × 10 −2 The Compton temperature in the hot mode, used to calculate the radiative heating by AGN to the ISM, is calculated based on the spectral energy distribution of low-luminosity AGNs combined from the literature (Xie et al. 2017) and is given by C,hot = 10 8 K, 10 −3 BH / Edd 0.02 5 × 10 7 K. BH / Edd 10 −3 In the hot mode, the accretion flow consists of an inner hot accretion flow plus an outer truncated thin disk. The truncation radius tr is described by (Yuan & Narayan 2014) where s is the Schwarzschild radius. The existence of a strong wind in hot accretion flows has been shown in Yuan et al. (2012). Based on 3D GRMHD numerical simulation of black hole accretion, using the "virtual particle trajectory" approach, Yuan et al. (2015) derived the mass flux and velocity of the wind: where K ( tr ) is the Keplerian velocity at tr . The velocity is ∼ 2000 km/s when ( in ) = 10 −3 Edd , and it increases to ∼ 0.08 as the ( in ) approaches 0.02 Edd . Given we can obtain the properties of the BH accretion rate and wind properties in the hot mode. Compared to jets, the opening angle of the wind is much larger, lying within ∼ 30 • − 70 • and 110 • − 150 • above and below the equatorial plane, respectively (Yuan et al. 2015). The mass flux of wind within the abovementioned tworanges is assumed to be independent of angles.
In the cold mode, the accretion rate is high and a standard thin disk model is applied. The radiative efficiency is commonly assumed cold = 0.1 (Yu & Tremaine 2002;Marconi et al. 2004), which means the BH is moderately spinning according to the thin disk model. The Compton temperature C,cold , which measures the average energy of the emitting photons, is calculated from the observed spectrum of quasars (Sazonov et al. 2004) and is given by 2 × 10 7 K.
The wind properties in the cold mode are obtained from Gofford et al. (2015). They analyzed a sample of 51 Suzaku-observed AGNs and independently detected Fe K absorption in 40 percent of the sample. After processing the data, they measured the mass flux and velocity of the wind: Obviously, the wind should not be isotropic, but the exact description of the distribution of the wind flux is still poorly constrained. Following the previous works (Novak et al. 2011;Gan et al. 2014;Ciotti et al. 2017), the mass flux of the wind is assumed to be proportional to cos 2 ( ).
To obtain the properties of the wind in the cold mode, we need to calculate the BH accretion rate and the AGN luminosity. Once the gas reaches the circularization radius cir , the accretion disk is formed. With the total mass of the gas in the disk, dg , the mass inflow rate at cir in the disk can be estimated as where the viscous timescale vis is described by (Kato et al. 2008) vis ≈ 1.2 × 10 6 yr 0.1 −1 cir 100 s 7/2 BH 10 9 .
Here is the viscosity parameter. Given we can obtain the BH accretion rate and the wind properties in the cold mode. We would like to point out that, when ( in ) is above 2% Edd , the above approach of calculating BH does not ensure BH is higher than 2% Edd , because some inflowing gas may be depleted via star formation or circularize and fall in at a slower rate. It is our future plan to improve the sub-grid physics adopted here.
Finally, an issue worth noting is whether the wind driven from the accretion flows is able to reach large scales and be injected to our simulation region. and study the large-scale dynamics of wind launched from hot accretion flow and thin disks in the cases of without and with magnetic field via analytical and numerical methods. They find that even when magnetic field is not included, wind in the hot mode can reach very large scales while wind launched by thin disk stops at a smaller distance due to its smaller Bernoulli parameter. They do not consider the radiation pressure, which might be the essential mechanism to drive the wind in the cold mode (King & Pounds 2003, 2015Costa et al. 2018), and the wind terminal velocity will be higher with the assistance of magnetic fields . Since in general the wind properties are function of radius, we have made sure that the properties adopted in the present work is suitable for the radius of our inner boundary of our simulation, ∼ in .
Simulation Setup
We use the parallel ZEUS-MP/2 code (Hayes et al. 2006) to study an isolated elliptical galaxy by adopting two-dimensional axisymmetric spherical coordinates. The simulations start at 2 Gyrs after the Big Bang to avoid the early stage of galaxy formation when the major merger plays a dominant role, since we only focus on an isolated elliptical galaxy. The initial conditions for the ISM are set by the very low density gas at the local thermalization temperature. The distribution of the gas is not important since the mass is dominated by the stellar winds as simulations begin. The mass injection of stellar winds reaches the peak at the beginning of the simulation, then decreases gently till = 0, which lasts about 12 Gyrs. The mesh in the polar direction is divided homogeneously by 30 grids. In the radial direction the simulation region is resolved by 120 grids to cover the range of 2.5 pc − 250 kpc, with the finest resolution up to ∼ 0.3 pc by adopting a logarithmic mesh.
How is this inner boundary compared to the scale of accretion flow, i.e., the Bondi radius? In our simulation, the gas is in general multi-phase, and consists of both inflowing and outflowing material. Only inflowing gas is taken into account when we estimate the Bondi radius since only these gas contribute to the accretion. For inflowing gas, each phase has its corresponding Bondi radius. We have calculated the mass flux-weighted value of Bondi radius by including all phases close to our inner boundary. The calculated Bondi radius as a function of time is shown in Figure 1. As we can see from the figure, the Bondi radius is about 5 times larger than the radius of our inner boundary. We have also calculated the Compton radius, which is defined by It is larger than the inner boundary radius for C = 10 8 K. This indicates that the Compton effect mainly plays a role of cooling inside the inner boundary. We carve a narrow range of (∼ 9 • ) at each pole to avoid the singularity there and also adopt the "outflow boundary condition" for polar boundaries. The AGN sub-grid model works as follows. After measuring the accretion rate at the inner boundary, we can calculate the BH accretion rate and wind properties by adopting the formulas in the corresponding mode. Then the BH mass is updated and the wind is injected in the inner grid within the opening angle with mass and momentum conservation. A temperature floor of 10 4 K is set in the cooling functions due to the considerations of spatial resolution in our simulations.
MODEL PARAMETER EXPLORATIONS
In this section, we introduce the explorations of our model parameters. We perform surveys on wind and radiation of the AGN in both hot and cold modes. To avoid confusion, each model of wind and radiation alters one parameter while keeping other parameters unchanged to control variables. Apart from that, we explore the case of a lower initial BH mass. Note that in this case the Eddington accretion rate decreases due to the lower BH mass. Therefore, the same mass flux of accretion flows yield higher Eddington ratio. All the surveyed parameters are summarized in Table 1. These parameters are explored within the constraints imposed by observations or theories, as detailed below.
Wind
In the hot mode due to the scarce of observational data, we model the wind properties by adopting simulation results of Yuan et al. (2015). This work only deals with non-spin black hole and normal magnetic field (SANE). On the basis of Yuan et al. (2015), Yang et al. (2020, in prep.) study the effects of magnetic field and BH spin on wind properties by performing new GRMHD simulations. Their results suggest that the magnetic field and BH spin can affect the wind properties significantly. Wind velocity in the case of strong magnetic field (MAD00, here the number denotes the spin of BH) is about three times higher than that in its weak magnetic field counterpart (SANE00) obtained in Paper I. HotWindVel is the model to explore this high velocity case. The mass flux of wind of MAD00 is given by Gofford et al. (2015) and the XMM-Newton sample in Tombesi et al. (2012) respectively. The solid lines are the best fittings to the green data points given in Gofford et al. (2015). The dashed and dotted lines vary the best fittings by a factor of ten (top) and three (bottom), representing the ColdWindFluxHigh/Low models and the ColdWindVelHigh/Low models, respectively. The magenta and gray dotted lines represent the critical luminosity (2% Edd ) for the Fidu and BHmass models. This is several times lower than that of SANE00 when the mass accretion rate approaches 10 −2 Edd . On the other hand, the mass flux of the high BH spin model SANE98 given by is three times higher than that of SANE00. It is worth noting that when ( in ) is small, all three models yield W close to ( in ), because BH of all three models is orders of magnitude smaller than W . We perform the HotWindFluxLow and HotWindFlux-High models to simulate these two cases of MAD00 and SANE98, respectively. The mass flux of the HotWindVel model and the wind velocity of the HotWindFluxHigh/Low model remain unchanged under the same ( in ). This indicates the lower wind density of the HotWindVel model and higher wind density of the HotWindFlux-High/Low model given the definition of mass flux where is the constant area that the wind blows out of the inner boundary.
In the cold mode we estimate wind properties through observations on broad absorption line (BAL) outflows (Arav et al. 1999;Tombesi et al. 2010Tombesi et al. , 2012Arav et al. 2013;Gofford et al. 2013Gofford et al. , 2015. Paper I adopts the relation between wind properties and bolometric luminosity from Gofford et al. (2015). Figure 2 compares this relationship with the observational data on UFOs (ultra-fast outflows). In addition to the data from Gofford et al. (2015), the data from Tombesi et al. (2012) is also shown in the figure. It is worth noting that because the wind injection radius, i.e., the inner boundary, in our sub-grid model is within the Bondi radius, we only choose observations of UFOs that are on sub-pc scales. From Figure 2, it is easy to see that the uncertainties of the wind flux and velocity are considerable. Therefore, we perform four models ColdWindFluxHigh/Low, ColdWindVelHigh/Low, trying to cover the scatters in the observations. The flux-variant models vary the mass flux by an order of magnitude and the velocity-variant models alter the wind velocity by a factor of three. Note that an upper limit of 10 5 km/s is applied in all the models. Similarly, the mass flux of the ColdWindVelHigh/Low model and wind velocity of the ColdWindFluxHigh/Low model remain identical.
Radiation
In addition to wind properties, the effect of radiation is studied as well in both hot and cold modes. The HotRad model studies the effect of radiation in the hot mode. As shown in Section 2.3, the radiative efficiency is a function of , which denotes the fraction of viscously dissipated energy that directly heats electrons. The value of is constrained to be ∼ 0.1 − 0.5 (Yuan & Narayan 2014), stronger radiation corresponding to larger = 0.5 is considered in the HotRad model. Figure 3 shows the radiative efficiencies corresponding to two different as a functions of BH accretion rate. The radiative efficiency corresponding to = 0.5 can be an order of magnitude higher than that of = 0.1 when the BH accretion rate is low. The radiative efficiency of the HotRad model in the cold mode remains same.
In the cold mode, the ColdRadHigh model explores the high spin of the BH with radiative efficiency cold = 0.3, while the ColdRadLow model investigates the zero spin of BH, of which the radiative efficiency cold = 0.057. In order to study the effects of radiation in the cold mode, we keep the the radiative efficiency in the hot mode identical to that of the Fidu model. Moreover, in order to study radiation, the wind properties remain unchanged under the same ( in ).
Initial BH mass
Other than wind and radiation properties, we explore the effect of the initial BH mass using the BHmass model. The initial BH mass adopted in Paper I is BH,i = 1.8 × 10 9 for ★ = 3 × 10 11 (Kormendy & Ho 2013), but recent works suggest the discrepancy of BH mass measurement between AGN hosts and ellipticals/classical bulges (Ho & Kim 2014;Reines & Volonteri 2015;Shankar et al. 2016Shankar et al. , 2019. Shankar et al. (2016) use Monte Carlo simulations to illustrate the selection bias in local, dynamically measured BH samples of ellipticals/classical bulges, in which only the massive BHs are measured. Therefore, the BH mass in Kormendy & Ho (2013) may be overestimated. In their model the BH mass is BH,i = 2.7 × 10 8 for ★ = 3 × 10 11 . The BHmass model studies this possibility, with the initial BH mass in this model being six times lower than that in Paper I.
RESULTS
In this section, we present the results of our parameter explorations. We essentially investigate the effects of parameters in four major domains of galaxy evolution: AGN luminosity, BH mass growth, star formation, and the AGN duty cycle. The fiducial model is essentially identical to the fullFB model in Paper I but with two improvements as discussed in Yoon et al. (2019). One is the calculation of star formation as described in Section 2.2, the other is the correction of a bug in computing the energy flux of the wind. This bug caused a lower energy flux of the wind in the hot mode while a higher energy flux in the cold mode. Figure 4 shows the evolution of the accretion rate at the inner boundary ( in ), BH accretion rate BH , and bolometric luminosity bol for all models. We note that the time interval of two adjacent data points is ∼ 1 Myr for clarity, thus some outbursts are filtered out in this case. For the fiducial model, ( in ) oscillates around ∼ 10 −3
AGN luminosity
Edd . The value of this "baseline accretion rate" is determined by the momentum balance between the AGN wind and the inflow at the inner boundary of our simulation, reflecting the characteristic of the hot mode in our AGN sub-grid model. When the accretion rate decreases, both the mass flux and the velocity of the wind become lower and yield a lower momentum of the wind. According to our sub-grid model, the decrease of wind momentum is more rapid than that of the inflow, thus making the accretion rate increase to reach momentum balance. Oppositely, when the accretion raises, the wind momentum grows faster than the momentum of inflow, thus making the accretion rate decrease. Therefore, we can see small oscillations of ( in ) at ∼ 10 −3 Edd . However, if the inflow is massive enough to overcome the momentum of wind, such as driven by the cold clumps, the accretion rate is able to enter the cold mode and trigger a strong AGN wind and radiation of the cold mode. The wind and radiation push the accreting gas outwards so the mass accretion rate at the Bondi radius drops drastically. In addition, they interact with the ISM of the nuclear region and heat the surrounding gas. The subsequent galactic wind can reach several kpc scale, but most of the wind can not break out of the halo. The compression of the kicked gas is likely to form the cold clumps once more, yet in a moderate way because of the lower density of the ISM. These cold clumps fall back to the centre and initiate another episode of the AGN outburst. Eventually, the surrounding gas is almost cleared out and the AGN returns back to be quiescent. Because the mass supply from the stellar wind decreases with time, the massive inflow characterized by the peaks of accretion rate occurs more frequently at the early stage of the evolution due to abundant supplies from the stellar wind. Actually, the accretion rate stays in the hot mode after ∼ 6 Gyr. However, the accretion rate for the momentum balance maintains the same level throughout the evolution, keeping the typical mass accretion rate unchanged.
As shown in the middle panel of Figure 4, the BH accretion rate oscillates around 10 −5 Edd , and the amplitude of the oscillation is larger than that of ( in ). Given that ( in ) ∼ 10 −3 Edd , this implies that 99% of the inflows are restored to the simulation region by the strong wind. The right panel shows the light curve of the AGN bol = BH 2 . Overall, we can see that the galaxy spends most of its time in the low-luminosity phase with the bolometric luminosity lying in the range 10 −6 − 10 −4 Edd , which is consistent with observations that a median Eddington ratio of ∼ 10 −5 is found for ellipticals (Ho 2009). Our typical luminosity of Fidu is 1-2 orders of magnitude lower than that of the fullFB model in Paper I, which is ∼ 10 −4 Edd . This is caused by the correction of the bug that under-produced the wind power in the hot mode.
As the parameters vary, Figure 4 shows that ( in ) varies the least among the three physical quantities, which reflects the nonlinear characteristics in our AGN sub-grid models. Here, variations are measured through typical values the curves fluctuate around and the frequency of strong oscillations, such as outbursts and sudden drops to very low accretion rate or luminosity. For the HotWindVel model, ( in ) oscillates around a value a factor of three smaller than that of the Fidu model ( Figure 5), resulting in orders of magnitude lower BH accretion rate and bolometric luminosity 1 . As mentioned above, the oscillations are caused by the momentum balance between the AGN wind and the inflow. When the wind velocity is three times higher, it requires ( in ) three times lower compared to the Fidu model to achieve a new momentum balance between the inflow and wind at in . This is because, a three-times lower ( in ) will result in both the wind flux and velocity three times lower, while the higher wind velocity in the HotWindVel model will compensate for the decrease of velocity. So the overall decrease of momentum of wind is three times, which balances with the decrease of inflow.
We find from Figure 4 that the typical accretion rates of the HotWindFluxHigh and HotWindFluxLow models vary little compared to the Fidu model. This is because, the mass fluxes of wind in these two models differ from the Fidu model only when the accretion rate is close to 10 −2 Edd , as we emphasize in Section 3.1. When ( in ) < ∼ 10 −3 Edd , the mass fluxes of the hot wind for all the models are similar, and are roughly equal to ( in ). Yet the HotWindFluxHigh model has fewer outbursts, while for the HotWindFluxLow model, more frequent outbursts can be seen during its violent evolution. Such a difference is caused by the different mass fluxes of hot wind when ( in ) approaches 10 −2 Edd . For the HotWindFluxHigh model, the wind is stronger than that of the Fidu model; thus with increasing accretion rate, the accretion mode is more likely to stay in the hot mode. For the HotWindFluxLow model, the wind is weaker, thus with the increase of accretion rate, it is easier for the accretion to enter the cold mode. Since the wind is stronger in the cold mode, it can cause larger oscillations characterized by the outbursts and suppressions of accretion rate and luminosity.
The HotRad model shows similar ( in ) and BH compared to the Fidu model, but the typical bol is an order of magnitude higher due to the higher radiative efficiency. The results of the four models concerning properties of cold wind exhibit less variation than those of the Fidu model. Comparing the High-suffix models (i.e., ColdWindVelHigh and ColdWindFluxHigh) to the Low-suffix counterparts (i.e., ColdWindVelLow and ColdWindFluxLow) of the cold mode, we can see from the figure that the strong suppression of the accretion rate appears more frequently and has larger amplitudes due to stronger wind feedback. But the effect of radiative efficiency in the cold mode is less obvious. Finally, the accretion rate of the BHmass model shows strong oscillations, which is similar to that of the HotWindFluxLow model. This is because the Eddington accretion rate is lower due to the lower BH mass. Therefore it becomes easier for the accretion to enter into the cold mode given the same amount of mass supply from the stellar wind, which produces a stronger wind. l, we find that the galaxy spends more time in the low-luminosity phase with increasing mass flux and velocity of the hot wind as shown by HotWindFluxLow, HotWindFluxHigh, and HotWindVel. Correspondingly the BH growth and the datum line luminosity decline as we mentioned above. Yet all of these three models spend less time in the cold mode. We suppose that the reason of HotWindFluxLow is unphysical as we have discussed already. Furthermore, all of these three models differ from the observations by Ho (2009), indicating these cases are a rarity. Although the galaxies of HotWindVel and HotWindFluxHigh stay longer time in the hot mode, the emitted energy fraction of the cold mode is higher than that of Fidu. This is because the energy/time shown here is the proportion of the total energy/time. Owing to the fact that they spend more time during the lower luminosity phase, in other words, the datum line luminosity is lower, the total energy they emit is lower than that of Fidu, which causes their cumulative energy above 2% Edd relatively higher. Overall, the variation of the cumulative time and energy above 2% Edd is within a factor of 2.
BH mass growth
The BH mass growth for various models are listed in the second column of Table 2. Generally all the models yield mass growth of less than 4% of the initial BH mass, suggesting that the AGNs are quiescent overall regardless of the parameters we explore. Moreover, the models with stronger wind or radiation have lower BH mass growth, suggesting that both AGN wind and radiation have negative effects on the BH mass growth, which is easy to understand. Specifically, as a result of increasing the wind velocity by a factor of three, the HotWindVel model has lower BH mass growth due to its lower typical BH accretion rate in the hot mode. The HotWind-FluxHigh model reduces the BH mass growth by over a factor of two, while the HotWindFluxLow model has higher BH mass growth due to the lower mass flux of the wind. In the cold mode, while the BHs have over 50% higher mass growth by reducing velocity and mass flux of the cold mode as shown by the ColdWindVelLow and ColdWindFluxLow models, respectively, the BH mass varies little when increasing the velocity and mass flux by the same magnitude, as shown by the ColdWindVelHigh and ColdWindFluxHigh models. This is caused by the small fraction of BH mass growth in the cold mode. Since the growth of BH mass is already dominated by the accretion in the hot mode, stronger feedback in the cold mode reduces little BH growth. However, weaker feedback in the cold mode can increase the accretion during the cold mode significantly, thus resulting in more substantial BH mass variations.
The models with stronger radiation either in the hot mode or the cold mode show negative effects on the BH accretion. Weaker * The fraction of time that galaxy spends above 0.02 Edd . ** The BH mass growth of the Fidu model is an order of magnitude smaller than that of the fullFB model in Paper I. It is reasonable considering an order of magnitude lower typical BH accretion rate for the Fidu model. radiation in the cold mode represented by the ColdRadLow produces higher BH mass growth. But the effect of radiation is smaller than the wind, consistent with Paper I. However, we should be cautious in drawing the conclusion that wind is more important than radiation in controlling the BH accretion, since we have not considered dust in our simulations, which might play an important role in the radiation feedback processes.
Finally, the BHmass model shows much smaller growth of the BH mass than that of the Fidu model. The reasons are twofold. One is that the lower BH mass provides a shallower gravitational potential that reduces the accretion. The other is that the Eddington accretion rate is lower for the BHmass model. Consequently, the accretion is easier to enter the cold mode given the same amount of mass supply from stellar wind, while the cold mode has stronger wind and radiation feedback. However, the percentage of the mass growth of the BHmass model is higher than that of the Fidu model, caused by the higher ratio of stellar mass to the initial BH mass. Figure 6 shows the distribution of the newly born stars for various models. We ignore the gravitational potential provided by the new formed stars and the stellar movements. The results differ significantly from Paper I because we now apply different density and temperature thresholds to star formation, i.e., only the gas with density over 1 cm −3 and temperature under 4 × 10 4 K can form stars, and star formation efficiency SF is reduced by an order of magnitude. Generally, we find that star formation is significantly reduced compared to that of the fullFB model in Paper I, and is highly concentrated inside 1 kpc. The new stars density at the end of the simulations, shown in the left panel of Figure 6, decreases with increasing radius due to the increasing star formation timescale and decreasing gas density. The total mass of new stars is around (4 − 9) × 10 6 for various models, shown in the middle column of Table 2, which accounts for less than 0.01% of the total stellar mass. In fact, the total star formation is lower than the BH mass growth for all the models.
Star formation
We do not find obvious correlations between the wind properties and the total star formation, as shown by the upper right panel of Figure 6, which presents the cumulative mass of new stars integrated over time for models with various wind parameters. In the simulations, the sites of star formation are determined by the distribution of the cold and dense gas. A stronger AGN wind is able to push the gas toward larger radii, and results in the accumulation and condensation of the cold gas there. Thus higher wind power increases star formation at large radii while decreasing it at small radii. On the other hand, wind feedback does strongly suppress the star formation, as shown by, e.g., the right plot of Fig. 8 in Paper I. The absence of the correlation found here is because the range of wind parameters we have explored is relatively too small.
In the center right panel, the HotRad model shows that higher radiative efficiency of the hot mode yields lower star formation inside 100 pc, caused perhaps by stronger radiative heating and radiation pressure; while at large radii, star formation is higher, which is perhaps because the radiation pressure pushes the gas from small radii to large radii, similar to the role played by wind. For radiation in the cold mode, however, stronger radiation suppresses the star formation at both small and large radius, which is likely because the radiative heating is very strong in the cold mode. The bottom right panel shows that the lower BH mass of the BHmass model produces higher star formation overall.
Apart from the spatial properties of star formation, another major difference from Paper I is the temporal evolution of the specific star formation rate (sSFR). Figure 7 shows the evolution of the sSFR for the Fidu model as an example. Throughout the lifetime of the galaxy, the sSFR is weak for most of the time with the time-averaged value of 10 −15 yr −1 . The quiescence of the galaxy, however, is punctuated by sporadic and short episodes of strong star formation outbursts that can reach the sSFR of 10 −12 yr −1 especially at the early stage. This star formation history is strongly correlated with the AGN luminosity light curve. We find the concurrent outbursts in the star formation and the AGN light curve, and the same quiescent time during 7-9 Gyr and 9-11 Gyr. This indicates the tight link between star formation and BH accretion. As mentioned before, the star formation takes place in the cold, dense clumps. If these clumps are not consumed by star formation before they fall into the central BH, a strong AGN outburst is expected afterwards. Both processes are driven by cooling flow infall of gas.
AGN Duty cycle
The duty cycles for different models are shown in the left panel of Figure 8. Each line represents the fractions of time that the galaxy spends above a given Eddington ratio. For the Fidu model, the AGN spends over 99% of its evolution time with the Eddington ratio below 2 × 10 −4 . Comparing to 80% of the total time in the fullFB model of Paper I, the AGN of the Fidu model spends more time in the low-luminosity phase, which is consistent with the lower typical AGN luminosity mentioned in Section 4.1.
In general, wind and radiation of the hot mode exert more remarkable influences on the cumulative time during the lowluminosity phase, while wind and radiation of the cold mode have larger effects above 10 −2 bol / Edd . Specifically, we can see from the top left panel of Figure 8 that the AGN spends more time below the Eddington ratio of 10 −4 bol / Edd as the velocity or mass flux of wind in the hot mode increases. For the Eddington ratio above 10 −2 bol / Edd , however, the AGN spends less time with increasing velocity or mass flux of wind in the cold mode. Particularly, the AGN of the ColdWindFluxHigh model spends little time in the cold mode throughout its evolution. This suggests that AGN wind of both hot and cold modes has negative effects on the AGN duty cycle, which is consistent with the findings that the AGN wind suppresses the BH accretion in Section 4.2.
The center left panel of Figure 8 shows the models with regard to radiation. The AGN in the HotRad model spends less time in the low-luminosity phase compared to the Fidu model. Yet we caution that BH accretion is slightly suppressed when the radiation is stronger as discussed in Section 4.2. The higher AGN luminosity is caused directly by the higher radiative efficiency. From the ColdRadLow model to the ColdRadHigh model, the AGN has decreasing cumulative time above a given Eddington ratio owing to the declining BH accretion rate which overwhelms the effect of increasing radiative efficiency. The bottom left panel exhibits the duty cycles of the BHmass model. The BHmass model shows more cumulative time of the AGN in the high-luminosity phase due to the lower value of Eddington luminosity. The specific numbers of the AGN duty cycles above 0.02 Edd are listed in the right column of Table 2. It is obvious to find that the duty cycles of the cold-mode models have stronger variations than those of the hot-mode models.
The right panel of Figure 8 shows the fraction of total energy emitted above a given Eddington ratio. Compared to the left panel, models reveal minor differences at low Eddington ratios, while the differences are amplified at high Eddington ratios. For the Fidu model, the AGN emits roughly 25% of its total energy in the cold mode. This number is larger than the 6% in the fullFB model of Paper I, which is caused by the correction of the bug that over-produces the energy flux of the wind in the cold mode. However, it is still not consistent with "the Soltan argument", which claims that AGNs emit most of their energy during the high-luminosity phase (Soltan 1982;Yu & Tremaine 2002;Marconi et al. 2004). As already discussed in Paper I, there are two main reasons for this discrepancy. The most important reason is that our simulations begin from 2 Gyr with a massive, mature elliptical galaxy. The central BH mass is already 10 9 with minor growth in its subsequent evolution via accreting mass supplied by the stellar wind. In other words, the essential parts of the BH mass growth and the energy release occur before our simulations begin. The other reason is that we only focus on an isolated galaxy without considering the gaseous halo and cosmological inflow. The fraction of the energy emitted is expected to increase if we include the gas supply from the cosmological inflow and the gaseous halo.
Among all models, the fraction of cumulative energy emitted in the cold mode (the Eddington ratio > 10 −2 bol / Edd ) is the largest for the ColdWindFluxLow model, which is ∼ 70% of the total energy, while for the ColdWindFluxHigh model, almost zero energy is emitted in the cold mode. This suggests that the properties of wind in the cold mode are crucial to the high-luminosity phase of the galaxy. For the models studying the wind of the hot mode, it is interesting to find that although the time spent in the cold mode is roughly the same for the HotWindFluxHigh and HotWindFluxLow models, the AGN of the HotWindFluxHigh model emits a larger fraction of energy in the cold mode than that of the HotWindFluxLow model. This is because, on one hand, we find that the totally emitted energy in the cold mode in the two models are roughly the same; on the other hand, the typical AGN luminosity of the HotWindFluxHigh model in the hot mode is lower than that of the HotWindFluxLow model thus the emitted energy in the hot mode of the HotWindFlux-High model is lower. This is also the reason for the larger fraction of the energy in the cold mode for the HotWindVel model.
In the center right panel, the AGN in the HotRad model emits a smaller fraction of energy in the cold mode than that of the Fidu model because of its higher total energy. The ColdRadLow model has a larger fraction of energy emitted in the cold mode due to its longer cumulative time spent in the cold mode. In particular, almost half of its energy is emitted in the cold mode for the ColdRadLow model, suggesting the non-negligible role of radiation. The Col-dRadHigh model also has a larger fraction of energy emitted in the cold mode, although its AGN spends less time in the cold mode. This is caused by the higher radiative efficiency for the ColdRad-High model. The bottom right panel presents the results for the BHmass model. The AGN of the BHmass model emits nearly half of the total energy in the cold mode.
SUMMARY AND DISCUSSION
In Paper I, we have proposed a sub-grid model of AGN feedback and used it to study the AGN feedback in an elliptical galaxy. In that work, all the model parameters are set to be at their typical values. But some uncertainties still exist although black hole accretion is a relatively mature field compared to AGN feedback. In this paper, we perform simulations to study the effects of parameters in the AGN sub-grid model of Paper I. Such a study is also useful for us to understand the role each model component plays in the feedback. The fiducial model is the updated version of the fiducial model in Paper I. Based on this model, we vary one parameter while keeping other parameters unchanged to study its effect. The models are listed in Table 1. By comparing models to the fiducial model, we are able to find the effect of this corresponding parameter on the AGN and galaxy evolution.
AGN Wind suppresses the BH accretion overall. Particularly, the wind velocity in the hot mode is the most important in controling the typical accretion rate and luminosity of the AGN. For example, a hot wind with three times higher velocity results in the typical AGN luminosity two orders of magnitude lower. When the accretion is in the cold mode, a powerful wind stifles the accretion dramatically and renders the accretion back to the hot mode. This feedbackregulated BH emits at the typical luminosity between 10 −6 Edd and 10 −4 Edd for most of the time, which is consistent with the observations on the nearby, early-type galaxies (Ho 2009). In the case of a stronger AGN wind, the mass growth of the BH decreases, and the AGN spends a larger fraction of time in the low-luminosity phase. Star formation, however, takes place in a more complicated way. A direct correlation between the AGN wind power and the total star formation integrated over cosmological time and the whole galaxy is not found, which is because the range of wind parameters explored here is not large enough.
AGN radiation also suppresses the BH accretion, although not as violently as the AGN wind. But it is premature to conclude that AGN wind plays a more important role than radiation in modulating BH accretion, because we have not considered dust in our simulations. Despite the negative effect on BH accretion, compared to the Fidu model, when the radiation of the hot mode becomes stronger, the AGN duty cycle shows a smaller fraction of time spent in the low-luminosity phase. This is caused by the larger radiative efficiency adopted. Similar to AGN wind, radiation of the hot mode also plays a role in reducing star formation at small radii while enhancing star formation at large radii. For radiation of the cold mode, however, stronger radiation suppresses the star formation at both small and large radii compared to the Fidu model.
Finally, we perform an extra model to investigate the effect of a lower initial BH mass, which is possibly suggested by the obser-vations (Ho & Kim 2014;Reines & Volonteri 2015;Shankar et al. 2016Shankar et al. , 2019. A direct consequence is the stronger oscillations of the BH accretion rate and AGN luminosity. The AGN spends more time and emits more energy in the cold mode, and correspondingly the percentage of the BH mass growth increases. The star formation is also enhanced.
In summary, however, given all the parameters we explore, the variations of the mass growth of BHs and the total star formation are within an order of magnitude, suggesting that our results are relatively insensitive to the parameters of the AGN sub-grid model.
In the initial state of our simulations, the galaxy already contains a very massive black hole and the star formation is already very low. We have shown in this paper (and our previous series of works, e.g., Yuan et al. 2018) that AGN feedback can keep the galaxy quenched. However, the gaseous halo and the cosmological inflow have so far been neglected. When these components are taken into account, it will be important to assess whether AGN feedback can still keep the galaxy quenched. This issue will be discussed in detail in our subsequent paper (Zhu et al. 2020, in preparation). Another issue is the effect of AGN feedback in the high redshift Universe. In that case, the black hole is small and the gas is abundant, so both the activity of the central AGN and star formation are much stronger. It is then interesting to ask what is the effect of AGN feedback; specifically, whether the AGN feedback can quench the galaxies. This question will be investigated in our future works. | 13,272 | 2020-11-30T00:00:00.000 | [
"Physics"
] |
Hydrogenolysis of cellulose to valuable chemicals over activated carbon supported mono-and bimetallic nickel/tungsten catalysts
The hydrogenolysis of cellulose was systematically investigated at 488 K and under 65 bar H 2 in the absence of a catalyst and over six di ff erent catalytic systems containing nickel and/or tungsten on activated carbon (AC) in order to understand the role of individual active components (AC, W/AC, Ni/AC, a physical mixture of Ni/AC + W/AC, and two di ff erently prepared Ni/W/AC catalysts) with respect to the product distribution wherein polyols ( e.g. ethylene glycol (EG), propylene glycol, and sorbitol) are highly valuable chemicals. Without a catalyst and when using only AC, a hydrochar, due to hydrothermal carbon-ization of cellulose, was obtained. Although the catalyst W/AC was e ff ective for the degradation of cellulose (high conversion of 90%) and facilitates C – C bond cleavage, selective production of any product was not possible, and the carbon e ffi ciency (CEL) is the lowest (9.1%). Also, with highly dispersed Ni on AC the polyol yield was only 5.3%. The desired behavior showed Ni/W/AC provided its preparation occurs by a two-step incipient wetness (IW) technique. Starting with a remarkably high cellulose/catalyst ratio of 10, a cellulose conversion of 88.4%, CEL of 78.4% and EG yield of 43.7% were achieved (overall polyol yield = 62.1%). Drastically lower yields towards EG by an order of magnitude and decreased CEL were obtained by a co-impregnated Ni/W/AC catalyst and the Ni/AC + W/AC mixture. By the detailed analysis via XRD, TPR and CO chemisorption, it can be concluded that in the Ni/W/AC catalyst, after the fi rst IW step of the activated carbon with ammonium metatungstate hydrate and the following reduction in H 2 up to 1128 K, metallic tungsten was formed. This leads, in combination with the hydrogenation properties of nickel introduced in the second IW step, to a virgin bimetallic catalyst, i
Introduction
Development of novel chemical processes replacing fossil fuels is one of the main roles of chemistry today. 1 Besides wind-, solar-and hydro-power, which cannot be used for the production of chemicals, the utilization of biomass as feedstock is a sustainable and green method to produce valuable fuels and chemicals. 2Cellulose constitutes the biggest part of lignocellulosic biomass and is the most abundant biopolymer in the world.][5][6] The early work in 1913 from Bergius 7 showed that it is possible to form bio-coal from cellulose under high temperature.Currently, catalytic conversion of lignocellulosic biomass via hydrolysis, solvolysis, hydrothermal liquefaction, pyrolysis or gasification are potential ways to produce valuable fuels and chemicals. 8As a homogeneous process, liquid acid catalysis is a very effective way to obtain molecules like glucose, xylose or cellobiose from various types of lignocellulosic biomass. 9Typically, mineral acids like H 2 SO 4 or HCl, organic acids such as various carboxylic acids and p-toluenesulfonic acid show good performance in liquid acid-catalyzed hydrolysis of cellulose.Nevertheless, these types of cellulose utilization suffer from costly product separation, severe corrosion, and neutralization of waste acids.In contrast, heterogeneous catalyst systems have significant advantages (e.g.easy separation and recycling of the catalyst after reaction, a non-corrosive procedure and no waste production). 10n 2006, Fukuoka and Dhepe 11 reported about the hydrogenolysis of cellulose to sugar alcohols (yield of sorbitol 31%) over supported noble metal catalysts in the aqueous phase and under a hydrogen atmosphere.They used ruthenium and platinum on different support materials, for example SiO 2 -Al 2 O 3 , γ-Al 2 O 3 or HUSY.The combination of a metal, which is a hydrogenation component, and a solid acid support which can replace the liquid acid seems to be a very promising green route for cellulose conversion.After this pioneering work, many researchers use various combinations of hydrogenation active sites and acidic solids for the production of commodity chemicals on the basis of cellulose.For example, Ru/C 12 or a combination of Ru/C with heteropoly acids 13 (e.g.H 4 SiW 12 O 40 ) is very effective for the formation of hexitols (sorbitol and mannitol).If ethylene glycol (EG) is the desired product, catalysts with tungsten species represent a very good choice.For example, a Ni-promoted tungsten carbide catalyst (2%Ni-30% W 2 C/AC; AC = activated carbon) catalyses the conversion of cellulose to ethylene glycol with 61% yield at full conversion of cellulose within 30 minutes at 518 K and 60 bar hydrogen pressure (measured at room temperature) with a cellulose/catalyst ratio of 3.3. 14Furthermore, it was shown that not only the W 2 C catalyst but also WO 3 , W or H 2 WO 4 combined with nickel or noble metals (Ru, Pt, Pd, and Ir) were effective for the production of polyols. 15,16By optimization of the nickel (or noble metal)/tungsten ratio, high yields of ethylene glycol (>60%) can be achieved.Tai et al. 17 assumed that tungsten bronze (H x WO 3 ) is the real active component, which is formed during reaction via dissolving of tungsten compounds.Cao 18 focused his attention on SBA-15 supported Ni-WO 3 catalysts.A catalyst with a nickel load of 3% and a tungsten trioxide load of 15% had a full conversion of cellulose and a yield of ethylene glycol of 70.7%.The cellulose/catalyst ratio was 4 and the reaction was carried out at 503 K and 60 bar hydrogen pressure measured at room temperature.
In this work, we systematically investigate the behavior of different catalytic systems containing nickel and/or tungsten on activated carbon, which was selected because of the hydrothermal stability, for the hydrogenolysis of cellulose in order to understand the role of individual active components during the conversion of cellulose, and the influence on the obtained product distribution.By a detailed co-analysis of taken liquid samples by HPLC, GC and GC-MS, we were able to quantify 28 various chemical compounds formed during the conversion of microcrystalline cellulose and to follow their complex reaction pathways.We additionally prepared Ni-W catalysts on different kinds of activated carbon and evaluated their performance in the hydrogenolysis of cellulose.The structural features of the catalysts, characterized by means of X-ray diffraction (XRD), temperature-programmed reduction (TPR) and CO chemisorption, were markedly dependent on the preparation method, and had a strong influence on cellulose hydrogenolysis.
Catalytical reaction
The experiments were carried out in a stainless steel autoclave (Parr Instrument, 300 mL).Usually, 0.5 g catalyst, 5 g microcrystalline cellulose (Merck) and 100 mL water were stirred (1000 rpm) for 3 hours at a reaction temperature of 488 K and under a hydrogen atmosphere of 65 bar (at reaction temperature).For the reaction the nickel containing catalysts were in situ reduced under a hydrogen atmosphere at 753 K.During the reaction, liquid-phase samples were taken and analyzed by HPLC, GC and GC-MS.
Catalyst preparation
The catalysts used in this work were usually prepared via the incipient wetness (IW) method.IW-impregnation of a support material with an aqueous precursor solution and, subsequently, thermal pre-treatment and reduction were performed.In detail, the support material (activated carbon (AC), Norit Rox 0.8 from Cabot Norit Company/Nederland, particle size 0.063-0.2mm) was first impregnated with aqueous solution of ammonium metatungstate hydrate and then dried at 383 K for 15 h.The dried sample was pretreated under an argon atmosphere at 823 K for 300 minutes.Then, the hydrogen reduction was conducted with a heating ramp: from room temperature to 808 K in 26 minutes and then to 1128 K in 64 minutes.The last step of this procedure was holding this temperature for 30 minutes.After cooling down, the catalyst W/AC was obtained.A second IW-impregnation of the W/AC catalyst with an aqueous solution of nickel nitrate hexahydrate and, subsequently, drying under air conditions at 383 K for 15 h were performed.Subsequently, the catalyst was reduced under hydrogen flow at 713 K for two hours, cooled down and passivated in air/argon flow for 2 hours.The resulting catalyst is Ni/W/AC.The catalyst Ni/AC was prepared only by the second IW-impregnation procedure as described above.The catalyst referred to as Ni/W/AC-coIW was prepared via simultaneous co-IW-impregnation of ammonium metatungstate hydrate and nickel nitrate hexahydrate aqueous solution followed by the same treatment as for the W/AC catalyst.Also, the AC type was varied (besides Norit Rox 0.8, also Elorit, MRX, SX Plus, and SX Ultra).Note that the use of different AC particle sizes has been avoided by pestling and sieving each kind of activated carbons in a fraction in the 0.063-0.2mm range, but the particle size distribution within this range is still unknown.
2.3.2TPR.Temperature programmed reduction measurements were done on an apparatus TPD/R/O 1100 (Thermo Fisher Scientific).Therefore the quartz U-tube reactor was loaded with a sample and the catalyst was pretreated in Ar flow (30 mL min −1 ) at 383 K for 60 minutes and cooled to 303 K.The H 2 -TPR was performed using 30 mL min −1 of 5.1 vol% H 2 /Ar by heating the sample from room temperature (303 K) to 1073 K with a heating rate of 5 K min −1 while monitoring the TCD signal.
2.3.3CO chemisorption.Prior to the determination of the dispersion measurements via CO pulse chemisorptions, the catalysts were reduced at 623 K for 1 hour followed by cooling down to 273 K in H 2 flow.CO chemisorption was performed according to a well-established standard procedure 19 and measured at 273 K by introducing CO pulses with a volume of 0.473 mL into the flowing hydrogen.The latter can be used as a carrier gas because it was shown that pre-adsorption of hydrogen on nickel did not influence the amount of adsorbed CO. 20
Experimental evaluation
The conversion X of cellulose was determined gravimetrically based on the weight loss of cellulose during the reaction.In detail, after each experiment, the reaction solution was filtered and the solid residues of cellulose and the catalyst obtained in the folded filter were dried overnight.The difference between the cellulose weight at the beginning of the reaction m Cellulose,start , plus the weight of the catalyst m Cat plus the weight of the empty folded filter m FF , and the solid residues inclusive filter m FF,residues yields the amount of utilized cellulose m Cellulose,utilized .
It is important to mention that the determination of cellulose conversion is not valid if any unknown solid products are formed during the reaction.The yields Y i in this work are based on the moles of carbon in the product i divided by the moles of carbon in cellulose.
where c i is the carbon content in the product i; c C 6 H 10 O 5 equals 6; n Cellulose,start is the molar amount of cellulose at the beginning of the reaction; and n i is the molar amount of the product i determined by GC or HPLC.The carbon efficiency CEL was calculated using the ratio of carbon of the known liquid phase products to carbon of all liquid products (known + unknown liquid products).
Results and discussion
The heterogeneously catalysed conversion of cellulose to valuable chemicals was under examination in this work.Polyols, e.g.ethylene glycol, propylene glycol and sorbitol, were the desired products by a hydrogenolysis process with the heterogeneous Ni-W catalysts described in section 2. However, six different materials (Table 1), including also monometallic catalysts, were prepared to understand the role of individual catalyst components for cellulose hydrogenolysis/hydrogenation.Thus, the performance of the Ni/W/AC catalyst prepared by the described two-step IW impregnation technique was compared to the following catalysts: AC, W/AC, Ni/AC, a physical mixture of Ni/AC + W/AC, and the co-impregnated catalyst Ni/W/AC_coIW.The experiments showed remarkable differences in product spectra and conversion of cellulose (Table 2).
The reaction without a solid catalyst was also examined.
Reaction without a catalyst
Under hydrothermal conditions a hydrochar, a muddy black solid product, was obtained due to hydrothermal carbonization (HC).As mentioned above, if unknown solid products were formed during the reaction, determination of cellulose conversion by eqn ( 1) is incorrect because the obtained solid products are included in the folded filter.Visual tests showed that no white leftover of cellulose could be found in the solid residues.Because of this, it was assumed that full conversion of cellulose in the experiment without a catalyst was achieved.
A change in chemical structure of the hydrochar obtained by the HC reaction of cellulose in comparison with raw cellulose was examined by an analytical method based on diffuse reflectance coupled with infrared Fourier transform spectroscopy (DRIFTS, Bruker Equinox 55 IR spectrometer).Bands typical for cellulosic material and its carbonaceous degradation products were found in our samples (Fig. 1).The hydrochar spectra consist of several bands: a wide band at 3650-3000 cm −1 shows the presence of O-H stretching vibrations in hydroxyl or carboxyl groups. 21Bands in the 1300-1000 cm −1 region present C-O stretching vibrations, characteristic for hydroxyl, phenol or ether and also for O-H bending vibrations. 22,23The presence of other oxygen groups is suggested by the band at 1710 cm −1 , which corresponds to CvO vibrations of carbonyl, quinone, ester or carboxyl 24 groups.In the region 1450-1380 cm −1 , C-H deformations (symmetric or asymmetric) can be found. 22A band typical for the aromatic skeletal is at 1510 cm −1 . 22The signal at 1610 cm −1 belongs to CvC vibrations of aromatic rings. 25hemical structure differences are evident in DRIFTS traces between raw cellulose and hydrochar.A significant increase (in comparison to raw cellulose) of the intensity of the bands at 1510 cm −1 and 1610 cm −1 suggests that more aromatic structures are present in the hydrochar.Also the strong increase of the band at 1710 cm −1 which is typical for carbonyl, quinine, ester or carboxyl CvO vibrations confirms that the cellulose was strongly changed.Sevilla et al. 26 reported about dehydration during the HC reaction of cellulose, which was identified by a decrease (in relation to raw cellulose) of intensity of the bands at 1460-1000 cm −1 and 3700-3000 cm −1 .These results were also confirmed during the measurements of our samples.
Aldehydes, ketones, and organic acids (e.g.hydroxy acetone, 1-hydroxy-2-butanone, 3-hydroxy-2-butanone (acetoine), or acetic acid) were detected in the liquid phase (Table 2).C6 and C5 compounds like levulinic acid, furfural, xylitol, HMF or glucose were the predominant products.The received product distribution, the low summarized yield of known products in the liquid phase (11.8%) and also the low carbon efficiency (16.8%) prove true the mechanism of hydrothermal carbonization of cellulose giving rise to the formation of hydrochar, which is a well-studied process. 26,27The latter includes several steps (e.g., cellulose hydrolysis, glucose isomerisation to fructose, dehydration, aldol condensation, agglomeration/growth of particles producing hydrochar) 26 explaining the observed products.
Activated carbon
Very similar products to those obtained in experiments without a catalyst were also found in experiments using only AC (Table 2).However, the amount of identified known soluble compounds was higher in comparison with the HC reaction with a summarized yield of ∼23%.Especially, HMF, furfural and levulinic acid were produced with yields of 6%, 1.8% and 4.4%, respectively.9][30][31][32][33][34][35][36][37][38] For the reactions over AC or sulfonated AC, usually glucose is a degradation product 39,40 which was produced in our case with minor yield (1.8%).However, hydrochar was also formed in the reaction with AC and an unknown amount of carbon atoms ends up in undesired solid products.Consequently, similar to the HC reaction, the value of the carbon efficiency coefficient was low, 29.9%.Short-chain polyols like propylene glycol or butanediol were not detectable and the yield of ethylene glycol was <0.1%.
W/AC
This catalyst showed a high cellulose conversion of nearly 90% (W/AC, Table 2).Although this high value was achieved, the carbon efficiency coefficient was only 9.1%.Note that the amount of unknown soluble products was higher compared to other experiments.Non-identified gray/black oily solid products were also obtained during this reaction, which shows that an unknown amount of carbon can be found in the solid phase.It is important to mention that, based on a visual test, the solid product was different compared to the hydrochar obtained during the HC reaction of cellulose without a catalyst and with AC.Due to the high activity of W/AC, the formation of gas phase products is also possible and, indeed, methane and carbon dioxide were formed during the reaction.It is interesting to note that in the case of W/AC the yield of typical acid-catalyzed degradation products of cellulose (HMF, furfural, levulinic acid) is considerably lower compared to AC (see Table 2).The amount of C6 and C5 compounds formed over W/AC was drastically smaller (2.1%) than with AC (17%) or without the catalyst (7.6%) and short-chain compounds like acetic acid were predominant.Consequently, the catalyst containing tungsten is effective for the degradation of cellulose in order to achieve high cellulose conversion and facilitates C-C bond cleavage, but a highly selective production of any product is not possible.
Ni/AC
If nickel on activated carbon was used as a catalyst, a new product distribution was achieved (Table 2).The active sites of nickel catalysts are well-known for their hydrogenation properties 30 and during the conversion of cellulose many different reaction pathways are possible.Short-chain compounds like ethylene glycol, propylene glycol, butanediol, or glycerol were formed.Also, other polyols like sorbitol, erythritol, 1,2-hexanediol, or 1,2,6-hexanetriol were built up in reactions with Ni/AC.Interestingly, ketones like hydroxy acetone, 1-hydroxy-2-butanone, acetonyl acetone, and acetoine were identified with yields of 9.1%, 7.3%, 2.6% and 1.5%, respectively.The experiment with Ni/AC opened reaction pathways towards the production of polyols, but their yield is rather low (5.3%).In the next step, catalysts which compromise nickel and tungsten in one catalytic system were studied.Surprisingly, the yield of these compounds for the catalyst Ni/ W/AC was only 4.3%.As expected, the amount of polyols was also changed in these investigated catalyst systems.For the reaction with the mixture of Ni/AC and W/AC and the Ni/W/ AC_coIM catalyst, the yield of polyols was only 7.9% and 10.3%, respectively.This is different from the literature, 41 where it was claimed that for high EG yields the two functional components for cellulose degradation and hydrogenation can be combined by simple co-reduction or by simple physical mixing.The desired behavior in order to produce ethylene glycol with a very high yield showed the catalyst Ni/W/AC.Starting with a high cellulose/catalyst ratio of 10, a cellulose conversion of 88.4% and a yield of ethylene glycol of 43.7% were achieved.The summarized yield of all obtained polyols is equal to 62.1%.This corresponds to a space-time-yield of 1.6 g ethylene glycol (g catalyst h) −1 and 2.2 g polyols (g catalyst h) −1 .Note that the quantity of ketones was drastically smaller compared to the co-impregnated catalyst and the mixed catalyst.
Structural features of the catalysts
The XRD patterns of the catalysts are shown in Fig. 2. Keeping in mind that, after IW impregnation of the activated carbon with ammonium metatungstate hydrate and the following drying steps (see section 2.2), this material was reduced in H 2 up to 1128 K, the formation of metallic tungsten can be expected, which is typically formed in the temperature range 1023-1273 K. 42 For the catalyst Ni/AC (Fig. 2, curve (a)) no peak for nickel could be found.The catalyst Ni/W/AC (Fig. 2, curve (c)) showed a very small peak corresponding to nickel at 2Theta = 44.2°.This can be caused by a passivation of nickel catalysts after the preparation procedure.Because of the passivation, we reduced the catalyst Ni/W/AC in situ before the XRD pattern was measured (Fig. 2, curve (d)).After the in situ reduction, peaks corresponding to metallic nickel species (at 2Theta = 44.2°,51.5°, and 75.7°) were obtained with a higher intensity.The comparison of the passivated and in situ reduced Ni/W/AC catalyst is much more clearly illustrated in Fig. 3.If the catalyst was prepared via the co-IW method (Ni/W/AC_coIW), then W 2 C, Ni 2 W 4 C and WO 3 were detected, but the peaks were very small (not shown in this paper).
TPR curves (Fig. 4) of various catalysts give more information about H 2 consumption, reducibility of the catalyst surface and also about metal-metal interaction.For the AC support (Fig. 4, curve (a)) a broad peak with low intensity around 900 K was observed.By coupling TPR with a mass spectrometer, CH 3 and CH 4 signals were detected, which clearly indicates the methanation of this material.The W/AC catalyst showed a very similar TPR profile (Fig. 4, curve (b)) in comparison with AC, suggesting no existence of reducible tungsten species.However, as mentioned above this is caused by the use of a high reduction temperature (1128 K) during the preparation procedure for tungsten pre-catalyst reduction.This also corresponds with XRD results showing only the presence of irreducible metallic W in the W/AC catalyst.For the catalysts containing nickel, there are more sets of peaks in TPR profiles, one in the region of 400-500 K, and another broad peak around 750 K.Because of the passivation procedure after the catalyst preparation, NiO was formed.The observed reduction peak in the range of 400-500 K could be allocated to NiO reduction, which is typical for Ni catalysts.The second reduction peak belongs to the methanation of the AC support, which was confirmed by the presence of CH 3 and CH 4 signals in the mass spectrometer (Fig. 5).As expected, the Ni/AC catalyst and a physical mixture of Ni/AC + W/AC gave related profiles (Fig. 4, curves (c) and (d)).Therein, no interaction was found between tungsten and nickel.The catalyst Ni/W/AC (Fig. 4, curve (f )) showed lower hydrogen consumption for the first reduction peak at 400-500 K, in contrast to Ni/AC, where the hydrogen consumption for this peak was very high (comprises more NiO).It can be disclosed that the catalyst obtained after the two-step IW impregnation must contain two active constituent parts: metallic tungsten for the degradation of cellulose and nickel for the subsequent hydrogenation step.Both components must be metallic in the virgin bimetallic catalyst, i.e. before hydrogenolysis starts, which is a prerequisite for high polyol production.Indeed, it is confirmed by the high yield of polyols (62%) during reaction with the Ni/W/AC.That the catalyst preparation method has a significant effect on reduction properties and performance of a catalyst is wellknown.This influence can be seen in the TPR pattern of the co-impregnated catalyst Ni/W/AC_coIW (Fig. 4, curve (e)).In comparison with the Ni/W/AC catalyst prepared via the twostep IW procedure, a different reduction behavior was achieved.A peak with a plateau was observed in the range from 500 K to 600 K and the hydrogen consumption was significantly lower in comparison with the Ni/W/AC catalyst.With reference to the XRD analysis of the Ni/W/AC_coIW catalyst, where Ni 2 W 4 C (besides WO 3 ) was found, it can be concluded that the different reduction behavior is due to the simultaneous impregnation of the active carbon with the Ni and W precursors, leading to the above mentioned nickel-tungsten phase.If the latter is present in the catalyst the yield of ethylene glycol and the carbon efficiency are much lower (Table 2).
Chemisorption of CO conducted at 273 K with the catalysts investigated in this work (Table 1) was very different.No CO adsorption took place on the catalysts W/AC and AC.This corresponds with the experimental results that no or very small amounts of hydrogenated compounds were formed during the reactions over these catalysts.The Ni/AC catalyst adsorbed a high amount of CO molecules (1097.7 μmol g Cat −1 ).The calculated dispersion of nickel was 64%, and from that, the average nickel particle size can be calculated, which is very small (2 nm).For catalysts containing tungsten, a significant decrease in CO adsorption was obtained.The calculated dispersions are not representative in these cases (values in brackets in Table 1) because of possible interactions between tungsten and nickel species (e.g.coverage of particles).
Influence of the AC support
Consequently, the most effective catalyst for ethylene glycol production from cellulose was the Ni/W/AC catalyst prepared via the two-step IW-impregnation.However, not only nickel and tungsten have a significant influence on selective production of ethylene glycol and other polyols.The kind of support material is also important and can improve the formation of desired compounds.To see this influence, nickel/ tungsten catalysts on different activated carbons were prepared and tested (Table 3, Fig. 6).Note that ethylene glycol is the main product with high yield (> 40%) for catalysts exhibiting surface areas >500 m 2 g −1 , independent of the AC type used, with the exception of Ni/W/Elorit (EG yield: 29.3%).With increasing surface area the conversion of cellulose was increased (Fig. 6).However, the differences in activity/spacetime-yield may be not only the result of their specific surface areas, but are also due to different production technologies of the activated carbons (e.g., Elorit and MRX: the steam activation process; SX Plus, SX Ultra and Norit Rox 0.8: the steam activation process and acid washing), and their surface properties.It is well-known that activated carbon in the presence of water exhibits acidic character because of the functional groups (COOH, phenolic, anhydride, etc.), 43 however, it is beyond the aim of the present study to characterize the latter in detail, for example by IR spectroscopy.The obtained spacetime-yield is the highest for Ni/W/AC SX Plus (1.9 g ethylene glycol (g catalyst h) −1 and 2.5 g polyols (g catalyst h) −1 ).
Recycling/active species
For heterogeneous catalysts, stability is one of the most important challenges.Therein, we conducted a recycling test with the Ni/W/AC-SX Ultra catalyst.The recycling test was carried out in an autoclave with 1 g catalyst, 5 g cellulose and 100 mL deionized water under a reaction temperature of 498 K and a 65 bar hydrogen atmosphere.For the re-use, it is necessary to recover the catalyst without residues of cellulose.Therefore, in the recycling test, the reaction temperature of 498 K was employed because of full conversion of cellulose which is achieved at this reaction temperature within 3 hours.At the reaction temperature of 488 K, the cellulose conversion was 88.4%.In Fig. 7, it is obvious that the yield of polyols slightly decreased from 77.8% to 63.4% in the second run.Then, in the third run, a significant decrease in polyol yield to 2.4% was obtained.After the third run, we reduced the recovered catalyst in situ under hydrogen flow followed by the fourth recycling run.The second reduction cannot renew the activity of the catalyst, and the yield of polyols was only 0.3%.Note that along with the almost complete absence of polyols, an increase of production of ketones was noticeable.Interestingly, the ICP analysis of the liquid products shows that after the first run the concentrations of nickel and tungsten in the liquid phase were 0.05 g L −1 and 0.137 g L −1 , respectively.After the second recycling run, concentrations of 0.05 g L −1 of nickel and 0.058 g L −1 of tungsten were found.The losses of Ni and W in the third repeated cycle were 0.037 g L −1 and 0.022 g L −1 , respectively, and in the last run, nearly the same (0.036 g L −1 for nickel and 0.020 g L −1 for tungsten).We can conclude that the deactivation of the Ni/W/AC catalyst is connected with an overall loss of weight of 17.2% for nickel and 7.8% for tungsten during the four recycling tests.Despite the maximum nickel and tungsten leaching after the first run of our experiments the only slight decrease of polyol and EG yield indicates that the Ni/W ratio may be still intact until further leaching of both components during the following re-use experiments changes both the activity and the product distribution.Now mainly ketones were formed (see Fig. 7) due to the decreased nickel concentration on the catalyst surface and, thus, the partial loss of the hydrogenation function.We already observed this mechanism of deactivation in the glucose hydrogenation to sorbitol. 44Note that reaction products of cellulose hydrogenolysis (glucopyranosides, polyols) have metal-binding sites 45 acting as chelating components.Therefore, although in the course of the 1st run, polyol formation is increased on the one hand, this caused leaching and the deactivation process proceeds more and more in the re-use experiments on the other hand.Moreover, polyols like EG play also a role in the formation of nickel tungstate, NiWO 4 , 46 which was detected by XRD measurement after the reaction.The formation of that kind of tungsten bronze 47 under the hydrothermal conditions of our experiments implies the presence of nickel ions (due to leaching) and tungsten oxidation (to WO 3 layers onto W) which depends on temperature and the ratio of the partial pressures of H 2 O/H 2 .For the latter process the reaction rate is approximately equal to 1.6 g W m −2 h −1 at elevated temperature and pressure (423-633 K, 70-80 bar 47 ) similar to the reaction conditions of our work.Following the recently published results, tungsten species in a broad range of valence states (0 to +6) are effective in degradation of cellulose. 16,17,41,48Similar to our observation of surface NiWO 4 , the work of T. Zhang et al. revealed that the interplay of tungstenic acid (H 2 WO 4 ) and soluble hydrogen tungsten bronze (H x WO 3 ) is important in cellulose hydrogenolysis, 17,41 the latter being the active species.
In conclusion, we investigated six catalytic systems for cellulose hydrogenolysis (and the reaction without catalyst) in order to obtain more information about the role of individual active components with respect to the formation of valuable products from cellulose such as polyols, e.g.ethylene glycol.We could confirm that tungsten species are very effective for cellulose degradation in order to achieve very high conversion; however, polyol production with the monometallic catalyst W/AC is low and requires combination with the hydrogenation properties of nickel.Based on our results, the catalyst obtained after the two-step IW impregnation must contain two active constituent parts: metallic tungsten for the degradation of cellulose and nickel for the subsequent hydrogenation step.Our investigation clearly revealed that both components must be metallic in the virgin bimetallic catalyst, i.e. before hydrogenolysis starts.Catalysts in which the two functional components for cellulose degradation and hydrogenation were combined by simple co-reduction or by simple physical mixing did not produce ethylene glycol, and the EG yield is one order of magnitude lower.Under the hydrothermal conditions of hydrogenolysis, in the presence of H 2 and reaction products (e.g.polyols), structural changes of the catalyst surface are possible.Unfortunately, they cannot be monitored by in situ characterization methods which are actually not available in the case of such kinds of gas/liquid/solid-solid reaction systems.
Fig. 1
Fig. 1 DRIFT spectra of raw cellulose (a) and hydrochar produced by the reaction of cellulose without a catalyst (b).
3. 5
Ni/AC + W/AC, Ni/W/AC_coIW, Ni/W/AC Conversions of cellulose and the yields of produced components for our experiments with catalysts containing nickel and tungsten on activated carbon are listed in Table2.To prove the synergetic effect of nickel and tungsten and the influence of structure of these catalysts on the catalytic properties, we tested three different systems: a physical mixture of Ni/AC and W/AC (Ni/AC + W/AC), the co-impregnated Ni/W/ AC_coIW and the Ni/W/AC catalyst prepared with the two-step impregnation procedure.The amounts of formed compounds differed strongly.Interestingly, notable yields of ketones occurred over the mixture of Ni/AC + W/AC and over Ni/W/ AC_coIW, which was similar to the experiment with Ni/AC.Hydroxy acetone, 1-hydroxy-2-butanone, acetonyl acetone, and acetoine were observed with a summarized yield of 20.5% for Ni/AC, 16.1% for Ni/AC + W/AC and 16.6% for Ni/W/AC-coIW.
Fig. 3
Fig. 3 Enlarged detail of the XRD patterns of (c) Ni/W/AC and (d) in situ reduced Ni/W/AC.
Fig. 5
Fig. 5 TCD signal of hydrogen consumption and ion signals for CH 3 and CH 4 during the TPR of Ni/W/AC catalyst.
Table 1
CO uptake, Ni dispersion and particle size of different catalysts a Tungsten loading = 40 wt% relating to WO 3 .Nickel loading = 10 wt%.b See text.
Table 3
Texture data of active carbon supported Ni/W catalysts and space-time-yields of ethylene glycol and polyols in cellulose hydrogenolysis a Manufactured information.b Measurements on Sorptomatic 1990. | 7,343 | 2014-06-24T00:00:00.000 | [
"Chemistry"
] |
Semiconductor Nanocrystals as Light Harvesters in Solar Cells
Photovoltaic cells use semiconductors to convert sunlight into electrical current and are regarded as a key technology for a sustainable energy supply. Quantum dot-based solar cells have shown great potential as next generation, high performance, low-cost photovoltaics due to the outstanding optoelectronic properties of quantum dots and their multiple exciton generation (MEG) capability. This review focuses on QDs as light harvesters in solar cells, including different structures of QD-based solar cells, such as QD heterojunction solar cells, QD-Schottky solar cells, QD-sensitized solar cells and the recent development in organic-inorganic perovskite heterojunction solar cells. Mechanisms, procedures, advantages, disadvantages and the latest results obtained in the field are described. To summarize, a future perspective is offered.
Introduction
To date, more energy from sunlight strikes the Earth in one hour (4.3 × 10 20 J) than all the energy consumed on the planet in a year (4.1 × 10 20 J). There is a huge gap between our present use of solar energy and its potential, which defines the grand challenge in energy research.
Currently, the photovoltaic field is divided into three generations. The first generation of solar cells refers to a single p-n junction of a crystalline Si (c-Si), exhibiting up to 25% conversion efficiency (lab), approaching the theoretical energy conversion efficiency (η) limit of 31% for single c-Si cell devices. This limit was predicted by a thermodynamic calculation of Shockley and Queisser (S-Q) for a photovoltaic conversion of solar irradiance in an ideal two level system [1]. The second generation of OPEN ACCESS solar cells included the use of amorphous-silicon, poly-crystalline-silicon or micro-crystalline-silicon (a-Si, p-Si and mc-Si), cadmium telluride (CdTe) or copper (gallium) indium selenide/sulfide.
The third generation of solar cells, developed over the last decade, aims at conversion efficiencies beyond the S-Q limit of η = 31%. At the same time, their demands include the quality of the light absorbing materials, their arrangement and their $/KW-hour cost. The third generation solar cells are broadly defined as semiconductor devices; however, they differ from the previous generations in a few aspects: (a) First generation solar cells are configured as bulk materials that are subsequently cut into wafers and treated in a "top-down" method of synthesis (silicon being the most prevalent bulk material). Third generation solar cells are configured as thin-films (inorganic layers organic dyes and organic polymers), deposited on supporting substrates or as nanocrystal quantum dots (QDs) embedded in a supporting matrix in a "bottom-up" approach; (b) Third generation solar cells do not necessarily rely only on a traditional single p-n junction configuration for the separation of the photo-generated carriers. Instead, this generation includes the use of tandem cells, composed of a stack of p-n junctions of low-dimensional semiconductor structures. Within the limit of an infinite stack of a cascade with various E g , covering a wide range of the solar spectrum, the ultimate conversion efficiency at one sun intensity can increase to about 66%; (c) Third generation solar cells can be configured as donor-acceptor (D-A) hetero-junctions, with staggered electronic band alignment (named type-II configuration). These D-A devices include the photo-electrochemical cells, polymer solar cells and QD-solar cells.
Semiconductor quantum dots exhibit significant optical and electronic properties, which can be tuned according to their size. They are strongly luminescent [2], with various possibilities of preparation methods to control their size. It is clear that these semiconductor QDs are promising alternatives to molecular species for luminescence applications [2][3][4][5]. A wide variety of papers, reviews and books highlight the vast interest generated by the QDs [6][7][8][9][10][11][12][13] .
This review focuses on QDs as light harvesters in solar cells, including different structures of QD-based solar cells-QD heterojunction solar cells, QD-Schottky solar cells, QD-sensitized solar cells and the recent development in organic-inorganic perovskite heterojunction solar cells. The mechanism, procedures, advantages, disadvantages and latest results obtained are described. In addition, a perspective on the future is offered.
Basic Terms for Photovoltaic Performance
In general photovoltaic (PV) cells, can be modeled as a current source in parallel with a diode. As the intensity of light increases, current is generated by the PV cell. Where there is no light, the PV cell behaves like a diode ( Figure 1). The total current I in an ideal cell is equal to the current I l generated by the photoelectric effect minus the diode current I D , according to the equation: where I 0 is the saturation current of the diode, q is the elementary charge 1.6 × 10 −19 Coulombs, k is a constant of value 1.38 × 10 −23 J/K, T is the cell temperature in Kelvin and V is the measured cell voltage. When taking into account the series and shunt resistances, equation 1 can be expanded to Equation (2), where n is the diode ideality factor (typically between 1 and 2) and R S and R SH represent the series and shunt resistances, respectively: When the voltage is equal to zero, the short circuit can be calculated (Jsc); The Jsc occurs at the beginning of the forward bias sweep. On the other hand, the open circuit voltage occurs when no current passes through the cell.
The solar cell is operated over a wide range of voltages (V) and currents (I). By continuously increasing the applied voltage on an irradiated cell, from V = 0 (with a short circuit current, J sc ), through the point of I = 0 (with an open circuit voltage, V oc ), to a very high value of V, it is possible to determine the maximum-power point at which the cell delivers maximum electrical power; thus, V m × I m = P max in Watts. From that point on, the fill factor, defined as FF = P max /(I sc V oc ), is determined.
A larger fill factor is desirable and corresponds to an I-V sweep that is more square-like. Fill factor is also often represented as a percentage.
The power conversion efficiency (η), defined as the percentage of the solar power that is converted from absorbed light to electrical energy, is estimated [Equation (3)]: (3) where P in is the input light irradiance, which illuminates the cell. Additional details of the operation of solar cells and their principles can be found in review articles and textbooks [17,[24][25][26].
FF V oc J sc P in
QD Heterojunction Solar Cells
In a heterojunction device, the top and bottom layers have different roles. The top layer, or window layer, is a material with a high band gap selected for its transparency to light. The window allows almost all incident light to reach the bottom layer, which is a material with low band gap that readily absorbs light. This light then generates electrons and holes very near the junction, which helps to effectively separate the electrons and holes before they can recombine.
In QD heterojunction solar cells, the bottom layer is composed of compact, mesocopic metal-oxide layers acting as electron collectors. Light is absorbed by the QDs with metal (usually gold or silver) as the top contact without additional electron blocking layers ( Figure 2A). The conduction and valence bands of the QDs permit electron injection and hole transportation to the metal oxide and the metal, respectively ( Figure 2B). The QDs in this cell structure are subsequently deposited layer by layer on the porous metal-oxide film by spin coating or dip coating of a concentrated QD solution. Each layer is cast at a high spinning rate or dipped in concentrated QD solution and then treated briefly with a solution of linker molecules intended to achieve dense and conductive QDs film. This treatment displaces the original ligand and renders the QD insoluble, allowing thin films of several hundred nanometer thicknesses to be created. The number of QD layers deposited on the metal oxide plays an important role for the photovoltaic performance. If the QD layer is too thick, the collection of photogenerated charge carriers is incomplete, while too-thin QD layers show poor light harvesting.
There are several parameters that affect the photovoltaic performance in such a device architecture. The open circuit voltage, fill factor and photocurrent decrease with increasing the QD size; however, inter-particle electron transfer is facilitated in films made of the larger QDs, because for a given film thickness, there is a smaller number of particle boundaries to cross until the electrons arrive at the metal oxide. According to Matt Law and co-authors [18], the mobility of electrons and holes increases by one to two orders of magnitude with an increased QD diameter (e.g., a 2 nm increase in the QD diameter results in a one order of magnitude increase in the electron mobility). The size-mobility trends seem to be driven primarily by the smaller number of hops required for transport through arrays of larger QDs, but may also reflect a systematic decrease in the depth of trap states with decreasing QD band gap. These authors also observed that the carrier mobility is independent of the polydispersity of the QD samples. This fact is rationalized in terms of the smaller band gap, i.e., larger diameter QDs carry most of the current in these QD solids if they can form a percolation network.
Sargent and co-authors [37] demonstrate the possibility of funneling energy toward an acceptor in QD heterojunction solar cells, involving a sequence of layers consisting of quantum dots selected as having different diameters and, therefore, different band gaps. The quantum funnel conveys photoelectrons from their point of generation toward an intended electron acceptor. This kind of solar cell benefitted from an enhanced fill factor. Another study by the same group discussed atomic ligands that make use of monovalent halide anions to enhance electronic transport and passivate surface defects in PbS QD films. Solar cells fabricated following this strategy show up to 6% solar AM1.5G power conversion efficiency [38]. Nozik et al. [39] introduce molybdenum oxide (MoO x ) and vanadium oxide as a hole extraction layer in heterojunction ZnO/PbS quantum dot solar cells. They reported on power conversion efficiency of 4.4% certified by NREL. The hole extraction layer enhances the band bending to allow efficient hole extraction. The shallow traps in the MoO x layer enhance the carrier transport to the metal anode. The same researchers demonstrate, for the first time, superior stability of cells composed of ZnO NCs using air stable 1.3 eV PbS QDs. The stability was examined in a 1,000-hour test in air under constant illumination with no encapsulation applied to the device. The device demonstrates power conversion efficiency of 3% [40]. Etgar et al. presented for the first time the use of TiO 2 nanosheets with 001 plane as the dominant exposed facet in PbS QD heterojunction solar cells, achieving a power conversion efficiency of 4.7% [33]. The better photovoltaic performance of the nanosheets compared to nanoparticles may be attributed to the higher ionic charge of the exposed (001) compared to the (101) facets, strengthening the attachment of the QDs to the TiO 2 surface. Moreover, a detailed study on the electronic properties of heterojunction solar cells was made using electrochemical impedance spectroscopy [34].
A tandem heterojunction QDs solar cell has been demonstrated recently [31,41]. The tandem solar cell was made from different sizes of PbS QDs to increase the energy harvested from the sun. In order to allow the hole and electron to recombine, a graded recombination layer was used. The open circuit voltage of the tandem solar was about 1 V, which is the sum of the two constituent single-junction devices.
Finally, multiple exciton generation (MEG) was also witnessed in a similar QD-based solar cell structure [42,43]. The MEG effect requires a photon with an energy at least twice the band gap of the QDs; This produces two or more electron-hole pairs. Therefore, it is obvious that the MEG process can enhance the photocurrent of the solar cell. The authors of these reports observed external quantum efficiency exceeding 100%. This finding opens the way for enhancing the power conversion efficiency in QD-based solar cells beyond the S-Q limit of η = 31%.
QD-Schottky Solar Cells
Schottky-based solar cells are created from the Schottky junction between a semiconductor and a metal. Solar cells of this type have a long history, dating back to 1883, when Charles Fritts coated selenium with a thin layer of gold to make one of the world's first solar cells.
How is a Schottky barrier created? When there is an interface between a metal and semiconductor, a depletion or inversion layer in the semiconductor is induced. A built-in potential, called the Schottky barrier, appears between the bulk of the semiconductor and the surface. The device architecture of a QD-Schottky barrier solar cell is shown in Figure 3A. The QDs are spin cast from solution, leading to smooth, densely packed arrays. The deposition techniques are similar to the one described for heterojunction solar cells. As shown in Figure 3B, a Schottky barrier is formed between the metal contact and the QDs film. Photogenerated holes are extracted through the transparent conducting ITO contact and a depletion region of width, W, forms near the Schottky contact. [44].
In a metal junction with semiconductor (p-type), the Voc of the cell decreases with the increased work function of the metal. However, Luther et al. [45] have found that the surface Fermi level can be pinned, so the barrier height is relatively independent of the metal.
Recent reports of QD Schottky solar cells using PbSe and PbS QDs show power conversion efficiencies (PCEs) of 1.8%-2.1% under AM1.5G illumination [46][47][48]. These results suggest that PbS and PbSe QDs films exhibit p-type semiconductor behavior after thiol treatment and form Schottky junctions on contact with metals.
QDs Schottky solar cells reach high short circuit current densities (Jsc), although in some cases, their open circuit voltage (Voc) remains low. For example, a Voc of ~0.05 V was obtained in a PbSe QD Schottky solar cell with an Au contact, due to the high work function of the Au [44]. As a result, air sensitive contacts of Ca or Mg metal coated with Al were required to increase the Voc of the Schottky junction (0.2-0.3 V of Voc) [46]. A further increase in QDs Schottky solar cell efficiency was reported recently, reaching a Voc of 0.51 V, introducing Al/LiF contact [49]. The increase in the Voc for QD Schottky solar cells, in addition to the high Jsc, puts them in a position to achieve higher efficiencies.
QD-Sensitized Solar Cells
Quantum-dot sensitized solar cells (QDSSC) are based on ensembles of nanometer size heterointerfaces between two semiconducting nanostructured materials. In this structure, QDs are attached to a wide band gap material (such as commonly used TiO 2 or ZnO) through a linker with bifunctional molecules of the form X-R-Y for linking (where X and Y are functional groups, such as carboxylic, thiol etc., and R is an alkyl group) or without a linker molecule, directly attached to the wide band gap material. Finally, a thin layer of liquid electrolyte containing a redox couple or a hole conductor (such as a hole conducting polymer) is sandwiched between this photoelectrode and a counter electrode to form the QDSSC. The device configuration depicted in Figure 4A separates the positive and negative photogenerated carriers into different regions of the solar cell using the following mechanism: after incident photons are absorbed by the QDs, photoexcited electron-hole pairs are confined within the nanocrystal. If they are not separated quickly, they will simply recombine. After the electron is injected into the metal oxide, the positively charged QD can be neutralized either by hole injection into a hole conductor or through an electrochemical reaction with a redox couple in an electrolyte. The most common deposition techniques in QDSSC are the chemical bath deposition (CBD) and successive ionic adsorption and reaction (SILAR) process where the QDs attach directly to the wide band gap material. The CBD method is one of the cheapest methods to deposit thin films and nanomaterials. The CBD technique requires solution containers and substrate mounting devices. The chemical bath deposition yields stable, adherent, uniform, robust films with good reproducibility by a relatively simple process. The growth of the thin films strongly depends on growth conditions, such as duration of deposition, composition and temperature of the solution and the topographical and chemical nature of the substrate.
The SILAR process is based on sequential reactions at the substrate surface. Each reaction is followed by rinsing, which enables a heterogeneous reaction between the solid phase and the solvated ions in the solution. Accordingly, a thin film can be grown layer-by-layer, and the thickness of the film is determined by counting the deposition reactions. Examples of materials to be used in QDSSCs are CdS and CdSe nanocrystallites. These materials have shown the possibility to inject electrons to a wider band gap material, such as TiO 2 [20][21][22]50,51], SnO 2 [52,53] and ZnO [54,55].
Various semiconductor structures were tried in QDSSCs, such as alloys of CdSeS and core shells. Tunable energy band CdSe x S (1−x) QDs were developed for QDSSCs by the SILAR technique. The results indicated that the energy band and the light absorption of CdSe x S (1−x) QDs could be controlled by the ratio of the sulfur (S) and the selenium (Se), compared with the conventional CdS/CdSe system. The alloys system shows higher light harvest ability and a broader response wavelength region expressed by its absorption spectrum and IPCE spectrum. As a result, a power conversion efficiency of 2.27% was obtained with the CdSe x S (1−x) QDSSCs under AM 1.5 illumination of 100 mW cm 2 . After being further treated with CdSe QDs, the CdSe x S (1−x) /CdSe QDSSCs yielded an energy conversion efficiency of 3.17% due to the enhanced absorption and the reduced recombination [56].
Recently, Zaban and co-authors [57] published a multilayer approach, consisting of multilayer CdSe QDs, which were assembled on a compact TiO 2 layer. They showed that the sensitization of low-surface-area TiO 2 electrodes with QD layers increases the performance of the solar cell, resulting in a 3.86% efficiency. The results showed the difference between dye-sensitized solar cells compared to QD-sensitized solar cells; when using a multilayer of dye molecules, the cell performance decreases, which is the opposite result of QD-sensitized solar cells. Further progress was achieved by Kamat et al. [58]. They demonstrate a 5.4% of power conversion efficiency by employing Mn 2+ doping of CdS in QDSSCs. QDSSCs constructed with Mn doped CdS/CdSe were deposited on mesoscopic TiO 2 film. The counter electrode in this study was Cu 2 S/graphene oxide, while the redox couple was sulfide/polysulfide. This cell showed good photostability for two hours under continuous illumination, achieving a steady photocurrent.
Organic-Inorganic Perovskite Heterojunction Solar Cells
The basic layered perovskite structures [59] are (R-NH 3 ) 2 MX 4 and (NH-R-NH )MX;(X = Cl −1 , Br −1 or I −1 ), and they are schematically depicted in Figure 5. The inorganic layers consist of sheets of corner-sharing metal halide octahedra. The M cation is generally a divalent metal that satisfies charge balancing and adopts an octahedral anion coordination. The inorganic layers are usually called perovskite sheets, because they are derived from the three dimensional AMX 3 perovskite structure, by making a one-layer-thick cut along the <100> direction of the three-dimensional crystal lattice. The structural modifications can be achieved by changing the compositions of the organic and inorganic salts in the starting solution to enable tailoring of the electronic, optical and magnetic properties.
The organic component consists of a bilayer or a monolayer of organic cations. In the case of the monolayer (monoammonium, as an example), the ammonium head of the cation bonds to the halogens in one inorganic layer, and the organic group extends into the space between the inorganic layers. For the bilayer (diammoniumcations, as an example), the molecules extend into the distance between the organic layers, which means that no van der Waals forces exist between the layers. The organic R-group most commonly consists of an alkyl chain or a single-ring aromatic group. These simple organic layers help define the degree of interaction between the inorganic layers and the properties developing in the inorganic layers. These important modifications are the result of changing the stoichiometry or composition of the organic and inorganic salts in the precursors solution used to grow the films or crystals. The layered perovskite described demonstrates that the inorganic sheets can determine the formation of single crystalline layers, which would achieve higher mobilities. The direct band gap, large absorption coefficients [61] and high carrier mobility [62,63] of organo-lead halide perovskites present good potential for their use as light harvesters in mesoscopic heterojunction solar cells. Their electronic properties can be tailored, allowing for the formation of layered materials to control the distance and the electronic coupling between the inorganic sheets, according to the structure of the organic component employed. The layered perovskites have high stability in dry air. A few reports used CH 3 NH 3 PbI 3 perovskite nanocrystals as sensitizers in photoelectrochemical cells with liquid electrolyte [23,[64][65][66]. However, the performance of these systems rapidly declined due to dissolution of the perovskite. This problem was alleviated by replacing the electrolyte with a solid state hole conductor [66]. Very recently, the tin iodide-based perovskite CsSnI 3 has been employed as a hole conductor, together with N719 as a sensitizer in solid state dye-sensitized solar cells, yielding a PCE of 8.5% [67]. Very recently, Snaith et al. [68] reported on efficient hybrid organic-inorganic solar cells, based on meso-superstructured organo halide perovskite, yielding a power conversion efficiency of 10.9%. This cell structure has few fundamental energy losses, so it can generate an open circuit voltage of more than 1V, despite the narrow energy gap (around 1.5 eV). The use of inert alumina oxide prevents the injection of electrons. As a result, the electrons are forced to reside in the perovskite and to be transported through it. In addition to this breakthrough, Etgar et al. [69] reported on the use of hole conductor free perovskite heterojunction solar cells. The authors found that the lead halide perovskite can transport holes, in addition to its functionality as an absorber, and achieved efficiency as high as 7% under low light intensity. Table 1 lists various structures of QD-based solar cells, presenting the type of QDs used and their photovoltaic parameters. Due to the many publications available in this field, the table only includes a fraction of the results to highlight the cutting edge performance of each QD-solar cell structure.
Future Perspective
Semiconductor QDs are promising alternatives to be used as light harvesters in solar cells. The properties of semiconductor QDs can be changed by tailoring their size. In addition, their band gap is tunable to different wavelengths of light, allowing them to harness energy from the visible to the infrared regions. QDs are inexpensive and easy to manufacture, making it possible to fabricate QD solar cells at low cost. There is room for major improvements in finding new semiconductors, which can be synthesized as QDs and function as light harvesters. This review shows possible architectures for QD-based solar cells, the influence of the cell structure on the cell mechanism and, hence, on the photovoltaic performance. Novel device architectures have much to offer the field, yet there is plenty of opportunity for further improvements through systematically engineering high-electron-mobility electrodes, such as nanopillars, nanowires and nanopores. The electronic interaction between QDs and electron acceptors is essential, and modifications of the photoanode surface will be required. The QD solar cells field has much to offer-devices with high performance, low fabrication cost and long-term stability can be expected in the future. | 5,514.6 | 2013-02-01T00:00:00.000 | [
"Chemistry"
] |
Nuclear pore complex plasticity during developmental process as revealed by super-resolution microscopy
Nuclear Pore Complex (NPC) is of paramount importance for cellular processes since it is the unique gateway for molecular exchange through the nucleus. Unraveling the modifications of the NPC structure in response to physiological cues, also called nuclear pore plasticity, is key to the understanding of the selectivity of this molecular machinery. As a step towards this goal, we use the optical super-resolution microscopy method called direct Stochastic Optical Reconstruction Microscopy (dSTORM), to analyze oocyte development impact on the internal structure and large-scale organization of the NPC. Staining of the FG-Nups proteins and the gp210 proteins allowed us to pinpoint a decrease of the global diameter by measuring the mean diameter of the central channel and the luminal ring of the NPC via autocorrelation image processing. Moreover, by using an angular and radial density function we show that development of the Xenopus laevis oocyte is correlated with a progressive decrease of the density of NPC and an ordering on a square lattice.
Using dSTORM super-resolution microscopy 32,33 , we showed that Xenopus Laevis oocyte development impacts on the structure and the large scale organization of the NPCs. By following some relevant parameters during developmental process, such as the internal and external nuclear pores diameter or their organization, we highlight structural modifications of the NPCs.
Results and Discussions
dSTORM imaging of the NPCs of Xenopus oocyte nuclear envelope. We have used dSTORM super-resolution imaging to gain insight for the first time into the organization and structure of the NPCs during oocytes development of X. Laevis. Xenopus nuclear envelopes were extracted and labelled at different stages of oocyte development using a protocol modified from earlier studies 34,35 in order to preserve the integrity of the membrane and to ensure adhesion of the envelope to the glass coverslip (see Materials and Methods). Membranes were labeled using either a fluorescent wheat germ agglutinin (WGA-Alexa647 or WGA-Alexa488), which has a high affinity for N-acetylglucosamine modifications of the nucleoporins present in the nuclear pore central channel 36,37 , and/or a primary antibody (AB) against the gp210 protein coupled to a secondary mouse AB with a fluorescent marker (Alexa647). Ultimate structure of the NPCs was visualized using a dSTORM microscope (Zeiss Elyra P.S.1) (Fig. 1). At least 8 samples were imaged for each condition and more than 3 images were acquired for each sample. Each reconstructed image encompassed the recording of more than 30 000 frames, with at least 1 000 000 discrete localizations per sequence. Typically, around 30 000 NPCs were visible in each image. Thus, the analysis of more than 300 000 nuclear pores for each condition enabled us to uncover very subtle changes in the measured parameters. On the reconstructed image, the center of mass for each individual nuclear pore central channel was automatically detected. The diameters of the NPC central channel and the luminal ring (Fig. 2F,G) as well as the NPC density (Fig. 2D) were calculated and near neighbor angles were measured ( Fig. 3D-F). The average image of the nuclear pore central channel was computed by combining individual reconstructed images of the pores (N = 270 000). In order to check whether the WGA labelling was not overestimating the measured densities of nuclear pores, the membrane was double stained with labeled WGA and anti-gp210 antibody (Fig. 1B). The NPCs density was similar in both cases, the percentage of pores labeled by WGA only being less than 6%.
Effect of oocyte development on the density, number and diameter of the NPCs. Despite the fact that X. Laevis oocyte is one of the favorite model for the study of NPCs (see 6 for a review), only two studies were carried out on early stage oocytes using electron microscopy 38 or Atomic Force Microscopy (AFM) 28 . The other investigations involve late stage of oogenesis, i.e. stage VI 39 , because they are easier to handle due to their large size. As shown in Fig. 2D, the early stage II exhibit the highest density of NPCs with 53.8 ± 0.9 NPC.µm −2 , while it was 46.2 ± 0.9 NPC.µm −2 at stage IV and 36.7 ± 0.8 NPC.µm −2 at stage VI (errors are experimental standard errors). The total number of NPCs per nucleus was computed from the diameter of the nuclei and nuclear pore density (Fig. 2E). Stage II oocytes had a lower number of NPCs (1.25 × 10 7 ) than stage IV oocytes (2.3 × 10 7 ) and stage VI oocytes (3.01 × 10 7 ). These results corroborate recent measurements carried out on native nuclear membranes from late stage oocytes by atomic force microscopy 28 and super-resolution microscopy 17 where the density of nuclear pores and the total number of NPCs per nucleus are in agreement with our observations. In contrast, these figures differ from the measurements obtained previously using negative staining methods and EM 38 . These discrepancies can be attributed to differences in sample preparation as discussed by Schlune et al. 28 and come from the negative staining method which tends to select nuclear envelopes fragments with the highest pore frequency.
One of the major finding of our study is the observation of a decrease of the central channel and the scaffold diameter of the NPCs. Using optical super-resolution microscopy, which is compatible with specific labelling, we show a new structural plasticity of the NPCs at the level of the FG-Nups present in the central channel and at the level of the gp210 proteins involved in the scaffold of the NPCs. The NPCs of stage II oocytes had a central channel diameter of 40.8 ± 0.8 nm, at stage IV a diameter of 38.2 ± 0.8 nm (p(II-IV) < 0.01) and a diameter of 37.4 ± 0.8 nm at stage VI (p(II-VI) < 0.01). We can notice that the size of the central channel at stage VI is similar to that reported by optical super-resolution microscopy by Löschberger et al. 17 . In our case, the majority of the central channels of the NPCs is a compact structure without hole. This observation does not depend on the reconstruction parameters or the labelling protocol of the envelope (Figs S1 and S2). A similar compact structure has also been reported by STED super-resolution microscopy by Göttfert et al. 14 . For the luminal ring, we measure a diameter of 151.8 ± 1.8 nm for stage II oocytes, 148.6 ± 2.2 nm (p(II-IV) < 0.01) for stage IV and 144.3 ± 1.5 nm (p(II-VI) < 0.01) for stage VI (errors are experimental standard errors). Precision of all measurements was assessed by bootstrapping and by comparing different rounds of experiments 40 . Since previous studies show that transcriptional activity drops very significantly from a high level at early stage to an almost undetectable level at the late stage 28,41 , we argue that the dilatation may be linked to a direct mechanical effect such as the increase of the flow of matter like RNA circulating through the NPCs. Another explanation could be an allosteric switch in the structure of the pore as already observed for some of the FG-Nups present in the nuclear pore complex 42 or other types of neuronal channels [43][44][45] . A separate study would be necessary to understand the physiological mechanism of this effect.
Effect of development stage on the lattice structure. The large-scale organization of the NPCs on the nuclear envelopes for stages II, IV and VI was determined by calculating the angle distribution between first neighbors for each NPCs and by using the angular and radial density function P(d,α). The latter describes the probability to observe on a given envelope a NPC with two neighbors at a distance d and forming an angle α. Qualitatively, we observed that the NPCs in earlier stages (II and IV) appeared to be less organized than in later stages (VI) as is it visible in Fig. 2A-C. Stage VI nuclear envelopes displayed small clusters of nuclear pores organized in square lattice with typical lateral size of 3 to 4 pores. In order to quantify this effect we measured the angle distribution function between the first two neighbors of a given NPC (Fig. 3D-F). For stage II and stage IV oocytes we can see a flat distribution which means that there are no preferential angles while at stage VI we have two distinct peaks at 90° and 180° (Fig. 3D-F). In order to investigate further this structure at all scales, different tools developed to analyze 2D and 3D crystal structure could be used such as higher order radial density function 46 or bond-orientational order parameter [47][48][49] . We chose to use the angular and radial density function P(d,α) ( Fig. 3A-C). This map can be seen as a measurement of the 2D order at all scales in the structure. A completely random structure will show a flat 2D map whereas a crystal-like structure will have a point like 2D map (Fig. S2). We observed two distinct behaviors for the early (II, IV) and late (VI) stages. In the case of stage II and stage IV we observed a map characteristic of a 2D dense amorphous structure. The maps could be divided into three regions: a low probability region corresponding to the close contact exclusion between discs of the same diameter, a high probability region corresponding to 2 neighbors in contact at a distance d and a flat region corresponding to non-overlapping NPC. In order to show that the high probability region corresponded histogram of the probability P(d,α) to observe a NPC on a given envelope with two neighbors at a distance d and forming an angle α respectively for oocytes at stage II, IV and VI. For stage VI oocytes the most probable coordinates are (135 ± 5 nm, 90 ± 6°). The frequency event is normalized by the maximum for each histogram. (D-F) First neighbor angle distribution evolution for respectively stage II, stage IV and stage VI oocytes. The red dashed line at 60° corresponds to the minimal angle possible between 3 NPCs. (G) Experimental most probable angle α * between three nuclear pore complexes as a function of the distance d between the central pore and its two neighbors (dashed line) and theoretical angle α between three nuclear pore complexes as a function of the distance d between the central pore and its two neighbors in the packing model (solid line). In this model all the nuclear pore complexes are in contact with their first neighbors but without any large-scale order.
to an amorphous organization of packed nuclear pores, we represented the most probable angle as a function of the distance between the NPC and its two neighbors. On the same axis we also represented the result of a simple packing model. In this model we assumed that the nuclear pores were closely packed. Then, the most probable angle α * formed by three NPC was determined by the contact at a distance d of the two other complexes which gives: sin α * = d NPC /d with d NPC the diameter of the complex. We observed a clear agreement between this model without any free parameter and the experimental data (Fig. 3G). This result showed that the structure at the early stages could be indeed described as a compact and amorphous structure of nuclear pore complexes without any particular order. In the case of later stage oocytes, we detected, in addition to the regions previously observed for early stages, a new localized high probability peak (d = 135 +/− 5 nm, α = 90 +/− 6°). This peak indicates the presence of neighbors positioned at the first neighbor distance and organized on a square lattice. This result showed quantitatively that the structure observed for late stage oocyte could be described as a mixture of amorphous and square lattice domains. These calculations demonstrate that the pores become organized and gradually form nano-domains with a square lattice structure on the nuclear envelope in the course of development. The first indication for a square lattice order in the NPCs organization in X. laevis oocyte has been provided by Unwin et al. 38 for stage VI. Here we directly show that this order appears actually during oocyte development. Because it is already detectable at a low NPCs density, we propose that it is mediated by the gradual production of a structural constituent of the nuclear envelope. Lamins, which interact with the nuclear basket of the pore 50-54 , could be involved in this process [55][56][57] and the molecular mechanism leading to this organization deserves further investigation.
Overall, we report for the first time that the oocyte development impacts on the nuclear organization and the structure of NPCs. Our studies could pave the way towards extensive works on how the structure of NPCs are linked with physiological activity and on the relation between the large scale organization of NPCs and the constituent of the nuclear envelope.
Materials and Methods
Living material. Xenopus laevis were purchased from "UMS 3387 -Centre de Ressources Biologiques Nuclear envelope preparation and labelling. For dSTORM imaging of the nuclear envelopes of Xenopus oocytes at different stages, we used a protocol that is modified from Peters et al. 35 , Löschberger et al. 17 and Penrad-Mobayed et al. 34 in order to preserve the integrity of the membrane and to ensure the adhesion of the envelope with the glass coverslip even for early stages oocytes. Oocytes were transferred in 3:1 medium (75 mM KCl, 25 mM NaCl, Tris-HCl, pH 7.2). The nucleus is isolated by gentle pipetting after dissecting the oocyte using two pairs of forceps (Dumont N°5). The yolk granules still adhering to the nuclear envelope is removed by gentle backs and forth movement of the nucleus in the pipette. The nucleus is then transferred into a micro-chamber, filled with the same medium. The micro-chamber consists of a well with a circular hole (2.5 mm of diameter) in a glass slide, at the bottom of which a cover slide (Zeiss type 1.5, high precision, thickness 170 ± 5 µm) is sealed with cyanoacrylate glue. To open the small nucleus at early oocyte stages (approximately 250 μm of diameter for stage II), we used insect pins of 0.1 mm of diameter (Austerlitz stainless steel insect pins 0.1 mm) instead of the Dumont forceps, largely used for IV-VI oocyte stages. Furthermore, in order to make the nuclear envelope firmly attached to the cover slide before subsequent treatments, the nucleus is pressed to the bottom of the micro-chamber using blunt-end glass capillary. After dissection, the preparation is centrifuged at 300 g for 15 min, at 4 °C. After centrifugation, the samples were fixed for 20 min with 2% paraformaldehyde in phosphate-buffered saline (PBS), twice washed with PBS, saturated with 0.5% bovine serum albumin (BSA) for 10 min, labelled with fluorescent wheat germ agglutinin (WGA-Alex647, 1 µg.ml −1 , Thermo Fisher Scientific) which is a lectin with a high affinity for N-acetylglucosamine modifications of the nucleoporins present in the nuclear pore central channel 36,37 . For the labelling of gp210, a mouse monclonal antibody against Xenopus gp210 kindly supplied by George Krohne (Wuerzburg University) have been used with the modified staining protocol of Löschberger et al. 58 , by adding a 5 min permeabilization step with 0.5% Triton-X100 between the fixation and the saturation steps. The preparations can be stored in PBS at 4 °C for 24-48 h before dSTORM imaging. dSTORM imaging. All the dSTORM imaging was performed with a Zeiss Elyra P.S.1 microscope. In order to estimate the drift sample during imaging, we used either multicolor TetraSpeck beads (100 nm diameter, TetraSpeck; Life Technologies) as fiducial markers or either cross-correlation based algorithm from Zeiss-ZEN acquisition software. The photoswitching buffer was a commercial buffer from the Abbelight Company. The preparation was covered with a cover slide, sealed with a polymerized liquid (Twinsil Silicone, Rotec) and first observed under 10x air objective to look for the nuclear envelope. Then the region to be observed with dSTORM was selected with the 100x, NA 1.46, Zeiss plan-APO objective. With laser illumination at 642 nm and an emission filter matched to A647 emission spectrum (LP 655), we acquired 30000 raw images of blinking molecules, under total internal reflection fluorescence (TIRF) microscopy mode. The camera used was an EMCCD Andor iXon 897 (pixel size 16 µm, use of an Optovar lens magnification 1.6x, so the objective 100x used is yielding a final pixel size of 100 nm). Fluorophore positions were computed using the Zeiss-Zen super-localization software (ZEN). Briefly, the determination of the x-y coordinates of the fluorophores is achieved by approximation of a two-dimensional Gaussian function to the fluorescence emission pattern of individual spatially separated fluorophores in each frame. The localization precision of the x-y coordinates is calculated at the work of Mortensen et al. 58 . We then compensated the sample drift by tracking the immobile fluorescent beads or by using cross-correlation. Only molecules localized in the range of 10-30 nm were displayed, with a fluorescent signal of more than 500 detected photons per frame. The distribution of the typical relevant parameters such as the precision of localization or the full width at half maximum of the point spread function are displayed in Fig. S3. The color intensity is directly proportional to the localization density per pixel. The dSTORM images were reconstructed with a pixel size of 10 nm. The ZEN software outputs a text file containing a list of the x and y coordinates and the precision of all detected molecules in the time-series. We used this file to further visualize and treat the data with home-made Matlab program. Image Analysis. The reconstructed super-resolution image is filtered using a hysteresis threshold. The center of mass for each individual nuclear pore central channels is detected from the regional maxima of the H-maxima transform. NPC density is computed from the number of centers of mass weighted by area occupied by the nuclear envelope on the image. The NPC central channel diameter is measured from the reconstructed image by interpolation of the full width at half maximum of the radial autocorrelation. The NPC luminal ring diameter is determined by interpolation of the third zero-value of the radial autocorrelation computed derivative. Angular and radial density function is computed from the local sampling of neighbor couples in a distance window [d − 35 nm, d + 35 nm] from the considered center of mass and then averaged on all the centers of mass.
Statistical test information.
Alpha levels used in this work are 0.01. In other words we have considered the two tailed p-value tests significant for p < 0.01. The normality of our distributions has been tested by the evaluation of the first three moments of the distributions. We have used jack-knifing methods to assess that the standard error was a good estimate of the experimental error. | 4,400.2 | 2017-11-07T00:00:00.000 | [
"Biology"
] |
Novel Agents Targeting the IGF-1R/PI3K Pathway Impair Cell Proliferation and Survival in Subsets of Medulloblastoma and Neuroblastoma
The receptor tyrosine kinase (RTK)/phosphoinositide 3-kinase (PI3K) pathway is fundamental for cancer cell proliferation and is known to be frequently altered and activated in neoplasia, including embryonal tumors. Based on the high frequency of alterations, targeting components of the PI3K signaling pathway is considered to be a promising therapeutic approach for cancer treatment. Here, we have investigated the potential of targeting the axis of the insulin-like growth factor-1 receptor (IGF-1R) and PI3K signaling in two common cancers of childhood: neuroblastoma, the most common extracranial tumor in children and medulloblastoma, the most frequent malignant childhood brain tumor. By treating neuroblastoma and medulloblastoma cells with R1507, a specific humanized monoclonal antibody against the IGF-1R, we could observe cell line-specific responses and in some cases a strong decrease in cell proliferation. In contrast, targeting the PI3K p110α with the specific inhibitor PIK75 resulted in broad anti-proliferative effects in a panel of neuro- and medulloblastoma cell lines. Additionally, sensitization to commonly used chemotherapeutic agents occurred in neuroblastoma cells upon treatment with R1507 or PIK75. Furthermore, by studying the expression and phosphorylation state of IGF-1R/PI3K downstream signaling targets we found down-regulated signaling pathway activation. In addition, apoptosis occurred in embryonal tumor cells after treatment with PIK75 or R1507. Together, our studies demonstrate the potential of targeting the IGF-1R/PI3K signaling axis in embryonal tumors. Hopefully, this knowledge will contribute to the development of urgently required new targeted therapies for embryonal tumors.
Introduction
Second to accidents, cancer is still the leading cause of death for children. Embryonal tumors represent approximately 30% of childhood malignancies and often display resistance to current therapeutic regimens. Therefore, embryonal tumors are associated with lower survival rates compared to other childhood cancers. Treatment failure for disseminated disease is frequent, and results in survival rates ,20%. Thus, novel therapeutic options are urgently needed for this group of tumors to improve survival rates and quality of life of patients. Embryonal tumors are dysontogenetic tumors whose pathological features resemble those of the developing organ or tissue of origin and include the entities medulloblastoma and neuroblastoma. Medulloblastoma is the most common malignant brain tumor in children and accounts for approximately 20% to 25% of all pediatric central nervous system tumors. Neuroblastoma is an embryonal tumor that originates from developing neural crest tissues. It is the most common extracranial solid tumor and is responsible for 15% of all cancer-related deaths in childhood. The fact that these cancers occur in infants and young children suggests that only a limited number of genetic changes may lead to tumor development, making these cancers an attractive model to identify new molecular targets. The development of novel targeted therapies is of particular importance for embryonal tumors, as these malignancies are orphan diseases. Common intracellular signaling pathways and chromosomal deletions including 1p36 and 11q loss have been previously identified in different embryonal tumors, including medulloblastoma and neuroblastoma [1][2][3][4][5][6][7][8][9][10].
Several intracellular signaling pathways have indeed been demonstrated to play a key role in embryonal tumor biology. Indeed, polypeptide growth factors such as insulin-like growth factor-1 (IGF-1), epidermal growth factor (EGF), platelet-derived growth factor (PDGF), neuregulins and neurotrophins have been shown to control embryonal tumor proliferation, survival, differentiation and metastasis [11][12][13][14][15] by binding to specific receptor tyrosine kinases (RTKs). Moreover, expression of the ErbB-2 and ErbB-4 RTKs in embryonal tumor samples was shown to correlate with reduced patient survival, while Trk receptor expression correlated with a less aggressive tumor phenotype [13]. Therefore a better understanding of the involvement of RTKs and their downstream targets in human embryonal tumor biology may yield important clues for the development of new drugs for the disease. Targeting receptor tyrosine kinases such as the IGF-1R is a promising approach to develop novel anti-cancer therapies in embryonal tumors, such as neuroblastoma and sarcoma [15][16][17][18][19][20][21][22][23]. Indeed the first results from clinical trials evaluating the safety and efficacy of IGF-1R neutralizing antibodies in children and adolescents with embryonal tumors have been reported [24,25]. In these trials, the humanized IGF-1R neutralizing antibody R1507 displayed minimal toxicities and some responses in ESFT were observed [24,25]. Importantly, no dose-limiting toxicities were identified and the maximum tolerated dose was not reached [24]. Human embryonal tumor cells have been reported to express a variety of growth factor receptors, some of which can be activated by mutations, over-expression and/or establishment of autocrine loops [13]. Amongst these polypeptide growth factor receptors are the RTKs IGF-1R, EGFR, ALK, ErbB-2, ErbB-4, c-Kit, PDGFR, Trk and fibroblast growth factor receptor (FGFR) [26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41]. Therefore, given that embryonal tumor cells express a variety of different growth factor receptors, targeting individual receptors may not provide a successful therapeutic strategy in all embryonal tumor entities. A potentially complementary approach would be to identify signaling molecules which lie downstream of several different growth factor receptors and which are essential for transmitting their proliferative and/or survival message. Combinatorial targeting of receptor tyrosine kinases (such as the IGF-1R) and their downstream signaling mediators is a very promising approach to develop more efficient anti-cancer therapies [16,17,22,[42][43][44].
The phosphoinositide 3-kinase (PI3K) plays a crucial role in controlling cell proliferation, survival and motility/metastasis downstream of many different growth factor receptors and oncogenic Ras mutants [45][46][47][48]. PI3K signaling activates a crucial intracellular signaling pathway involving phosphoinositide-dependent protein kinase-1 (PDK1), Akt, the mammalian target of rapamycin (mTOR) and the ribosomal protein S6 kinase (S6K), which controls cell growth, proliferation and survival [45][46][47]. The importance of PI3K/Akt/mTOR signaling in cancer is highlighted by the fact that mutations in the tumor suppressor gene PTEN occur frequently in human tumors, including glioblastoma [45,[49][50][51]. PTEN is a phosphatase that antagonizes the action of PI3K by de-phosphorylating the D-3 position of poly-phosphoinositides [45,49,50]. Reduced expression of PTEN resulting in activation of PI3K signaling was recently described in embryonal tumors such as medulloblastoma and neuroblastoma [52,53]. Moreover, various reports have described activating mutations in the PIK3CA gene encoding the catalytic p110a isoform of PI3K in a variety of human cancers, including, breast, colon and ovarian cancer, as well as embryonal tumors [51,54,55]. In addition, PI3K/Akt/ mTOR signaling has been demonstrated to mediate the proliferation of embryonal tumor cells [56,57] and to contribute to signaling by ErbB-2 and IGF-1R [58][59][60]. Activation of Akt was also reported in embryonal tumors, correlating with poor outcome in some entities [61]. Thus, targeting the PI3K/Akt/mTOR signaling pathway may represent an attractive novel approach to develop novel therapies for embryonal tumors [62]. Indeed, there now exist multiple pharmacological inhibitors of the PI3K/Akt/ mTOR pathway which have entered clinical trials for adult and pediatric cancer [44,46,48,[63][64][65]. The PI3K/Akt/mTOR pathway is also an important contributor to the resistance of human tumors to drugs targeting receptor tyrosine kinases [66][67][68]. Inhibitors of the PI3K/Akt/mTOR signaling pathway have also been shown to be effective in combination with IGF-1R inhibitors [20,43].
In the present report, we have evaluated the anti-proliferative potential of the humanized anti-IGF-1R antibody R1507 and of PIK75, a class I A PI3K inhibitor, in medulloblastoma and neuroblastoma cell lines. We present evidence that these agents are effective as monotherapies in subsets of embryonal tumor cell lines and can be effectively combined with standard chemotherapeutic drugs.
Cell lines, cell culture, cell proliferation and apoptosis
Human neuroblastoma cells were from the American Type Culture Collection and were kindly provided by Dr. Brodeur, Children's Hospital of Philadelphia. The cells were grown in RPMI (Life Technologies/Invitrogen) supplemented with 10% (v/ v) fetal calf serum (FCS) and penicillin/streptomycin/L-glutamine and passaged every 3-5 days by trypsinization [20,70].
The medulloblastoma cell lines' provenience has been previously described. DAOY, UW-228 and PFSK human cell lines were purchased from the American Type Culture Collection. D341 Med and D458 Med medulloblastoma cells were the kind gift of Dr. Henry Friedman (Duke University, Durham, NC) [71,72]. Cell lines that were not purchased from the American Type Culture Collection in 2009 were tested for their authentication by karyotypic analysis using molecular cytogenetic techniques, such as comparative genomic hybridization. DAOY medulloblastoma cell line was grown in Richter's MEM Zinc option medium (Invitrogen) with 10% FCS (fetal bovine serum; Sigma) and penicillin/streptomycin (Invitrogen). PFSK primitive neuroectodermal tumor (PNET) cell line [73] was grown in RPMI 1640 (Invitrogen) with 10% FCS and penicillin/streptomycin/Lglutamine. The UW-228 medulloblastoma cell line was grown in DMEM (Dulbecco's modified Eagle's medium; Invitrogen) with 10% FCS and penicillin/streptomycin/L-glutamine. D341 Med and D458 Med medulloblastoma cell lines were grown like DAOY but with the addition of 100M non-essential amino acids (GIPCO TM MEM Invitrogen). All cells were grown in a humidified atmosphere at 37u and 5% CO 2 [74,75].
For cell viability assays neuroblastoma and medulloblastoma cells were seeded in 96-well plates at a density of 3 9 000-10 9 000 cells/well and grown for 48-118 hrs in cell culture containing high (10%) serum. Cell proliferation was analyzed by the CellTiter 96HAQ ueous One Solution Cell Proliferation Assay (Promega) according to the manufacturer's instructions. For detection of apoptosis, NB and MB cells were seeded on 6-well plates and incubated for 24 h in the presence or absence of PIK75, R1507 or cisplatin. After lysis of the cells the protein samples were analyzed by SDS-PAGE and Western blot with anti-PARP and anticaspase-3 antibodies. Additionally, apoptosis was analyzed by caspase 3/7 activation using the Caspase-Glo 3/7 Assay (Promega), according to the manufacturer's instruction.
Statistical analysis
Analysis of variance was used to assess statistical significance of differences between groups. p values ,0.05 were considered as significant.
Results
Anti-proliferative activity of R1507 and PIK75 in panels of neuroblastoma and medulloblastoma cell lines We have previously described panels of neuroblastoma and medulloblastoma cell lines, which were characterized for expression of components of the IGF-1R/PI3K signaling pathway [20,70,75]. In the present study, the impact of the humanized anti-IGF-1R antibody R1507 was evaluated on cell proliferation in vitro in the panels of neuroblastoma and medulloblastoma cell lines (Fig. 1). The antibody displayed anti-proliferative activity in 2 out of 8 neuroblastoma cell lines, namely SH-SY5Y and LAN1 ( Fig. 1A and 2A). In SH-SY5Y, R1507 induced a maximal decrease in cell viability of 60% at 12.5 mg/ml (Fig. 1A). In LAN1 a maximal activity of ,25% reduction in cell proliferation was observed ( Fig. 2A). R1507 showed anti-proliferative activity in 2 out of 5 medulloblastoma cell lines, namely PFSK and D458 (Fig. 1B). In D458, a maximal decrease in cell proliferation (60%) was observed at 15 mg/ml, while in PFSK the maximal effect was 40% inhibition of the response (Fig. 1B). In NB and MB cell lines, the activity of R1507 was cell-line specific and the antibody had a profile similar to the IGF-1R tyrosine kinase inhibitor NVP-AEW541 [20]. NVP-AEW541 was more active in SH-SY5Y and LAN-1 than in other neuroblastoma cell lines [20], and PFSK were more sensitive to NVP-AEW541 than DAOY and UW228 cells. The expression levels of the IGF-1R in NB cells did not correlate with the activity of R1507, as previously observed with NVP-AEW541 [20]. Similarly, in the MB cell line panel which was previously analyzed for IGF-1R expression [75], there was apparently no correlation between receptor expression and the activity of R1507.
The impact of the class I A PI3K inhibitor PIK75 was evaluated on cell proliferation in vitro in the panels of neuroblastoma and medulloblastoma cell lines (Fig. 3). The inhibitor displayed strong anti-proliferative activity in the neuroblastoma cell line panel, with IC 50 values in the range of 50-100 nM (Fig. 3A). In medulloblastoma cell lines, PIK75 was more active in D341 and D458 cells (IC 50 = 20 nM) than in UW228 cells. The activity of PIK75 in DAOY and PFSK cells was previously described [75].
Impact of R1507 and PIK75 on intracellular signaling pathway activation
The impact of R1507 and PIK75 on the activation status of the Akt/mTOR pathway in NB cell lines was investigated by Western blot analysis (Fig. 4). R1507 strongly affected the activation status of Akt and the phosphorylation of the mTOR downstream target ribosomal S6 protein in the R1507-responsive NB and MB cell lines. (SH-SY5Y in Fig. 4A, LAN1 cells Fig. 4B, as well as in medulloblastoma PFSK cells Fig. 4D). Concentrations of 6.25-100 mg/ml R1507 reduced Akt Ser473 phosphorylation, whereas only concentrations of more than 6.25 or 12.5 were needed to reduce S6 Ser235/236 phosphorylation. In the R1507-insensitive NB cell line WAC2, the antibody did not induce a comparable response in terms of inhibition of the PI3K signaling pathway (Fig. 4C).
Also PIK75 was able to inhibit Akt/mTOR activation, as seen on the decreases in the phosphorylation of S6 protein in both NB cell lines treated (Fig. 4A+B), although only in LAN-1 cells the phosphorylation of Akt Ser473 was affected (Fig. 4B). The inhibitory effects of PIK75 in Akt/mTOR signaling in MB cell lines cells were previously described in [75] and correlate with the effects observed in neuroblastoma cells.
IGF-1R expression and activation in neuroblastoma and medulloblastoma cell lines
Further, we wanted to investigate the impact of R1507 on the expression and activation status on the IGF-1 receptor in MB and NB cell lines. Pre-treatment with R1507 inhibited the phosphorylation but not the expression level of IGF-1R in DAOY and LAN1 cells after IGF-1 stimulation (Fig. 4E+F).
Combination of R1507 with standard chemotherapeutic agents in neuroblastoma and medulloblastoma cell lines
We have previously shown that the IGF-1R tyrosine kinase inhibitor NVP-AEW541 enhances the effects of cisplatin on cell proliferation and apoptosis in neuroblastoma cell lines [20]. In support of this finding, the concomitant treatment of the R1507responsive SH-SY5Y neuroblastoma cell line with R1507 and cisplatin resulted in additive effects on cell proliferation (Fig. 5A). For neuroblastoma WAC2 cells that did not respond to R1507 in single treatment (Fig. 1A), there was no additional effect of R1507 in combination with cisplatin, doxorubicin or etoposide (Fig. S1). In medulloblastoma PFSK and UW228 cells, the combination of R1507 and cisplatin was more effective than the single agents (Fig. 6A+B). This is not surprising for PFSK, that also showed sensitivity to R1507 alone, but in UW228, cisplatin seems to confer R1507 sensibility. In R1507-insensitive DAOY cells no such effect was observed (Fig. 6C).
Combination of PIK75 with standard chemotherapeutic agents in neuroblastoma cells
In the neuroblastoma cell lines LAN1 and WAC2, the concomitant treatment with PIK75 and doxorubicin or etoposide resulted in additive effects on cell proliferation (Fig. 5B+C). At selected, relatively low concentrations, where PIK75, doxorubicin or etoposide alone reduced cell proliferation to 60-80%, combination treatments of PIK75 and one of the chemotherapeutic agents brought reductions to 40-50%. Our previous work on medulloblastoma has shown that PIK75 sensitizes medulloblastoma cell lines to doxorubicin [75].
Inhibitors of the RTK-PI3K-mTOR signaling in medulloblastoma
Beside classical chemotherapeutic agents used for cancer treatment, targeted therapies involving inhibitors of the RTK-PI3K signaling axis are considered to be a promising approach in cancer treatment. To investigate the role of receptor tyrosine kinases signaling in medulloblastoma, the effect of different targeted therapies was additionally studied. In vitro, the IGF-1R inhibitor NVP-AEW541 reduced the cell viability of the MB cell line DAOY with an IC50 of 2.5 mM and a maximal reduction of cell proliferation to 5% (Fig. 7A, 8C), whereas in UW228 cells concentrations higher than 10 mM were needed to provoke a response (Fig. 7B). Also targeting the EGFR with gefinitib or erlotinib was more effective in DAOY than in UW228 cells (Fig. 7A+B), with DAOY responding from concentrations higher than 2.5 mM. In UW228 cells erlotinib did not cause any effect, whereas gefitinib treatment reduced cell viability to 5% at the highest concentration tested (20 mM) (Fig. 7B). Rapamycin, a commonly used mTOR inhibitor, led to 50% reduction in cell proliferation in DAOY cells (0.0625 mg/ml) (Fig. 8D). UW228 cells responded to rapamycin with a maximum decrease in cell viability of 30% (2 mg/ml) (Fig. 7B). Imatinib, an inhibitor of the RTKs PDGFR and c-Kit, was able to reduce the cell proliferation in DAOY cells to 15% (Fig. 7A, 8B) and in UW228 cells to 40% (Fig. 7B). Before, it could be shown that the concomitant treatment of the IGF-1R antibody R1507 with cisplatin resulted in additive effects in R1507-responsive cell lines, or even was able to sensitize the R1507-non-responsive cell line UW228, but not DAOY (Fig. 6). Furthermore it was of interest, whether a concomitant treatment of different RTK/PI3K/mTOR inhibitors and R1507 could cause sensitization effects in non-responsive DAOY cells. Co-targeting EGFR (gefitinib) or PDGFR and c-Kit (imatinib) together with the IGF-1R (R1507) was not able to sensitize DAOY cells, and interestingly, even caused negative effects, meaning the combination treatment was less effective than the single agent (Fig. 8A+B). R1507, in combination with the IGF-1R inhibitor NVP-AEW541 or the mTOR inhibitor rapamycin could not further increase the effect of the single treatment (Fig. 8C+D).
The role of the IGF-1R/PI3K/Akt signaling axis on cell survival in neuroblastoma and medulloblastoma The impact of the IGF-1R/PI3K/Akt signaling axis on survival of NB and MB cells was investigated by treating the cells with increasing concentrations of R1507 or PIK75, and apoptosis was measured by PARP cleavage and caspase-3activation, both markers of apoptotic activity. Whereas PIK75 treatment led to enhanced PARP and pro-caspase 3 cleavage in LAN1 and SH-SY5Y cells (Fig. 9A+B) or to an increase in the caspase 3/7 activity in WAC2 NB cells (Fig. 9C), a comparable induction of apoptosis could not be observed with R1507 treatment in NB cells or by treating the MB cell line PFSK (Fig. 9A+B). Activity of R1507 and PIK75 in chemoresistant NB cell lines We next investigated whether R1507 or PIK75 had also antiproliferative effects in neuroblastoma cell lines with acquired resistance to standard chemotherapeutic agents (LAN1R). R1507 displayed no significant anti-proliferative activity in LAN1 cells with acquired resistance to doxorubicin ( Fig. 2A). In contrast, PIK75 displayed almost comparable anti-proliferative activity in either parental LAN1 or their chemoresistant counterparts (Fig. 2B). Western blot analysis of the protein expression of LAN1R cells showed that these cells express reduced levels of IGF-1R and p110a compared to the parental cell line LAN1. In addition, the phosphorylation levels of ERK1/2 and AKT at the positions Ser 473 and Thr 308 were also lower in LAN1R than in LAN1 (FIG. 2C). In order to investigate whether down-regulation of the IGF-1R could reduce the sensitivity of NB cells to R1507, observation, LAN1 cells were transiently transfected with siRNA targeting the receptor. At 96 h after transfection, a 50% reduction of IGF-1R expression was still observed (Fig. 2D). LAN1 cells transfected with IGF-1R siRNA displayed reduced proliferation, when compared to control cells transfected with non-targeting siRNA (Fig. 2D). In addition, IGF-1R silencing completely abrogated the response to R1507 (Fig. 2D), confirming that down-regulation of the receptor in the LAN1R cells contributes to the lack of effect of R1507 in these cells ( Fig. 2A).
Discussion
In the present report we have evaluated the anti-proliferative activity of the humanized anti-IGF-1R antibody R1507 in the embryonal tumors neuroblastoma and medulloblastoma in vitro. As a single agent, R1507 was effective in a subset of neuro-and medulloblastoma cell lines, while a majority of cell lines did not respond. The profile of R1507 in neuro-and medulloblastoma was similar to the IGF-1R tyrosine kinase inhibitor NVP-AEW541 in terms of the identity of the cell lines which were sensitive to the single agent [20]. The expression levels of the IGF-1R in MB and NB cells did not correlate with the activity of R1507, as previously observed with NVP-AEW541 [20]. The constitutive activation of downstream signaling pathways such as Akt and mTOR/S6K may modulate the sensitivity of the cells to R1507, as previously reported for NVP-AEW541 [20]. In neuroblastoma cell lines that were sensitive to R1507 as single agent, the effects of R1507 and chemotherapy (cisplatin, doxorubicin and etoposide) were additive, a result which was also observed with NVP-AEW541 [20]. However, neuroblastoma cells which were not sensitive to R1507, showed also no additive effects in cell growth inhibition to when combined with chemotherapies. By contrast, in medulloblastoma R1507 showed strong additive effects with cisplatin not only in MB cells which were initially sensitive to R1570 (PFSK), but also MB cells which were insensitive to R1507 as a single agent (UW228). Analysis of the mechanisms of action revealed that R1507 inhibits cell growth by attenuation of the AKT/mTOR signaling pathway in neuroblastoma and medulloblastoma cells. Similar observations were obtained by inhibition of IGF-1R with NVP-AEW541 [20]. Interestingly, the concomitant treatment with R1507 and inhibitors of the RTK/PI3K/mTOR signaling could not overcome the resistance of insensitive DAOY cells, resulted in combination with gefitinib and imatinib even in negative effects, compared to when they were used as single agent. Generally, DAOY cells responded to single treatments targeting the RTK/PI3K/mTOR signaling, such as IGF-1R (NVP-AEW541), EGFR (gefinitib, erlotinib), PDGFR and c-Kit (imatinib), and mTOR (rapamycin). Cell growth of the cell line UW228 was mostly not affected or higher concentrations were needed to induce a response by use of the same agents.
Our previous work using RNAi targeting of classI A PI3K isoforms has revealed that targeting these enzymes in neuroblastoma and medulloblastoma cell lines can induce apoptosis and decrease cell proliferation [70,75]. These results are supported by the findings presented here, which show that PIK75 displays a broad anti-proliferative activity in neuroblastoma cell lines. Also in medulloblastoma, we observed that PIK75 has anti-proliferative activity, but one cell line (UW228) was rather resistant to the drug. The exact mechanism(s) underlying this observation are at present unclear, but may be caused by an enhanced activation of Erk1/2. A decrease in activity of class I A PI3K inhibitors has been observed previously in cell lines with mutant KRAS and attributed to the enhanced activation of the Erk pathway [76]. The combination of PIK75 with chemotherapy (doxorubicin and etoposide) showed enhanced cell growth inhibition as compared with single agent treatment in neuroblastoma cell lines. Consistent with these findings, a recent report demonstrated that PI103 a dual inhibitor against p110a and mTOR strongly synergizes with various chemotherapeutics including doxorubicin, etoposide and cisplatin [77]. In medulloblastoma, our previous work has also demonstrated the anti-proliferative effects for PIK75 in combination with different chemotherapeutic agents [75].
Because neuroblastoma and medulloblastoma cells may express a variety of different growth factor receptors, we and others have postulated that targeting individual receptors may not always provide the best therapeutic option [20,70,75]. To overcome this problem, an alternative approach was proposed, which is based on targeting downstream signaling molecules that are regulated by different growth factor receptors to transmit the proliferative message. Our findings support this approach, since we observed that generally a bigger number of NB and MB cell lines most likely responded to PIK75 than to R1507. Importantly, PIK75 effectively inhibited proliferation in a chemoresistant neuroblastoma cell line in a comparable manner as in the parental cell line, demonstrating its broad anti-proliferative activity. By contrast, R1507 was ineffective in the chemoresistant neuroblastoma cells, which was most likely caused by reduced expression of the IGF-1R. The activation status of the AKT/mTOR pathway was also found to be reduced in the chemoresistant cells, pointing that this signaling pathway may not be responsible for the acquired chemoresistant phenotype of the cells. However, our previous findings in medulloblastoma cells showed elevated levels of phosphorylated Akt as a consequence of short time exposure with doxorubicin [75]. The molecular mechanisms underlying these observations are at present unclear, but may be of importance, in view of the fact that some clinical trials have been initiated with R1507 in patients previously treated with chemotherapy [78]. Figure S1 The NB cell line WAC2 is resistant to the combinatorial treatment of R1507 with chemotherapeutic agents. Neuroblastoma WAC2 cells treated with R1507 in combination with chemotherapy, incubated for 48 hours. Already shown to be insensitive to R1507 alone (Fig. 1A) | 5,511.8 | 2012-10-08T00:00:00.000 | [
"Biology",
"Medicine"
] |
DELAUNAY TRIANGULATION PARALLEL CONSTRUCTION METHOD AND ITS APPLICATION IN MAP GENERALIZATION
Delaunay triangulated irregular network (D-TIN) has been widely used in various fields and also played an increasingly important role on map generalization. But for massive data processing, current D-TIN algorithm is still not efficient enough to meet the requirements of map generalization. Data partitioning is an important step of parallel algorithm design. The load balance and efficiency of data partitioning is the precondition of improving parallel algorithm efficiency. For aggregated distributed point sets, the traditional Delaunay Triangulation parallel algorithm can’t ensure the balance and the execution’s efficiency of the partitioning result. The paper introduces a partitioning method using dynamic strips aiming to guarantee the computing load balance. We tested the speed-up of the D-TIN parallel algorithm using different type of point sets and the results of the experiments shows that the method of dynamic strips partitioning can help to get high and stable speed-up and the data distributional pattern and size has less influence to it. The paper realizes a mesh simplification algorithm based on parallel D-TIN and compares the efficiency based on parallel and serial D-TIN. * Corresponding author.
INTRODUCTION
As an important tool for geometry calculation, D-TIN (Delaunay Triangulated Irregular Network) has been widely used in various fields.D-TIN is very useful for analysing morphological structures of geometric figures because it has the properties of maximal and minimal angle, uniqueness.D-TIN has also played an increasingly important role on map generalization and gradually became an indispensable tool.D-TIN generation algorithm was first put forward by Delaunay in 1934 and has been matured within several decades of research.Tsai classified the D-TIN algorithms into three kinds, divideand-conquer methods, incremental insertion methods and triangulation growth methods.Shamos and Hoey first introduces the divide-and-conquer method, Lewis put forward the incremental insertion method and Lee, Schachter carried out improvement and perfection, Green first realized triangulation growth method.Although so many D-TIN algorithms has been developed, but for massive data processing, current D-TIN algorithm is still not efficient enough to meet the requirements of map generalization, especially for on-the-fly map generalization.Recently, high performance computing techniques has been widely used in Geo-computing.The parallelization of D-TIN algorithms began in the late 1980s, among various methods that have been proposed over the years, the problem that how to divide the computing task in parts that can be run concurrently in parallel processors is still not solved.
The present paper describes a new parallel D-TIN construction method based on dynamic strips data partitioning.This method proposed the division principle and can meet requirements of relatively balanced division results for different distributing types of points set data, including gathering point set data.Data partitioning is the most commonly used method and aims to guarantee the computing load balance, thereby it is an important premise for enhancing the performance of parallel algorithms.Data partitioning for traditional D-TIN parallel algorithms can obtain relatively balanced division results and high efficiency when it is used for the point set in uniform density.But the balance of division results can only be obtained at the expense of division efficiency under non-uniform density of point set situation.There are many map generalisation algorithms using DTIN because D-TIN is help for adjacency relation recognizing, conflict detecting and feature line extracting.The paper realizes a mesh simplification algorithm based on parallel D-TIN and compares its efficiency based on parallel and serial D-TIN.
The present study is organized as follows.Section 2 presents the dynamic strips data partitioning method for parallel D-TIN.Section 3 introduces the D-TIN parallel computing process and the D-TIN parallel algorithms speed-up.Section 4 presents computational results of mesh simplification based on parallel D-TIN and the concluding remarks of the paper are outlined in Section 5.
OVERVIEW OF D-TIN PARALLEL COMPUTING
The research of D-TIN parallel construction algorithm began in the late 1980s, after 20 years of exploration, scholars proposed many parallel methods, among the D-TIN parallel algorithms, the parallel algorithm based on data partition was the most commonly used method.With the development of computing technique, some D-TIN parallel algorithms were proposed based on different computing environment.
Davy proposed a parallel algorithm based on divide and conquer algorithm for 2D point set D-TIN (Davy, 1989), this method is a data partition strategy, the efficiency is influenced by the load balance of partition.Lee proposed a data partition divide and conquer strategy, combined with the projection method for D-TIN parallel algorithm (Lee, et al., 1997).This segmentation method can guarantee no boundaries recovery and continuous processing in the merger step.But the computing task become more complex and difficult when it deals with large quantity of data.Cignoni proposes two methods of D-TIN parallel computing on 3D space (Cignoni, et al., 1993).First algorithm is DeWall algorithm which is a divide and conquer algorithm in Ed (d> 2) space expansion.Firstly, according to a certain mode split and use the incremental insertion method to construct the border triangle, sharing boundaries subsets involved in construction Triangulation.The performance of this algorithm is limited by the data size, the cost of additional computing will increase by increasing of the data size.Other a D-TIN parallel algorithm is called the InCode algorithm, is designed on the basis of the incremental insertion algorithm, the first set of points is divided into K rectangular areas, each region carry out the D-TIN on different processors points, then merge those subnets.The performance of this method is influenced by data partition and the characteristic of data.
The requirement for different computing environment D-TIN parallel construction is different, thus needing to design different D-TIN parallel algorithm.Kohout proposed a D-TIN parallel algorithm based on the circumcircle guidelines on shared memory systems (Kobout and Kolingerová, 2003).Zhang proposed a D-TIN parallel algorithm based on divide and conquer algorithm in distributed environment (Zhang, et al.,2000).Yi proposed a D-TIN parallel algorithm based on grid environment (Yi, et al., 2001).Li used the CORBA component to implement the D-TIN parallel algorithm in the cluster environment (Li, et al., 2008).
In summary, D-TIN parallel computing methods mainly concentrated in data partitioning, in this paper we will propose a data partitioning method based on the dynamic strips.With the popularity of the multi-core processor, OpenMP(Open Multi-Processor) programming model provides a new way to improve the efficiency of D-TIN parallel construction, we design the D-TIN parallel computing based on the OpenMP.
The principle of data partitioning method oriented parallel D-TIN
In order to ensure the accuracy and efficiency of parallel D-TIN construction, some special requirements for parallel D-TIN algorithm should be considered when carrying out data partitioning .The paper concluded the data partitioning principle oriented parallel D-TIN construction summarized as: 1) Data partition region must be a convex polygon.Because the outer boundary is convex hull which is the special characteristic of D-TIN, in order to avoid the D-TIN subnet across the triangle edge, greater difficulties for the subnet merger or even can produce erroneous results, data partitioning need to ensure that every subset of the data are convex polygons surrounded and does not occur coincident.
2) The result of data partitioning must be balance.To ensure load balancing, each compute node of the computing tasks is relatively balanced, it is the important requirement for parallel algorithm design.The quantity of points in each point set directly related to the D-TIN algorithm's time complexity, therefore, the data partitioning results should ensure that the number of points contained in each region roughly to be equal.
3) Efficiency requirement.Data partitioning process is one part of the parallel algorithm, the execution time of the data partitioning included in the hole parallel execution time.In order to enable parallel D-TIN algorithm can get better acceleration, the data partitioning method should have a relatively high efficiency.Meet the above two principles, to minimize the amount of parallel D-TIN time, the time of data partitioning should be less.
The basic idea of DSP
The basic idea of the method is as follows: firstly, the minimum bounding rectangle of the point data set should be obtained, then the point set is split into m slim strips on the same direction (m∈N, larger than the number of processors for parallel computing ).Secondly, the strips are merged into a partition region from the first strip in sequence, if the total number of points in the merged strips match the condition of load balance threshold, a new strip will no longer added to the current data region but beginning with a new one.In a special condition, the number of the last strip merged to one region is too large to meet the load balance requirement, thus the boundaries of some strips should be moved.It is the rule for the boundaries moving, the distance of each movement is half of the stripe width.The iteration stopped until the number of this strip meets the requirement of load balance.In order to facilitate the description, the variables involved in DSP algorithms as defined in the following table. variables
Description of variables N
The number of points in all point clusters; K The numbers of parallel compute node (or threads); N A N A =N/K, the average numbers of points for every compute node at balance.In fact, data partitioning does not guarantee that each node in the amount of data is strictly consistent, so should set a load threshold surround Na upper and lower limits N D , N U ; N D The lower limit of the threshold, that is, a single computing node load floor; N U The upper limit of the threshold, that is, a single computing node load ceiling; N F Number of points in the merger within the region.
Table 1.Description of variables in DSP
The process of DSP
The process of dynamic strip partitioning method as follows (see Figure 1): 1) Calculation of the minimum bounding rectangle of point set, if the length of the rectangle in the X-axis direction is greater than the Y-axis direction, |X max -X min |≥|Y max -Y min |,the point set will be divided according to the X-axis direction, otherwise the delineation of the Y-axis direction.
2) According to the number of compute nodes (or threads), the first point set of minimum bounding rectangle roughly divided into m strips (m is an integer, generally set for the number of computing nodes K are integer multiples), and order of band number in accordance.
3) Successive merging began from the band of the smallest number, then the number of points in the merging region is accumulated.If N F (the number of points in the merging region) meet the load threshold range (N D <= N F <= N U ) , new strip of points will no longer be added, these points will be marked and given a new unified number and will not be merged into other strip.
4) If the i-1-th strip added points with N F within the combined region, the number of points does not meet the minimum single node load threshold N D , while point in the i-th strip to join the combined region, the N F more than a single maximum node load threshold N U , it need to move the border of i-th strip until the right amount of points allocated to the consolidated area.
Figure 1. Process of dynamic data partitioning
In order to improve the efficiency of the algorithm, the frequency of boundary moving should be reduced, we use semimobile way to fine-tuning the boundaries of merge region.If produce the situation described in 4), the lower boundary moved inward (narrow strip range) 1/2 width value of the region, then recount points in this strip.If N F can meet the balance requirement, it will be added to the combined area; otherwise, it still needs to adjust the bounder.If N F still more than the load threshold upper limit N U, continue moving boundary inward; if the area points less than the load threshold lower limit N D, moving boundary outward, the movement value is 1/2 of the width of each border.This is an iterative process of implementation, which can make N F reach the load threshold condition through a number of adjustments.This boundary finetuning rules of semi-mobile can make the combined region to minimize the number of the boundary adjustment to achieve the good effect of the Division of purpose.
There are three types of the spatial distribution of point set such as uniform distribution, random distribution and clustered distribution.The size of each region of the point set which is uniform or random distribution is almost same, ESP( Equally Strip Partitioning)method can meet the need of the load balancing.But for clustered distribution, the point group always distributed around a single core or multi-core, the data is concentrated in different particular region, the ESP method cannot meet the load balancing requirement.In order to verify the rationality and applicability of the DSP algorithms, during the experiment we select clustered distribution point set data, including single-core, multi-core clustered by artificial simulation data and thematic point sets of Nanjing city (see Figure 2).
Dynamic data partitioning
This experiment used thematic point sets of Nanjing city, which contains 500,000 points.During the experiment, we compared the result of ESP and DSP methods, in accordance with division requirements of four processing nodes.Figure 3 shows the results of the data partitioning using ESP and DSP methods, Table 2, 3, 4 shows the data size of each region after data partitioning.
It can be known from Figure 3 and Table 2, 3, 4 , the results of the ESP method shows serious data inequality, DSP method produced relative balanced quantity of data, and as the subset of distribution within the rectangular, the border of area across phenomenon can be avoided between subnets.More balanced subset of data obtained through the rough division, adjustment, and merging operations, these processes do not contain complex operations, such as global sorting, its time complexity is O(n), according with the three principles of D-TIN construction data partitioning.This part we will use the mesh simplification to simply point feature.Mesh simplification is composed of two steps, positioning and representation.The positioning step determines the number and position of the point features based on D-TIN, the representation step is used to calculate the distance for the replacement point features, then reconstruction the D-TIN (see Figure 7).The efficiency of this algorithm is affected by data characteristic and the performance of D-TIN construction, thus, in order to test the this algorithm's adaptability, we still use three types data to test its speed-up (see, Figure 9), reduced number of the points is the 30% of the original data.
Figure 9. Mesh Simplification Speed-up The figure 9 shows the speedups of the mesh simplification algorithm, obviously the speedups is increasing with the increasing the number of nodes.The performance of thematic data is better than the rests, the single-core aggregated data is senior than the multi-core aggregated.The reasons of this phenomenon are determined by D-TIN parallel efficiency and the cost of the position and rebuild the topological relation of the new D-TIN after mesh simplification on the different characteristic data.
CONCLUSION
The paper reviewed the DTIN construction algorithms and also the parallel algorithms.Based on the review, we found that data partitioning has been the main method of D-TIN parallel computing.The paper introduces a partitioning method using dynamic strips aiming to guarantee the computing load balance.We tested the speed-up of the D-TIN parallel algorithm using different type of point sets and the results of the experiments shows that the method of dynamic strips partitioning can help to get high and stable speed-up and the data distributional pattern and size has less influence to it.The paper realizes a mesh simplification algorithm based on parallel D-TIN and compares the efficiency based on parallel and serial D-TIN.
Figure 2 .
Figure 2. Example of experimental data
Figure
Figure 3.The result of data partition Method 1 2 3 4 ESP 20930 396736 61289 21045 DSP 124497 124629 125676 125198 Tab.2 Result of data partition on thematic data of Nanjing using ESP and DSP for 4 nodes Method 1 2 3 4 ESP 40930 186736 201289 71045 DSP 121497 127629 128676 122198 Tab.3 Result of data partition on single-core aggregated data using ESP and DSP for 4 nodes Method 1 2 3 4 ESP 120930 166736 91289 121045 DSP 123497 125629 127676 123198 Tab.4 Result of data partition on multi-core aggregated data using ESP and DSP for 4 nodes 4.3 D-TIN parallel computing D-TIN parallel computing includes three steps, data partitioning, parallel construction subnets and merger subnets.Serial implementation of the data division, the remaining two steps can be executed in parallel.During parallel construction subnet step, all threads simultaneously compute.During merger subnets, each of the two threads merge subnets at the same time, its time complexity is O(log N ).Figure 4. is the flow diagram of D-TIN parallel computing.Figure 5 shows the result of every step of D-TIN parallel construction using the data which is a part of the thematic point data of Nanjing city.
Figure 4. is the flow diagram of D-TIN parallel computing.Figure 5 shows the result of every step of D-TIN parallel construction using the data which is a part of the thematic point data of Nanjing city.
Figure 4 .
Figure 4.The flow diagram of D-TIN parallel computing
Figure 6 .
Figure 6.DTIN parallel algorithms Speed-upThe figure6presents the speedup of the D-TIN construction for 3 types of point set.The observed speedups, increase with the number of nodes, are almost equal between single-core and multi-core aggregated data.The speedups of the thematic data of Nanjing city are better than the other two datasets.After analyzing the spatial feature of the partitioning data, we found that some boundary lines of the strips crossing the center of the single-core and the multi-core aggregated data, therefore it would need more merger time than the thematic data of Nanjing city.
Figure 7 .
Figure 7.The flow diagram of mesh simplification5.2Experiment and analysisFigure8is an example of this approach.On the left-hand side of figure8, a dense mesh made up of large number of vertices describes the buildings of Nanjing.On the right-hand side, we used mesh simplification to generation the buildings, maintaining it outline and characteristic.
Figure 8 .
Figure 8. Example of the mesh simplification for simplifying point feature International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XXXIX-B2, 2012 XXII ISPRS Congress, 25 August -01 September 2012, Melbourne, Australia | 4,243.4 | 2012-07-25T00:00:00.000 | [
"Computer Science"
] |
Transcriptome-wide organization of subcellular microenvironments revealed by ATLAS-Seq
Abstract Subcellular organization of RNAs and proteins is critical for cell function, but we still lack global maps and conceptual frameworks for how these molecules are localized in cells and tissues. Here, we introduce ATLAS-Seq, which generates transcriptomes and proteomes from detergent-free tissue lysates fractionated across a sucrose gradient. Proteomic analysis of fractions confirmed separation of subcellular compartments. Unexpectedly, RNAs tended to co-sediment with other RNAs in similar protein complexes, cellular compartments, or with similar biological functions. With the exception of those encoding secreted proteins, most RNAs sedimented differently than their encoded protein counterparts. To identify RNA binding proteins potentially driving these patterns, we correlated their sedimentation profiles to all RNAs, confirming known interactions and predicting new associations. Hundreds of alternative RNA isoforms exhibited distinct sedimentation patterns across the gradient, despite sharing most of their coding sequence. These observations suggest that transcriptomes can be organized into networks of co-segregating mRNAs encoding functionally related proteins and provide insights into the establishment and maintenance of subcellular organization.
INTRODUCTION
Subcellular organization is critical for compartmentalization of intracellular processes and spatiotemporal control of RNA metabolism and protein translation. RNAs distribute to distinct microenvironments such as the ER (1,2), the leading edge of the cell (3), axons (4), and dendrites (5).These patterns facilitate cellular functions (6), including cell fate determination (7), directed movement (8), embryonic patterning (9), and synaptic plasticity (10). RNAs can be localized by RNA binding proteins (RBPs), via formation of ribonucleoprotein (RNP) particles or RNA transport granules that may travel on cytoskeleton (11). For example, zipcode-binding protein localizes -actin mRNA to the leading edge of fibroblasts (12), and the She2/She3/Myo4 complex localizes Ash1 mRNA to budding yeast tips (13). Cis-elements unique to each mRNA, even at the isoform level, control the repertoire of RNA binding proteins (RBPs) that they recruit. For example, constitutive or alternative 3 UTRs of mRNAs can recruit specific RBPs that influence both RNA and protein fate (14)(15)(16). Indeed, different RNPs can influence formation of granules or compartments with differing physical properties (17)(18)(19) and these properties could play a role in dictating their final destinations. One potential reason to codistribute RNAs is to facilitate efficient co-translation or co-assembly of the proteins they encode. While extensive efforts have been focused on mapping interactions between cis-elements and trans-factors (20), a major challenge remains to characterize how RNAs are distributed across different types of RNPs and whether they may be localized to distinct subcellular microenvironments.
Many techniques have been developed to study subcellular localization of RNA. In situ hybridization (21) offers high accuracy and resolution, especially with single molecule approaches, but is generally low throughput. To address this limitation, techniques such as MERFISH (22) and FISSEQ (23) have been developed to simultaneously visualize thousands of RNAs. In spite of these advances, in situ approaches do not easily reveal physical or biochemical properties of the subcellular compartments to which these RNAs localize. Without super-resolution or expansion microscopy, it can be challenging to determine whether RNAs are associated with structures such as membranes, vesicles, or the cytoskeleton. Proximity labeling techniques using BirA or APEX (24), coupled to deep sequencing, have provided alternative routes towards identifying these associations. However, it is challenging to apply these techniques to tissues in vivo, and they require exogenous introduction of fusion proteins to biotinylate specific organelles. Traditional biochemical fractionation is therefore an attractive alternative to separate RNPs with distinct biophysical properties (25). Sedimentation across density gradients have been used to stratify protein complexes across cellular com-partments (26) and analyses of sedimentation profiles reveal differences that are typically hidden from both imagebased and enrichment-based methods. Fractionation combined with sequencing has been used to analyze the transcriptome of specific cellular compartments that are purifiable (27,28), but this approach has not been used to analyze transcriptomes of many cellular compartments simultaneously with high resolution.
Here, we describe 'Assigning Transcript Locations Across Sucrose-Sequencing' (ATLAS-Seq), a detergentfree method that fractionates tissue homogenate across a continuous sucrose gradient by density ultracentrifugation, followed by RNA sequencing and mass spectrometry. We have used this approach to develop a map of the subcellular organization of the transcriptome in mouse liver and find that transcripts encoding proteins involved in similar biological processes display similar sedimentation profiles. These profiles reflect a wide array of cellular compartments and correlate with RBP sedimentation patterns, making predictions about regulatory associations. Global characterization of these profiles is a first step towards the elucidation of how RNA-protein interactions generate and maintain these subcellular compartments.
Subcellular fractionation
Wild-type FVB female mouse livers were dissected and washed in ice cold PBS. Tissue was placed in a tube containing 0.25 M buffered sucrose solution, 20mM Tris, water supplemented with protease inhibitor cocktail and 10 mM ribonucleoside-vanadyl complex (VRC) as a ribonuclease inhibitor) with 2.8 mm ceramic beads and placed in a bead homogenizer to homogenize tissue. Homogenized tissue was centrifuged at 5000 × g for 10 min to remove nuclei. A Biocomp Gradient Master™ was using to generate an 11 ml 10-50% sucrose gradient (with 10 mM VRC). Homogenate was layered onto the gradient, and components were resolved by ultracentrifugation in an SW41 rotor for 3 h at 30 000 rpm (4 • C). Twenty-four 0.5 ml fractions were collected from the gradient using the BioComp Piston Gradient Fractionator™. Fractions were split for RNA and protein extraction. RNA was extracted from each fraction by Direct-zol RNA miniprep kit. Ten equivalents of EDTA (relative to the VRC concentration) were added to each sample in Trizol-reagent before ethanol was added to remove the ribonucleoside-vanadyl complex. Protein concentrations were measured by the Pierce BCA protein assay kit.
RNA-Seq
The Kapa stranded RNA-Seq with RiboErase kit was used for prepare libraries according to manufacturer's instructions. An equal mass (500 ng) of RNA was used as input to each individual library. Libraries quality was assessed using a BioAnalyzer (Agilent, Santa Clara, CA, USA) and quantified using a Qubit (Life Technologies) prior to pooling for sequencing. Pooled libraries were 75-bp paired-end sequenced on an Illumina Next-Seq 550 v2.
Mass spectrometry
Proteins were reduced with 10 mM dithiothreitol for 1 h at 56 o C and then alkylated with 55 mM iodoacetamide for 1 h at 25 o C in the dark. Proteins were digested with modified trypsin at an enzyme/substrate ratio of 1:50 in 100 mM ammonium bicarbonate, pH 8.9 at 25 o C overnight. Trypsin activity was halted by addition of acetic acid (99.9%) to a final concentration of 5%. Peptides were desalted using C18 SpinTips (Protea, Morgantown, WV, USA) and then vacuum centrifuged. Peptide labeling with TMT 10-plex was performed per manufacturer's instructions. Lyophilized samples were dissolved in 70 l ethanol and 30 l of 500 mM triethylammonium bicarbonate, pH 8.5, and the TMT reagent was dissolved in 30 l of anhydrous acetonitrile. The solution containing peptides and TMT reagent was vortexed and incubated at room temperature for 1 h. Samples labeled with the ten different isotopic TMT reagents were combined and concentrated to completion in a vacuum centrifuge.
Read mapping, expression analysis, and isoform quantitation
Reads were aligned using Spliced Transcripts Alignment to a Reference (STAR) algorithm (29). RNA-Seq reads were quantified, pseudo-aligned to an mm10 Refseq index, and counted as transcripts per million (TPMs) using the Kallisto quantification program (30). For mitochondrial RNAs reads were pseudo-aligned to an Ensembl mm10 index and the TPM counts for annotated mitochondrialencoded RNAs from the resulting Kallisto tpm table was used to plot the distribution of mitochondrial-encoded RNAs across the ATLAS-Seq gradient ( Figure 3B). RefSeq and Ensembl TPM tables can be found in Supplementary
GO analysis
Data release from AmiGO 2 version: 2.5.12 was used to determine GO enrichments (32). Panther GO enrichment analysis (33) was used to determine GO enrichments for all analyses in the paper with one exception. P-values were determined by Fisher's exact test with Bonferroni correction for multiple testing. In Figure 4C, GOrilla (34) was used to determine cellular component GO enrichment categories for single lists. The lists for Figure 4C were ranked from highest to lowest Pearson correlation for positive association and from lowest to highest for negative correlation. GOrilla computed an uncorrected P-value according to the HG model and the FDR q-value was corrected using the Benjamini-Hochberg method.
Comparing ATLAS-Seq to ribosome profiling
Ribosome profiling was performed in mouse liver cells (35). Fastq files for ribosome profiling and RNA-Seq in mouse liver were downloaded from NCBI (GEO Accession GSE67305) and processed by Kallisto (30). Fastq files for ribosome profiling performed in HEK293T cells were downloaded from NCBI (GEO Accession GSE65778) (36). For ATLAS-Seq, x = 3 and y = 24. For polysome sequencing, x = 1 and y = 7. Essentially, TPMs were weighted by the fraction number, e.g. n i is the TPM count in the i th fraction, where i is the fraction number.
smiFISH and probes
smiFISH was performed according to (38). 3D Z-stacks were captured by epifluorescence using a Zeiss LSM880 using a 63× 1.4 NA objective and an Axiocam MRm camera. Cy3-or Cy5-conjugated Y flaps were used as secondary probe detectors for all primary probes. All probes and flaps produced and purchased from Integrated DNA Technologies (IDT) following protocols as listed in (38). All primary probe sequences are provided in Supplementary Table S4. NIH 3T3 cells were grown on chamber slides (Lab-Tek) in 10% FBS DMEM media. For smiFISH in liver, wild-type FVB mouse livers were cryosectioned into 7 uM sections and then subjected to the smiFISH protocol. DAPI staining was used to identify nuclei and all coverslips were mounted with Vectashield.
RBP analysis
RBPs were defined using publicly available datasets of previously characterized RBPS (39,40). Overlap between these datasets and our list of peptides obtained from our mass spectrometry identified 148 RBPs in our mass spectrometry dataset. The peptide profile of each RBP was correlated with the mean profile of RNAs in each ATLAS-Seq RNA cluster.
Quantification and statistical analysis
Graphs were generated using Matplotlib version 2.2.2. Statistical Analyses were performed using Python, SciPy 1.1.0 and NumPy 1.14.3 libraries. Statistical parameters, statistical tests, and statistical significance (P value) are reported in the figures and their legends. Two independent, biological replicate gradients were generated from mouse liver. Each replicate was analyzed independently, with 'gradient 2' being the replicate used for all main figures. For hierarchical clustering analysis, the SciPy.cluster.hierarchy library was used. All Correlations were calculated using NumPy corrcoef function which returns a Pearson correlation coefficient for variables. Wilcoxon rank-sum tests were used to compute statistical significance.
Detergent-free sucrose fractionation of liver lysate separates RNA signatures by their cellular microenvironments
In this study, we applied ATLAS-Seq to mouse liver. Approximately 80% of mouse liver by weight is composed of hepatocytes (41), minimizing contributions from other cell types that could confound interpretation of fractionation profiles. In addition, previous studies of liver have performed velocity sedimentation, followed by fractionation and mass spectrometry, to generate a 'fingerprint' of cofractionating proteins and protein complexes across the gradient (26). We performed similar velocity sedimentation of a detergent-free, post-nuclear liver lysate across a 10-50% sucrose gradient ( Figure 1A). Notably, although velocity sedimentation only yields modest enrichment of particular organelles at specific densities relative to density equilibrium approaches, it can be advantageous for generating unique fingerprints for a variety of RNP, RNA and membrane-associated complexes across the full spectrum of the gradient. We collected 24 fractions from homogenized supernatant and subjected 17 with sufficient protein content (fractions [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19] to mass spectrometry (Supplementary Table S1). The normalized abundance of known organelle markers including calnexin (endoplasmic reticulum, ER), clathrin (clathrin-coated vesicles), Gapdh (cytosol), Psma1 (proteasome) and catalase (peroxisome) were plotted across the gradient ( Figure 1B) and showed patterns similar to previous studies (26). Importantly, we observed that these well-established organellar markers do not always peak at a specific density, but rather peak at different gradient fractions and exhibit distinct profiles, potentially reflecting the microenvironmental preferences of each protein in the cell.
Given our ability to separate proteins according to published expectations, we subsequently performed RNA-Seq on 22 out of 24 of the fractions with sufficient RNA content (fractions 3-24). Overall, gene expression profiles of fractions with similar densities were more highly correlated than fractions with greater density differences ( Figure 1C). Two independent biological gradients were generated and analyzed, and similar profiles were observed across the transcriptome of the biological replicate gradient (Supplementary Figure S1, Supplementary Table S2). Given the high degree of concordance between transcriptome replicate gradients, we focused subsequent analyses on the gradient with a larger number of fractions, from which matched proteomic data were also generated. Unsupervised hierarchical clustering identified groups of RNAs among whose normalized expression profiles across the gradient were highly correlated ( Figure 1D). 9269 genes were assigned to 635 distinct clusters; among these, 76 clusters contained at least 20 genes (Supplementary Table S3).
These clusters were subjected to Gene Ontology (GO) analysis. Of the 76 clusters, 53 showed enrichment for particular cellular compartments (cluster identities and GO results are in Supplementary Table S3). Similar to protein organelle markers, RNA profiles with strong GO enrichment did not always show a strong peak at any particular sucrose concentration; rather, profiles commonly showed modest enrichments of up to 4-fold at their greatest point ( Figure 1D). For example, cluster 280 showed modest depletion in the center of the gradient and was highly enriched for categories including 'chylomicron and plasma lipoprotein particle'. Clusters 522, 30 and 114 showed ∼2to 3fold enrichment at successively denser locations across the gradient, and revealed slightly different GO categories related to Golgi, aminoacyl-tRNA synthetase multienzyme complex, and mitochondrial respiration, respectively. Cluster 57, ∼2-fold enriched toward the denser part of the gradient, was enriched for proteasomal and mitochondrial categories ( Figure 1E,F). Localization patterns of RNAs encoding proteasomal components have not been previously studied as a class and these results suggest that this subset of RNAs may exhibit a shared localization signature. Interestingly, the profiles of these RNAs are distinct from the peptide profile of a proteasomal marker, Psma1, as assessed by mass spectrometry. Overall, these observations show that RNAs with similar sedimentation properties often encode proteins known to co-associate or co-assemble in the cell.
Sedimentation of RNA in ATLAS-Seq is influenced by factors beyond ribosome density
Both polysome profiling and ATLAS-Seq rely on separation by ultracentrifugation through a density gradient. However, an important difference is that polysome profiling employs detergents prior to loading onto the sucrose gradient to disrupt membranes and membranous organelles, whereas ATLAS-Seq does not (Figure 2A). To assess the extent to which ribosome density might influence ATLAS-Seq profiles, we analyzed published ribosome footprint profiles (42,43) and polysome profiles (44) together with our ATLAS-Seq profiles. As a first comparison, we correlated ribosome footprint counts to polysome profile counts (both from HEK293T cells) as estimated by a weighted sum of RNA across each polysome profile peak (see Materials and Methods). We observed a strong correlation (Pearson's R = 0.823), and importantly, a single cloud of points centered along a diagonal ( Figure 2B, left panel) indicating that polysome profiling is mainly a measure of ribosome density rather than RNAs bound to larger cellular compartments or complexes as previously reported (37).
We then correlated ribosome footprint counts to ATLAS-Seq counts, also by weighted sums across each of the 22 fractions, mirroring calculations of polysome profiling counts above. A high correlation would imply that ATLAS-Seq mirrors a polysome gradient, and a low correlation would imply that ribosome occupancy cannot fully explain ATLAS-Seq profiles. We observed a weaker correlation (Pearson's R = 0.738) and also the presence of subsets of RNAs lying off the main diagonal ( Figure 2B, right panel). Upon further inspection, we noticed that clusters we previously identified by hierarchical clustering ( Figure 1D) separated away from the diagonal and were associated with specific GO categories ( Figure 1E). For example, RNAs in cluster 589, a cluster predicted to be membrane-associated due to enrichment of RNAs encoding secreted proteins and ER components, appear less dense according to ATLAS-Seq relative to ribosome footprint profiling ( Figure 2C). Overall, these observations suggest that the mild detergent-free homogenization conditions of ATLAS-Seq allow additional cellular components besides ribosomes to influence sedimentation, providing information about the cellular compartments with which RNAs are associated.
ATLAS-Seq RNA localization patterns are consistent with those identified by orthogonal methods
Although our analyses thus far suggested that ATLAS-Seq can reveal information about the subcellular location of RNAs, we sought to compare these predictions to observations made using orthogonal methods. Crosslinking of RNAs to proteins labeled by APEX has been used to capture RNAs localized to specific subcellular locations, for example the outer surface of the ER or the mitochondrial matrix (45). First, we analyzed RNAs published to be associated with the ER according to APEX-RIP. For each ATLAS-Seq cluster, we computed the fraction of RNAs determined to be ER-associated by APEX-RIP as well as the fraction of RNAs predicted to have a signal sequence according to SignalP (46). We plotted a scatter of these metrics for each cluster ( Figure 3A) and highlighted each cluster in red if it was significantly enriched for any ER-related GO categories (Supplementary Table S3) Clusters identified to be ER-associated by ATLAS-Seq showed enrichment for RNAs identified by ER APEX-RIP and RNAs with high SignalP scores. Although one ATLAS-Seq cluster did not show ER-related GO enrichment, it did show enrichment for 'plasma membrane', and proteins in the plasma membrane are typically derived from proteins synthesized, processed, and trafficked via the endomembrane system (47,48). Interestingly, although all of these clusters were enriched for ER-related GO terms, some exhibited distinct profiles that could be further stratified by specific GO subcategories. (Supplementary Figure S2). For example, cluster 598 was enriched for the ER chaperone complex, whereas cluster 280 was enriched for RNAs encoding proteins found in lipoprotein microparticles. Therefore, our approach may provide finer resolution to identify subclusters corresponding to distinct ER microenvironments.
To further assess whether our gradient could reveal colocalized RNAs, we analyzed the 13 protein-coding mR-NAs of the mitochondrial genome, which are known to reside in the mitochondria. Profiles of these RNAs highly correlated with each other and also with the mass spectrometry profile of a mitochondrial resident protein, fumarate hydratase ( Figure 3B). The high concordance of these profiles suggests that our approach preserves the association between RNAs inside the mitochondria and proteins associated with the organelle. Taken together, these analyses confirm that ATLAS-Seq yields information related to the subcellular localization of RNA species, and that profiles of RNAs with unknown localization patterns may be used to predict their local microenvironment.
We then sought to explore subcellular distributions of RNAs for which little is known. Eleven out of 19 RNAs encoding the proteasome core complex were found in ATLAS-Seq clusters 53 and 57. The localization of the proteasome itself is well studied and has been observed to play a key role in mitochondrial biogenesis (49). Interestingly, both proteasomal clusters, 53 and 57, also contained a number of nuclear-encoded mitochondrial RNAs (Supplementary Table S3).
To assess whether results from our ATLAS-Seq analysis were consistent with an imaging-based approach, we performed single-molecule inexpensive FISH (smi-FISH) on 4 proteasomal core complex RNAs (Psma1, Psmb1, Psmc5 and Adrm1), a nuclear-encoded mitochondrial RNA (Atp5b), and a signal sequence-containing RNA (Fn1). All proteasome-encoding RNAs and Atp5b exhibited similar ATLAS-Seq profiles, whereas Fn1 exhibited a highly distinct profile that was representative of RNAs encoding secreted proteins ( Figure 3C). Smi-FISH for Fn1 RNA in both liver ( Supplementary Figure S3) and adherent NIH 3T3 cells ( Figure 3D) revealed a perinuclear pattern, consistent with the presence of a signal sequence and localization to the ER. Psma1, Psmb1, Psmc5, Adrm1 and Atp5b were found throughout the cytoplasm in a pattern distinct from that of Fn1. Interestingly, in spite of highly overlapping proteasomal and mitochondrial ATLAS-Seq profiles, smi-FISH did not reveal strong spatial co-localization. This indicates that while ATLAS-Seq cannot provide information about the precise spatial location of an RNA, it may rather provide information about local microenvironments within a particular subcellular region--a property that is often difficult to discern by image-based methods.
Comparing sedimentation patterns of RNAs and the proteins they encode
A long-standing question is the extent to which RNAs co-localize with the proteins they encode. It is well established in neurons that many synaptically localized RNAs encode locally translated proteins, and therefore show colocalization (10). In contrast, in the mouse intestinal epithelium, localization of many mRNAs is distinct from their encoded proteins (50). Although ATLAS-Seq cannot truly assess co-localization of RNAs and proteins in space, it can assess the extent to which they co-sediment. We compared normalized protein profiles to normalized RNA profiles across the sucrose gradient, limiting these analyses to genes for which we had both reasonable RNA-Seq read coverage and mass spectrometry peptide counts (404 genes in total, Supplementary Table S5). As examples, we show that RNA and protein profiles for Alb (albumin) were highly concordant (Pearson's R = 0.93), whereas the RNA and protein for Psmd13, a 26S proteasome subunit protein, were anticorrelated (Pearson's R = −0.88) ( Figure 4A). We plotted a histogram of these correlations across all genes for which we could obtain reproducible RNA and protein data and observed that most genes exhibited a negative correlation; that is, RNA and protein exhibited anti-correlated sedimentation profiles ( Figure 4B). This suggests that in liver, the majority of RNAs are not localized to the same subcellular region as the steady state destination of their protein counterparts or are in a microenvironment distinct from the protein they encode. GO analysis revealed that genes with a high correlation between their RNA and protein counterparts were enriched for secretion and/or endomembranetrafficking, whereas highly anti-correlated genes were enriched for cytosolic genes ( Figure 4C, Supplementary Table S5). Indeed, the most positively correlated genes (top 20th percentile) contained signal sequences ∼38% of the time, consistent with their translation at the ER membrane, whereas the most negatively correlated genes (bottom 20th percentile) contained signal sequences only ∼14% of time (Supplementary Figure S4C). Thus, although proteins of the ER colocalize with their RNA, most RNAs and their encoded proteins did not co-sediment in this context.
Although most RNAs do not co-sediment with the proteins they encode, our previous Gene Ontology analysis of RNA clusters (Figure 1) suggested that proteins encoded by co-sedimenting RNAs act in similar biological pathways or cellular compartments. We therefore grouped proteins by their RNA cluster assignments and analyzed their sedimentation patterns. For example, RNAs in Cluster 53, enriched for the proteasome complex, showed enrichment toward the bottom of the gradient and were highly correlated with one another, showing a median correlation among all pairwise comparisons of 0.97 ( Figure 4D). Interestingly, the proteins encoded by these RNAs also tended to correlate with one another, showing a median pairwise correlation of 0.65. To assess this globally, we analyzed every cluster for which there were at least two proteins assessed by mass spectrometry and obtained the median pairwise correlation among all proteins in each cluster. These median correlation values were enriched for positive values ( Figure 4E, pink bars), and were much greater than when computed using shuffled RNA-protein assignments ( Figure 4E, gray bars). This analysis provides further evidence that the sedimentation patterns of RNAs contain information about the subcellular localization of the proteins they encode.
Alternative isoforms are differentially localized across the ATLAS-Seq gradient
We next investigated whether alternative isoforms from the same gene loci exhibit differential sedimentation patterns across the ATLAS-Seq gradient. Because untranslated regions have known roles in regulating RNA localization, we focused on alternative UTR isoforms for these studies. We considered both alternative first exons (AFE, generated by alternative promoter usage and splicing to a constitutive exon) and alternative last exons (ALE, generated by alternative splicing and/or polyadenylation). We quantitated the proportion of each isoform present in each fraction of the gradient, labeled percent spliced in, or PSI ( ) (Supplementary Table S6). After limiting analyses to isoforms for which could be confidently estimated (see Materials and Methods), we found 152 AFEs and 332 ALEs for which the maximum difference in ( ) across the gradient for any pair of isoforms was >0.5. For example, one AFE isoform for Chtop showed a of 0.96 towards the densest part of the gradient ( Figure 5A). Similarly, one ALE isoform of DNA-Caspase-9 (Casp9) showed a of 0.73 ( Figure 5B) towards the densest part of the gradient. These observations confirm distinct subcellular distributions of alternative isoforms, as revealed by sucrose density fractionation.
If the relative abundance of these alternative isoforms is important for cell function, they may contain sequences subject to positive selection through evolution. To determine whether isoforms with differential sedimentation patterns are more phylogenetically conserved, we measured their conservation using PhyloP scores. AFE isoforms showing strong ( > 0.5) and moderate (0.5 < < 0.25) differential sedimentation showed similar conservation scores but were more highly conserved than isoforms lacking differential localization ( < 0.25) ( Figure 5C). In contrast, the extent of differential localization of ALE isoforms correlated with conservation across all three groups, e.g. more strongly localized ALE isoforms were more highly conserved suggesting their enrichment for functional features.
ATLAS-Seq profiles of RBPs correlate with target mRNAs
RBPs can control the RNA localization of their targets, but few RBP-RNA pairs have been functionally validated in this context. Because each RNA interacts with many RBPs, the impact of each RBP-RNA interaction may only subtly influence final destination of that RNP. Therefore, analysis of many RNAs showing similar sedimentation patterns may be required to provide the power necessary to identify potentially weak, yet true, significant signals.
To test the hypothesis that RBPs and their RNA targets might co-sediment through the gradient, we first analyzed a known example of an RBP-RNA pair. The RNA binding protein APOBEC1 complementation factor, A1cf, Pearson correlation coefficients between RNA and protein are shown. (B) Distribution of Pearson correlation coefficients between RNAs and the proteins they encode across the ATLAS-Seq gradient for 404 genes. (C) Cellular compartment GO categories enriched in genes whose RNAs are strongly correlated with the proteins they encode. The size of each dot is determined by the number of genes (also listed next to point) found in that GO category. Fold enrichment was calculated by the observed number of genes in a GO category divided by the expected number of genes in that category (see Methods) (top panel). Cellular compartment GO categories enriched in genes whose RNAs strongly anti-correlate with the proteins they encode (bottom panel). (D) Normalized TPM (blue lines) and peptide counts (red lines) across the ATLAS-Seq gradient for genes with both ATLAS-Seq and mass spectrometry data in Cluster 53, which is enriched for proteasome genes. The median pairwise correlation among all RNAs and among all proteins in the cluster are listed. (E) Histogram of median pairwise correlations of protein profiles (red) for all clusters containing at least two proteins. Median pairwise correlations were also computed using shuffled RNA-protein assignments and plotted (gray). For reference, the median of all median pairwise RNA correlations across all RNA clusters is indicated in blue dashed line. is known to bind and edit the RNA encoding Apolipoprotein B (Apob) (51). The relative abundance of A1cf peptides and Apob RNA were strongly correlated across the gradient (Pearson's R = 0.92, Figure 6A). Given this correlation, we hypothesized that additional RNAs whose profiles strongly correlated with A1cf might also be binding partners of A1cf. We identified 894 RNAs whose profiles correlated strongly with A1cf (Pearson's R > 0.85); these RNAs encoded proteins enriched for GO Cellular Compartment categories such as ER, golgi, endosome, and vesicles ( Figure 6A). Enriched GO Biological Processes included lipid localization/transport and the Endoplasmic Reticulum-associated protein degradation (ERAD) pathway -functions known or proposed to be associated with A1cf (52). Similar results were observed in a separate replicate gradient (Supplementary Figure S5). Notably, binding motifs for A1cf identified in vitro by BindNSeq were enriched in the 3 UTRs of these 894 RNAs relative to all other RNAs in the gradient; these hexamers were also more highly conserved than other hexamers in all 3 UTRs of mouse mRNAs ( Figure 6B).
To further assess whether the abundance of specific RBPs across the gradient might be associated with the localization of their target RNAs, we identified RBPs in our mass spectrometry dataset for which functional binding data was also publicly available. We focused on hnRNP F, for which there is publicly available CLIP-Seq data from HEK293T cells (53). We correlated all RNAs in our gradient to the peptide profile for hnRNPF and separated them by Pearson's correlation coefficient. The most strongly correlating RNAs (Pearson's R > 0.85) were enriched for specific GO categories, including ER membrane ( Figure 6C, Supplementary Figure S5). We analyzed mouse orthologs of human RNAs bound by hnRNP F according to CLIP and found that more highly correlated RNAs showed a greater density of CLIP binding in 3 UTRs relative to less correlated or anti-correlated RNAs, as measured by number of binding sites per unit of gene expression ( Figure 6D).
To uncover additional RBP-RNA relationships that may drive co-sedimentation patterns, we identified 134 RBPs (see Methods) supported by mass spectrometry peptides across our ATLAS-Seq gradient. We correlated their profiles to all ATLAS-Seq RNA clusters ( Supplementary Figure S6) and found 71 RBPs whose peptide counts correlated to our previously defined RNA cluster profiles (Pearson's R>0.85). While functional connections between most of these RBP-RNA cluster pairs are unknown, some relationships observed are consistent with known functions of the RBPs. For example, heterogeneous nuclear ribonucleoprotein Q (hnRNP Q/SYNCRIP) correlated most strongly with cluster 177 (Pearson's R = 0.86), which contains RNAs encoding proteins in the pICln-Sm protein complex, the U12-type spliceosomal complex, and U2snRNPs ( Figure 6E, Supplementary Table S7). Consistent with this pairing, HnRNP Q interacts with Survival of Motor Neuron (SMN) complex (54), is a component of the spliceosome, and has been proposed to link the SMN complex to splicing functions (55). As another example, we observed that Myosin-9 (Myh9) correlates best with cluster 430 (R = 0.90), which contains RNAs encoding cilium components ( Figure 6F, Supplementary Table S7). Myh9 is an RBP that also contains a motor domain and has been shown to compete with Myh10 to inhibit cilium biogenesis (56). Taken together, these results support the ability of ATLAS-Seq to predict RBP-RNA associations and their regulatory connections.
DISCUSSION
We have used ATLAS-Seq to uncover unexpected relationships between sucrose gradient sedimentation profiles Cumulative distribution of the number of hnRNP F CLIP binding sites per unit TPM for groups of RNAs separated by Pearson's correlation to relative abundance of hnRNP F peptides. *P < 0.05, **P < 0.001 as assessed by Wilcoxon rank-sum test. (E) Relative peptide counts for heterogeneous nuclear ribonucleoprotein Q (hnRNP Q/Syncrip, red dashes) and mean TPM profile for the RNA cluster best correlating with hnRNP Q peptide counts (blue line). Shown below are GO Cellular Compartment terms enriched in that cluster. (F) Relative peptide counts for Myosin-9 (Myh9, red dashes) and mean TPM profile for the RNA cluster best correlating with Myh9 peptide counts (blue). Shown below are GO Cellular Compartment terms enriched in that cluster.
of RNAs encoding proteins involved in similar biological functions. Deep sequencing of RNA transcriptome-wide and mass spectrometry of peptides with high resolution across the gradient facilitated the discovery of these relationships and characterized the presence of cellular microenvironments to which RNAs are sorted. Surprisingly, subtle differences in profile shape can resolve differences in the composition of cellular compartments. These profiles likely reflect not only engagement with large macromolecules such as the ribosome, but also membranes and other structures with distinct physiochemical properties. We observed that these interactions were reflected in the divergence of ribosome footprint profiles from ATLAS-Seq. Future studies directly comparing polysome profiles to ATLAS-Seq or other gradients prepared by diverse detergents, cytoskeletal disruptors, or other agents might further elucidate how various interactions drive sedimentation profiles.
Distinct microenvironments in the cell arising from these interactions--the sum of weak attractive and repulsive forces between biomolecules--may create the appropriate settings for translation, sorting, decay, and other cellular processes. Although these specialized environments are sometimes membrane-bound organelles, our observations suggest they may also reflect membrane-less organelles in the cytoplasm such as the proteasome, sites of spliceosome component assembly, or even RNP granules. These RNP granules could contain single mRNAs bound to multiple RBPs, or perhaps supra-molecular assemblies in which multiple RNPs are linked via protein-protein, protein-RNA or even RNA-RNA interactions. Thus, there exists spatial organization among thousands of RNAs revealed by physical separation across a density gradient. The observations here provide a blueprint for how RNAs might map to specific subcellular microenvironments in liver cells and provide insights into higher scale organization of the transcriptome.
Interestingly, correlations of RNA to their encoded proteins revealed that most RNA-protein counterparts are not co-localized, but that some are, most notably those encoding membrane and secreted proteins. In these cases, the colocalization may reflect co-translational insertion into specific lumenal compartments. However, both RNAs with and without signal peptide sequences often co-sedimented, suggesting that there may be additional signals within RNA that influence their localization. Notably, proteins encoded by co-sedimenting RNAs also tend to co-sediment, suggesting regulatory mechanisms that bridge the subcellular localization of each molecule. This has been previously observed for specific mRNAs at the isoform level; for example, localization of some proteins has been shown to be directly influenced by 3 UTRs of their mRNAs via association with membraneless organelles such as TIGER domains (57). This is consistent with our findings that isoforms with distinct last exons and 3 UTRs showed distinct sedimentation patterns associated with increased phylogenetic conservation. Indeed, alternative 3 UTRs have been shown to localize mRNAs to neurites versus soma (15). Whether RNAs co-localize with their encoded proteins may also depend on cell type and/or cell state and remains to be further characterized.
A key goal in studies of RNA sorting and localization is to identify RNA elements and RBPs that might define subcellular localization of RNAs and locally translated proteins. Only a small fraction of putative RBPs have been functionally characterized, but co-sedimentation of RBPs and RNA targets may reveal functional interactions. In summary, high resolution subcellular fractionation on a transcriptome-wide scale can provide important insights into the regulation of higher order, subcellular compartmentalization of mRNAs by revealing groups of RNAs that co-segregate within the cell and implicating post-transcriptional processes and trans-factors associated with these microenvironments.
DATA AVAILABILITY
Raw sequencing reads for all samples are available through the NCBI via GEO Accession GSE140630. An interactive browser that allows users to explore profiles of transcripts, proteins, and ATLAS-Seq clusters can be found at http:// ericwanglab.com/atlas.php. | 8,071 | 2020-05-18T00:00:00.000 | [
"Biology"
] |
End to End Delay Improvement in Heterogeneous Multicast Network using Genetic Optimization
: Problem statement: Multicast is a concept of group communication which refers to transmitting the same data or messages from a source to multiple destinations in the network. This one-to-many group communication is a generalization of the concepts of one-to-one unicast and one-to-all broadcast. To deliver data from the sender to all receivers efficiently, routing plays an important role in multicast communication. In QoS multicast, every receiver must receive the data within their own specified QoS constraints. This becomes challenging especially if the network is a heterogeneous network made up of wired and wireless devices. Approach: This study investigates the performance of Protocol Independent Multicast-Sparse Mode (PIM-SM) protocol in a heterogeneous network running an video conferencing application and proposes an enhanced routing protocol using Genetic Optimizing techniques to improve QOS parameters in the wireless part. Results and Conclusion: Extensive simulations were carried out using the proposed technique and existing PIM-SM. The proposed optimization technique not only improves the throughput of the network but also decreased the end to end delay.
INTRODUCTION
Audio video streaming data are popularly distributed to multiple hosts using multicast routing protocols. Multicasting sends data to multiple nodes in a group unlike either a unicast wherein a data packet has to be sent repeatedly to different nodes or broadcast wherein all the nodes in the network receive the data packet (Al-Hunaity et al., 2007). Multicast groups are formed by set of nodes interested in the same data and are represented by an IP address (Ballardie, 1997). The source node sends data to multicast group using the IP address. Routers between the source and recipient nodes forward copies to recipient nodes in different paths. Multicasting is complicated due to the varying processing power and network access bandwidth of the nodes, more so in a heterogeneous network containing both wired and wireless devices.
A multicast session is established by creating a multicast tree through which the data is transmitted to the multicast group. Multicast routing algorithms are used for constructing the multicast tree. Quality of Service (QoS) requirements such as end-to-end delay, delay variation, loss, cost, throughput are to be satisfied in group applications for efficient working of the network (Chen and Nahrsted, 1998). As the multicast tress spans through the wired and wireless devices in heterogeneous networks, resources along the path may fail to guarantee the required QoS leading to failure of the multicast tree. For efficient multicast communication, it is required that the tree constructed satisfies the resource requirements (Wang and Hou, 2000). The goal of QoS is to provide certain level of predictability and control on the service. Delay, Jitter, Bandwidth and Reliability are commonly used parameters that measure the QoS.
Multicast trees are built using either the Source Based Algorithms (SBA) or Core Based Algorithms (CBA). SBA constructs tree from the source node and connects up with all the recipients whereas in CBA, a core node is selected which acts as the root for the multicast tree and a core-based tree is a shortest-path tree rooted at some core node. In SBA (Al-Talib et al., 2009), a global state is maintained by every node and is used for tree construction in multicast protocols such as Protocol Independent Mode Dense Mode (PIM-DM) and Multicast Open Shortest Past First (MOSPF). CBA is used in many-to-many transmissions and is used for tree construction in Core Based Tree (CBT) and Protocol Independent Multicast Sparse Mode (PIM-SM).
The most commonly used multicast protocol is the Protocol Independent Multicast (PIM) . PIM is a multicast routing architecture to establish trees to many sparsely represented groups. The PIM is suitable for large heterogenous networks as it is robust, flexible and scalable (Alfawaer et al., 2007). PIM is a collection of protocols optimized for different scenario. PIM Sparse mode (PIM-SM) and PIM Dense mode (PIM-DM) are the most commonly used multicast protocols . PIM-SM uses both sourcebased tress and core-based trees whereas PIM-DM uses only source-based trees. PIM-SM is widely used in all type of networks and PIM-DM is used mainly in small domains. In PIM routing protocols, PIM join and prune messages are used to join and leave a multicast distribution tree. Biswas and Izmailov (2000) proposed a PIM-SM based IP-multicast routing framework for delivering heterogeneous quality of service. Two tree construction algorithms: TIQM and NUQM were proposed. TIQM depends upon the full availability of tree-specific information about a multicast group whereas NUQM does not require any tree specific information. Pseudo-optimal QoS-constrained trees are computed using TIQM but faces control-scalability problem. NUQM overcomes the control-scalability problem by restricting the amount of information used during tree computation. A QoS-extended intra-domain PIM-SM framework was presented. Gomez-Montalvo et al. (2009) introduced an ontology framework for automatic networked multimedia systems deployment. The proposed framework takes into consideration the configuration driven by the requirements and preferences of the users. Heterogeneous multimedia environments due to various languages, protocols and hardware make it difficult for automatic deployment and interoperability of networked multimedia systems leading to major incompatibility issues. A case study was used to demonstrate the advantages of using this framework. Gupta and Srimani (2003) presented a distributed core selection and migration protocols for mobile ad hoc networks. Core selection in ad hoc network is expensive due to the dynamic topology. The proposed core location method is based on median node of the multicast tree instead of the median node of the entire network. The proposed adaptive distributed core selection and migration method uses the median of the tree as the centroid of that tree. The cost of multicast tree is computed as the sum of weights of all the links in the tree, which gives the total bandwidth required for multicasting a packet. the cost of shortest-path tree rooted at the tree median, CostTM, is compared with the cost of shortest-path tree rooted at the median of the graph, CostGM. The simulation results show that the ratio CostTM/CostGM lies between 0.8-1.2 for different multicast groups. Tseng and Chen (2001) introduced a distributed candidate's selection protocol, named DSDMR, which is self adapting based on group density. The proposed method uses an adaptive two-direction join mechanism. This address the problem of poor scalability resulting from high control overhead, un-robustness for centralized group manager and longer than necessary join latency. The proposed method performs well both in densely populated and sparsely populated networks. Extensive simulation results show that DSDMR can create low cost tree close to optimal greedy strategy with very low control overhead and join latency.
In this study it is proposed to implement Genetic Optimization techniques to improve the QOS for heterogeneous environments consisting of wired and wireless nodes.
MATERIALS AND METHODS
Gradient search techniques are intended for local search, obtaining solutions around the region of its starting point (Maalla et al., 2009). Global search techniques obtain more optimal solution, though it is dependent on an ideal set of starting values. The Genetic Algorithm (GA) is a population-based optimization algorithm, built on the basis of biological evolution. Gene sets are replicated, varied and mutated during the process of natural evolution. Similarly, mechanisms such as selection, reproduction, mutation are used in GA to evolve better solutions. The solutions to the optimization problem represented in bit-strings are the population used in GA. Fitness functions are used to evaluate each solution. The initial population is evolved through to the next generation on application of operators such as selection, reproduction, crossover and mutation. Better solutions are produced during each generation. The evolution process continues forming new generations till an appropriate solution is reached or a specific number of generations have been obtained. The pseudo-algorithm of GA is as follows:
For each n i ∈N r a routing table is created. The table contains information of the R shortest, R cheapest and R least used paths. R is a parameter of the algorithm. A chromosome is represented by a string of length |Nr| in which each element (gene) g i represents a path between source s and destination n i . A route is selected based on the following parameters.
Discard individuals:
In P, there may be duplicated chromosomes. Thus, new randomly generated individuals replace duplicated chromosomes.
Evaluate individuals:
Objective functions are used to evaluate the individuals of P. Then, non-dominated individuals of P are compared to the individuals in Pnd to update the non-dominated set, removing from Pnd dominated individuals.
Compute fitness: SPEA procedure is used to compute the fitness of each individual.
Selection:
A roulette selection operator is applied over the set P nd UP to generate the next evolutionary population P.
Crossover and mutation:
In this work a two-point crossover operator over selected pair of individuals is proposed with some genes in each chromosome of the new population being randomly changed (mutated), obtaining a new solution. The process continues until a stop criterion or the given maximum number of generations, is satisfied.
RESULTS AND DISCUSSION
The proposed system was implemented as a layer over PIM-SM and tested in a heterogeneous environment. Figure 1 shows the experimental test bed used for simulation. The throughput obtained in the proposed system for video traffic and compared with PIM-SM is shown in Fig. 2-4 show the end to end delay and the overall data dropped in the network respectively. Figure 2 shows the improvement of the throughput in the wireless section of the proposed optimization technique. It can be seen the proposed optimization technique (blue graph) increases the throughput by more than 75% for video transmission. Figure 3 shows the overall end to end delay in the network.
It can be seen that the end to end delay for the video conferencing increases, however the uniform delay between the wired and wireless nodes improves the Quality of service by reducing the jitter. The packet dropped in the wireless lan section drops considerably due to the genetic optimization as shown in Fig. 4. Fig. 4: The data dropped in bits/second CONCLUSION In this study it was proposed to investigate the effectiveness of PIM-SM in heterogeneous environment consisting of wireless and wired nodes. Video conferencing traffic was used in this study due to its critical QOS parameters. A novel optimization technique over the PIM-SM stack was proposed using genetic optimization. The proposed GO-PIM reduced the overall end to end delay in the network. While increasing the throughput of the network. | 2,415 | 2012-08-13T00:00:00.000 | [
"Computer Science"
] |
On a delay differential equation arising from a car-following model: Wavefront solutions with constant-speed and their stability
This work is concerned with the study of a scalar delay differential equation \begin{equation*} z^{\prime\prime}(t)=h^2\,V(z(t-1)-z(t))+h\,z^\prime(t) \end{equation*} motivated by a simple car-following model on an unbounded straight line. Here, the positive real $h$ denotes some parameter, and $V$ is a so-called \textit{optimal velocity function} of the traffic model involved. We analyze the existence and local stability properties of solutions $z(t)=c\,t+d$, $t\in\mathbb{R}$, with $c,d\in\mathbb{R}$. In the case $c\not=0$, such a solution of the differential equation forms a wavefront solution of the car-following model where all cars are uniformly spaced on the line and move with the same constant velocity. In particular, it is shown that all but one of these wavefront solutions are located on two branches parametrized by $h$. Furthermore, we prove that along the one branch all solutions are unstable due to the principle of linearized instability, whereas along the other branch some of the solutions may be stable. The last point is done by carrying out a center manifold reduction as the linearization does always have a zero eigenvalue. Finally, we provide some numerical examples demonstrating the obtained analytical results.
Introduction
Consider a system of countably infinite many cars moving one after another from the left to the right-hand side along a single lane road which we shall identify with the real line in the following. After fixing some origin on the road, and thus specifying the origin on the real line, we label the cars by the integers and denote the position of each car j ∈ Z at time t ∈ R relative to the origin by the coordinate x j (t) ∈ R as indicated in Figure 1. x 0 (t) 0 Key words and phrases. delay differential equation, stability, center manifold reduction, carfollowing model.
Let us assume that each driver attempts to drive with a velocity according to some optimal velocity which depends only on the headway, that is, on the distance to the car in front. Then, considering the most simple case where the optimal velocity function V : R → [0, ∞) is the same for each driver, the motion of the cars along the line is given by the system of coupled ordinary differential equations. For the optimal velocity function V we make the following standing assumptions: (OVF 1) V is non-negative and monotonically increasing. (OVF 2) V is bounded from above by some maximum velocity V max > 0 and lim s→∞ V (s) = V max .
(OVF 3) There is a safety distance d S ≥ 0 such that V (s) = 0 for all s ≤ d S and V (s) > 0 as s > d S . (OVF 4) V is C 1 -smooth, twice continuously differentiable in (d S , ∞), and there is some constant b > 0 such that V ′ is strictly increasing in (d S , b) and, on the other hand, strictly decreasing in (b, ∞). An example of such a function is given by with some fixed maximum velocity V max > 0 and safety distance d S ≥ 0, and the typical shape of V and V ′ is indicated in Figure 2. Now, suppose that, given some fixed parameter h > 0, there exists a globally defined solution z : R → R of the scalar differential equation with constant delay. Then a straightforward calculation yields that the family {x j } j∈Z of real-valued functions x j : R → R defined by (1.4) x j (t) := z − 1 h t − j , t ∈ R, satisfies Eq. (1.1). Furthermore, by claiming z ′ (t) < 0 for all t ∈ R, we obtain a solution of the traffic model (1.1) which is characterized by the property that each driver acts in the same way as the driver of the car in front, but probably some time units later. Thus, for each parameter h > 0 a strictly decreasing global solution of the delay differential equation (1.3) forms a particular wavefront solution of the proposed traffic model (1.1).
Remark 1.1. 1. Observe that a globally defined but not strictly monotonically decreasing solution z of Eq. (1.3) may generally result in a physically non-reasonable solution of the traffic model (1.1). For instance, assuming z ′ (t) > 0 for all t ∈ R, the wavefront ansatz (1.4) leads to x j+1 (t) = z − 1 h t − (j + 1) < z(− 1 h t − j) = x j (t) as t ∈ R. But in our setting the car with the number j + 1 moves in front of the car with the number j along the real line.
2. The wavefront ansatz (1.4) would also work in the situation of a negative parameter h < 0 involved in the delay differential equation (1.3). But again we would obtain a non-reasonable solution as it would result in Therefore, the cars would move from the right to left hand-side, in contrast to our assumption that the cars move from the left to the right-hand side.
The main purpose of this paper is an analytical study of the delay differential equation (1.3) for parameter h > 0 with respect to the existence and local stability of so-called quasi-stationary solutions, that is, solutions whose first derivative is constant. In the case of a negative derivative, such a solution leads to a wavefront solution with constant-speed of the traffic model (1.1) where all cars are uniformly spaced on the line and move with the same velocity for all time. In detail, using elementary arguments, we will show that all but one of these solutions are located on two branches parametrized by the involved parameter h > 0. Furthermore, we will prove that along the one branch all the wavefront solutions with constantspeed are unstable, whereas along the other branch some of those may be stable but not asymptotically stable. This will be done by applying, on the one hand, the principle of linearized instability, and on the other hand by employing a center manifold reduction as the linearization along a wavefront solution with constantspeed has always a zero eigenvalue.
The traffic model introduced above is a modification of a well-known carfollowing model describing the dynamics of N ∈ N cars moving on a circular single lane road with some fixed circumference of length L > 0. The last mentioned model was introduced by Bando et al. in [1,2], and then the original model as well as different modifications were extensively studied during the last twenty years. However, as in this work we will neither discuss the dynamical behavior of the model (1.1) in the main, nor the obtained results from the traffic flow point of view, we refrain from a deeper discussion on related car-following models and results from the traffic flow theory. In particular, such a discussion would exceed the scope of this paper. But all these issues will be addressed in a later work [4] which is in progress.
The rest of this paper is organized as follows. The next section contains some preliminaries. Here, we rewrite Eq. (1.3) in the more abstract form of a so-called retarded functional differential equation and discuss some basic facts. Section 3 deals with the existence of wavefront solutions with constant-speed, whereas in Section 4 we examine the linearization and its spectral properties along some fixed wavefront solution with constant-speed. Section 5 is devoted to the study of the local stability properties of wavefront solutions with constant-speed, and in the final section we close this work with some numerical examples.
Preliminaries
From now on, let · R 2 denote the Euclidean norm on R 2 and C the Banach space of all continuous functions ϕ : [−1, 0] → R 2 equipped with the usual norm ϕ C = sup −1≤s≤0 ϕ(s) R 2 of uniform convergence. Given t ∈ R, an interval I ⊂ R with [t − 1, t] ⊂ I, and some continuous function w : I → R 2 , let the segment w t ∈ C of w at t be defined by w t (s) = w(t + s) as −1 ≤ s ≤ 0.
Using the above notation of segments, and as the new state variable, Eq. (1.3) for the special wavefront solutions of the traffic model takes the more convenient equivalent form where the right-hand side is defined by We have f (0) = 0 ∈ R 2 due to assumption (OVF 3), and the map f is invariant with respect to translations into the direction of the constant function for all ϕ ∈ C and all k ∈ R. Further, observe that f is at least C 1 -smooth. Indeed, introducing the two evaluation operators which both are continuous and linear, and the map which, in view of assumption (OVF 4), is continuously differentiable, the map F defined by (2.3) may be written as the composition of C 1 -smooth maps. Hence, f is continuously differentiable, and thus particularly satisfies a local Lipschitz-condition at each ϕ ∈ C.
Under a solution of Eq. (2.2) we understand either a continuously differentiable function w : R → R 2 satisfying Eq. (2.2) for all t ∈ R, or a continuous function w : [t 0 − 1, t + ) → R 2 , t 0 < t + , such that w is continuously differentiable for all t 0 < t < t + and satisfies Eq. (2.2) as t 0 < t < t + . For instance, combining f (0) = 0 and the translation invariance (2.5), we immediately see that for each real d the constant function 2) has much more solutions than those of type w 0,d as we shall show next.
Recall that the map f is locally Lipschitz-continuous. Hence, following the basic existence theory for delay differential equations as, for instance, contained in Hale and Verduyn Lunel [5] or in Diekmann et al. [3], we see that for each initial function ϕ ∈ C there exist a uniquely determined constant t + (ϕ) > 0 and an in the forward time-direction non-continuable solution w ϕ : [−1, t + (ϕ)) → R 2 of Eq. (2.2) with w ϕ 0 = ϕ. So, the only point remaining to prove is that t + (ϕ) = ∞ for all ϕ ∈ C.
2. Observe that for all ψ ∈ C we have with constant b from assumption (OVF 4).
3. Now, let ϕ ∈ C be given. Set w = w ϕ : [−1, t + (ϕ)) → R 2 for the associated maximal solution of Eq. (2.2) due to the first part. Then, integrating Eq. (2.2) and using the triangle inequality along with the part above, we obtain for all 0 ≤ t < t + (ϕ) where the constant K > 0 is given by Furthermore, a straightforward argument now shows that as long as 0 ≤ t < t + (ϕ). Consequently, applying the Gronwall's inequality, we see that for all 0 ≤ t < t + (ϕ). 4. Observe that, in view of the second part of the proof, f maps bounded sets of C into relative compact sets of R 2 . Therefore, basic continuation results -see, for instance Hale and Verduyn Lunel [5, Theorem 3.2 in Chapter 2] -for delay differential equations show that each (in the forward time-direction) noncontinuable solution of Eq. (2.2) has to leave any closed bounded subset of C in finite time. Now assume that we would have t + (ϕ) < ∞. Then, using the last part, we would see that the orbit W := {w t | 0 ≤ t < t + (ϕ)} ⊂ C of solution w is bounded in C, and w would clearly not leave the closed bounded set W ⊂ C, which is a contradiction. Consequently, t + (ϕ) = ∞ and this finishes the proof.
All the solutions of Eq. (2.2) depend continuously on the initial values ϕ ∈ C: Given ϕ ∈ C, T > 0, and ε > 0 there exists a constant δ > 0 such that for all ψ ∈ C with ϕ − ψ C < δ and all 0 ≤ t ≤ T we have In particular, the segments w ϕ t , ϕ ∈ C and 0 ≤ t < ∞, induce a continuous semiflow on the state space C, namely, F : [0, ∞) × C → C with F (t, ϕ) := w ϕ t for all t ≥ 0 and ϕ ∈ C.
Existence of wavefront solutions with constant-speed
Suppose that there exists some c > 0 with (3.1) h V (c) = c.
Then, for each d ∈ R, forms a solution of Eq. (2.2). Indeed, we have With respect to the traffic model described by Eq. (1.1), such a solution w c,d of Eq. (1.3) leads to for all t ∈ R and all j ∈ Z, and thus to the dynamical behavior where all cars are uniformly spaced on the road with distance c to the car in front and moving with the same constant velocity c/h. For that reason, we should refer to a solutions of type w c,d with d ∈ R and c > 0 satisfying Eq. Observe that in our discussion above we regarded the parameter h > 0 as fixed and looked for an appropriate choice of c > 0 such that Eq. (3.1) is satisfied, in order to obtain a wavefront solution with constant-speed. On the other hand, given any c > 0 with V (c) = 0, the parameter value h := c/V (c) > 0 trivially fulfills condition (3.1). In this way, we find a "branch" c → h(c) = c/V (c) of wavefront solutions with constant-speed. However, for the study carried out in this work it is more convenient to parametrize the wavefront solutions with constant-speed by the parameter h > 0 involved in the right-hand side of Eq. (1.3). Therefore, our next goal is to analyze the existence of wavefront solutions with constant-speed in dependence of parameter h > 0. Our first result in this direction proves that for each h > 0 the number of such solutions c > 0 is bounded from above. To be more precisely, the following holds. Proof. Given h > 0, consider the function g : [0, ∞) ∋ c → hV (c) − c ∈ R. By assumptions, g is continuously differentiable and g(0) = 0. Moreover, c = 0 is clearly the only zero of g in the interval [0, d S ], which collapse to the one-point set {0} in the case d S = 0. However, in order to see the claim, it obviously suffices to prove that g has at most two zeros in (d S , ∞). We do that for the two cases d S = 0 and d S > 0 separately.
1. Case d S = 0. In this situation, condition (OVF 4) implies that g ′ (c) = hV ′ (c) − 1 has at most two zeros in (0, ∞). Indeed, V ′ is strictly increasing in (0, b) and strictly decreasing in (b, ∞) such that, for fixed h > 0, the equation V ′ (c) = 1/h clearly has at most two solutions c > 0. Applying Rolle's theorem, we conclude that g has at most three zeros in [0, ∞). As g(0) = 0 this proves the assertion in case d S = 0. 2 Further, by assumption (OVF 4), we see (similarly to the case above) that g ′ has at most two zeros in (d S , ∞). Hence, all in all, g ′ clearly has at most two zero in (0, ∞). Using Rolle's theorem, we conclude that there are at most three different zeros of g in [0, ∞). In view of g(0) = 0, this finishes the proof.
But not for each parameter h > 0 there is some real c > 0 such that Eq. (3.1) is satisfied as we show next.
Recall that by assumption (OVF 4) we have V ′ (b) = sup c≥0 V ′ (c). Consequently, the last result implies that for each parameter h > 0 with V ′ (b) < 1/h Eq. (2.2) does not have any wavefront solutions with constant-speed.
Provided Eq. (2.2) has a wavefront solution w c,0 with constant-speed and some additional conditions are satisfied, our next result ensures the existence of another wavefront solution wĉ ,0 ,ĉ = c, with constant-speed.
Proof. 1. Consider the continuously differentiable functions g 1 and g 2 from the proof of the last statement. Under given conditions of assertion (i), we have This proves assertion (i).
2. We consider again the continuously differentiable functions g 1 and g 2 . By assumptions of assertion (ii), Hence, due to the intermediate value theorem there is some 0 < c 1 < c 2 with (g 1 − g 2 )(c 1 ) = 0, and this shows the second part of the proposition.
As an immediate consequence of the last result we get the following corollary.
We use the criterion of the last result to show that there is at most one pair Proof. 1. Contrary to our claim, suppose that there are two pairs (c 1 , h 1 ), (c 2 , h 2 ), (c 1 , h 1 ) = (c 2 , h 2 ), of positive reals such that we have As in the situation c 1 = c 2 it clearly follows that h 1 = h 2 and thus (h 1 , c 1 ) = (h 2 , c 2 ), it suffices to consider only the case c 1 = c 2 . Moreover, in view of the assumption on V , we may assume where the constant b is defined in assumption (OVF 4). In order to see this claim, consider the function g : Clearly, g is continuously differentiable and its derivative is given by Further, a simple calculation involving assumption (3.4) shows that g(c 1 ) = 0 = g(c 2 ). Therefore, Rolle's theorem implies the existence of some real c 1 < ξ < c 2 with 0 = g ′ (ξ) = ξ V ′′ (ξ). As ξ > d S ≥ 0 it follows first that V ′′ (ξ) = 0 and then, in consideration of condition (OVF 4), ξ = b as claimed.
3. By the last part, we have g(c 1 ) = 0 and d S < c 1 < b. By applying the mean value theorem, we find some 0 < θ < c 1 with and strictly increasing in (d s , b). Thus, we finally get Proof. 1. First observe that it suffices to prove the existence of such a pair (c ⋆ , h ⋆ ) of reals since the uniqueness part of the assertion would immediately follow from Proposition 3.7. In order to see the existence, fix constant d S < c 1 < b and set h := c 1 /V (c 1 ) > 0. Clearly, the realsh and c 1 satisfy the constant-speed condition given by Eq. (3.1). Moreover, we claim that V ′ (c 1 ) > 1/h such that Proposition 3.5 implies the existence of another constant c 2 > c 1 withh V (c 2 ) = c 2 . In order to see the claim, first recall from the assumptions (OVF 1) -(OVF 4) that V ′ is positive and strongly monotonically increasing on (d s , b). Now, write V (c 1 ) as 2. Next, note that we have V ′ (c 2 ) ≤ 1/h. Indeed, otherwise an application of Proposition 3.5 would show the existence of some further constant c 3 > c 2 with h V ′ (c 3 ) = c 3 , in contradiction to Proposition 3.2.
In the case V ′ (c 2 ) = 1/h, the assertion obviously follows with reals h ⋆ =h and c ⋆ = c 2 . Therefore, assume that V ′ (c 2 ) < 1/h in the following, and let function g : (d s , ∞) → (0, ∞) be given by Clearly, g is continuous, and using the first part, we get It follows that there exists where h ⋆ <ĥ ≤ ∞. Both, c 1 and c 2 , are continuously differentiable with where h V ′ (c 1 (h)) > 1 along the first branch and, conversely, along the second branch.
Proof. 1. To begin with, observe that in the case of 0 ≤ c ≤ d S we have V (c) = 0 and thus in this situation there clearly is no real h > 0 such that Eq. (3.1) holds. Therefore, it is sufficient to study only the case c > d S below. 2 In order to see this, it suffices to consider b ≤ c < c ⋆ as in the situation d S < c < b the assertion immediately follows from the proof of Proposition 3.8. In particular, there is a Then, in consideration of c = c ⋆ and of Proposition 3.8, it follows that h V ′ (c) < 1. Moreover, for the continuous function g used in the proof of Proposition 3.8 we have g(c) < 1 < g(c). For this reason, the intermediate value theorem implies the existence of somec < c 0 < c < c ⋆ with that is, H 1 is strictly decreasing. Thus, after settingĥ := lim cցd S H 1 (c), we find a continuously differentiable map c 1 : 3. Fix some h ⋆ < h 1 <ĥ. Then the last part implies that (c 1 (h 1 ), h 1 ) satisfies Eq. (3.1) and h 1 V ′ (c 1 (h 1 )) > 1. Furthermore, by Proposition 3.5 there is an additional constantc 1 > c 1 (h 1 ) such that h 1 V (c 1 ) =c 1 holds. We claim that c 1 > c ⋆ and h 1 V ′ (c 1 ) < 1. To see this, first observe that from the last part it follows easily thatc 1 ≥ c ⋆ . Next, we surely have h 1 V ′ (c 1 ) ≤ 1 since otherwise Proposition 3.5 would imply the existence of a third constant c 3 >c 1 with h 1 V (c 3 ) = c 3 , in contradiction to Proposition 3.2. But if h 1 V ′ (c 1 ) ≤ 1 then, in consideration of h 1 = h ⋆ and Proposition 3.8, necessarily h 1 V ′ (c 1 ) < 1 holds. As finally the assumptionc 1 = c ⋆ also results in a contradiction, namely, we see thatc 1 > c ⋆ and h 1 V ′ (c 1 ) < 1 as claimed. 4. Definition of c 2 . Consider any c ⋆ < c < ∞, and set again h := c/V (c) > 0. We assert that we have h V ′ (c) < 1. In order to show this, recall from the last part that in the case c =c 1 we clearly have h V ′ (c) < 1. Otherwise, c =c 1 and we assume, contrary to the assertion, that h V ′ (c) ≥ 1. Then either h V ′ (c) = 1 or h V ′ (c) > 1. However, both situations lead to a contradiction. First observe that due to Proposition 3.8 the case h V ′ (c) = 1 results in c = c ⋆ , and thus indeed in a contradiction to c > c ⋆ . Next, consider the situation h V ′ (c) > 1. For the map g from the proof of Proposition 3.8 we get for all c ⋆ < c < ∞. Thus, H 2 is strictly increasing, and we have which closes the proof. But, however, first you have to find some realsc,h > 0 satisfyingh V (c) =c andh V ′ (c) = 1 simultaneously, and that should not make the proof substantially shorter than the one discussed above.
The mapf has, of course, the same smoothness and compactness properties as f . In particular, for each ϕ ∈ C, Eq. (4.1) has a uniquely determined solution v ϕ : [−1, ∞) → R 2 with v ϕ 0 = ϕ. The segments v ϕ t , ϕ ∈ C and t ≥ 0, form a continuous semiflowF : [0, ∞) × C → C withF (t, ϕ) := v ϕ t . This semiflowF is obviously closely related to the semiflow F induced by the solution of Eq. (2.2). Indeed, we haveF The solution w c,d of Eq. (2.2) is now represented by the zero solution of Eq. (4.1), and it is unstable / stable / (locally) asymptotically stable if and only if the zero solution of Eq. (4.1), that is, the stationary point ϕ 0 := 0 ∈ C of the semiflowF , is unstable / stable / (locally) asymptotically stable. Now, we linearize Eq. (2.2) along the wavefront solution w c,d , or equivalently, we linearize Eq. (4.1) along the zero solution. For this reason, we calculate the derivative of the map f defining the right-hand side of Eq. (2.2). At each ϕ ∈ C it forms a bounded linear operator Df (ϕ) ∈ L(C, R 2 ) whose action to some ψ ∈ C is given by Hence, along the zero solution of Eq. (4.1) the associated linear delay equation reads Of course, Eq. (4.2) is equivalent to the "linearization" of the scalar delay differential equation ( involving a uniquely determined normalized function ζ : [0, 1] → R 2×2 of bounded variation. In this context, the normalization conditions means that ζ should satisfy ζ(0) = 0 and be continuous from the right on (0, 1), that is, ζ(τ ) = ζ(τ +) for all 0 < τ < 1. A straightforward calculation shows that For each ϕ ∈ C, Eq. (4.2) has a uniquely determined solution y ϕ : [−1, ∞) → R 2 . The equations T (t) ϕ = y ϕ t , with ϕ ∈ C and t ≥ 0 define a strongly continuous semigroup T = {T (t)} t≥0 of bounded linear operators T (t) : C → C. The infinitesimal generator G : D(G) → C is given by The associated characteristic matrix reads such that for the characteristic equation of Eq. (4.2) we get The (countably infinitely many) solutions of this algebraic equation coincide with the spectrum σ(G) ⊂ C of G. The last consists only of eigenvalues with finite rank, that is, the associated generalized eigenspaces are finite dimensional, and for each real β > 0 the spectral subset {λ ∈ σ(G) | Re (λ) > β} is either empty or finite. Writing σ u (G), σ c (G), and σ s (G) for the spectral subsets of σ(G) consisting of eigenvalues with negative, zero, and positive real parts, respectively, we get the splitting of σ(G). Furthermore, let C u , C c and C s denote the associated realified generalized eigenspaces, which are called the unstable, the center and the stable space of G.
Then the Banach space C decomposes to with the two C u , C c obviously finite and the one C s in general infinite dimensional subspaces. Now, it is apparent that for any h > 0 and any V ′ (c) > 0, we always have λ 0 = 0 ∈ σ(G); that is, the linearization along any wavefront solution with constantspeed has a zero eigenvalue. In the case h V ′ (c) = 1, the eigenvalue λ 0 = 0 is clearly simple, whereas in the case h V ′ (c) = 1 it has the algebraic multiplicity two or three. The permanent occurrence of the zero eigenvalue is caused by the translation symmetry of f . To be more precisely, the direction of the translation invariance of f is always an eigendirection of the zero eigenvalue as we shall see next. Proof. Of course,ê 1 is C 1 -smooth and (ê 1 ) ′ = 0 ∈ C. Further, an easy computation shows that Lê 1 = 0 ∈ R 2 . Hence, we clearly haveê 1 ∈ D(G). Moreover, as Gê 1 = (ê 1 ) ′ = 0 = λ 0ê1 it follows thatê 1 ∈ N and so Rê 1 ⊆ N as claimed.
Observe that, apart from λ 0 = 0, all other elements of σ(G) may in general not be calculated explicitly. However, we are only interested in the location of the eigenvalues of G, and thus of the roots of Eq. (4.3), in the complex plane relative to the imaginary axis. For this reason, consider the function where α, β ∈ R. By identifying α = −h and β = h 2 V ′ (c), the function χ clearly coincides with the left-hand side of Eq. (4.3). Therefore, we may use the following result about the number of unstable characteristic roots of χ for parameter values α < 0 < β, in order to analyze the stability of wavefront solutions with constantspeed.
(i) If (α, β) ∈ S then, apart from the simple root λ 0 = 0, all other roots λ ∈ C of χ defined by Eq. Proof. Introducing D : we see that χ(λ) = λ · D(λ, −α, −β) and so . Therefore, it suffices to determine, in dependence on α and β, the location of the roots λ of D in the complex plane. But such an analysis can be found, for instance, in Insperger and Stépán [6,Chapter 2.1.2], and this completes the proof.
Local stability analysis of wavefront solutions with constant-speed
With the preparatory work of the last section, we are now in the position to analyze the local stability properties of the wavefront solutions w c,d with constantspeed of Eq. (2.2). But before doing so, recall that the zero solution v 0 : R ∋ t → 0 ∈ R 2 of Eq. (4.1) is called stable if and only if for each δ > 0 there is some constant ε > 0 such that for each ϕ ∈ C with ϕ C < δ we have v ϕ t C < ε for all t ≥ 0. Otherwise, we call the zero solution v 0 of Eq. (4.1) unstable. If v 0 is stable and, additionally, we find some constant ε a > 0 such that for all ϕ ∈ C with ϕ C < ε a we have then the zero solution v 0 of Eq. (4.1) is called locally asymptotically stable.
In terms of a wavefront solution w c,d with constant-speed of Eq. (2.2), the above definitions mean the following: The solution w c,d is stable whenever for every ε > 0 there exists some ε > 0 such that w c,d 0 − ϕ C < δ for some ϕ ∈ C guarantees that w c,d t − w ϕ t C < ε for all t ≥ 0. Otherwise, the solution w c,d is unstable, that is, there exists some ε > 0 such that any neighborhood of w c,d 0 in C contains an initial value ϕ ∈ C with w c,d (t) − w ϕ (t) R 2 > ǫ for some t ≥ 0. Finally, w c,d is locally asymptotically stable when it is stable and, in addition, there is some ε > 0 with the property that w c,d 0 − ϕ C < ε a for any ϕ ∈ C guarantees We begin our (local) stability analysis of the wavefront solutions with constantspeed with a result which is hardly surprising in consideration of the translation invariance (2.5) of f . Then, by the translation invariance of f , w = w c,d + v εa trivially forms a solution of Eq. (2.2) and so v εa a solution of Eq. (4.1). Indeed, for all t ∈ R we have (v εa ) ′ (t) = 0 and But v εa t clearly does not converge to 0 ∈ C as t → ∞. This proves the assertion.
But it is even more sobering when we assume that the wavefront solutions w c,d with constant-speed lies on the second branch from Theorem 3.9: Theorem 5.2. Given V satisfying the standing hypotheses (OVF 1) -(OVF 4), let w c 2 (h),d with h ⋆ < h < ∞ and d ∈ R denote a wavefront solution with constantspeed of Eq. (2.2) belonging to the second branch from Theorem 3.9. Then w c 2 (h),d is unstable.
Proof. Under given assumptions, h V (c 2 (h)) = c 2 (h) and h V ′ (c 2 (h)) < 1 due to Theorem 3.9. Multiplying the last inequality with h, we see h 2 V ′ (c 2 (h)) < h. Hence, for β = h 2 V ′ (c 2 (h)) and α = −h it trivially follows β < −α, and therefore (α, β) ∈ ((−∞, 0] × [0, ∞))\S with the region S from Proposition 4.2. Consequently, by assertion (iii) of Proposition 4.2, there is some λ ∈ C with and Re(λ) > 0. Therefore, the so-called principle of linearized instability, compare, for instance, the first part of Theorem 6. The question about the stability of wavefront solutions with constant-speed of Eq. (2.2) which lie on the first branch of Theorem 3.9 is more sophisticated. Note that for such a solution w c 1 (h),d , h * < h <ĥ and d ∈ R, necessarily hV ′ (c 1 (h)) > 1 holds. Hence, for the corresponding parameter pair α = −h and β = h 2 V ′ (c 1 (h)) we have β + α > 0 such that each of the three cases (α, β) ∈ S, (α, β) ∈ C 1 , and (α, β) ∈ ((−∞, 0] × [0, ∞)) \ S from Proposition 4.2 may occur. In the last case the linearization has an eigenvalue with positive real part and therefore, similarly to the proof of our last result, the principle of linearized instability shows that w c 1 (h),d is unstable. In the other two cases, the linearization does not have any eigenvalues with positive real part but at least one eigenvalue on the imaginary axis. Consequently, in these situations the solution w c 1 (h),d may be stable or, more exactly, it has the same local stability properties as the zero solution of the ordinary differential equation obtained from a so-called center manifold reduction. Below we address this issue partially by carrying out a center manifold reduction for the case where the eigenvalue λ 0 = 0 is simple and the only one in σ c (G). But let us first introduce local center manifolds of Eq. (4.1) at the stationary solution v(t) = 0, t ∈ R, in general.
To begin with, write Eq. (4.1) in the form with separated linear part L = Df (w c,d t ) = Df (0) ∈ L(C, R 2 ) and the nonlinear part As, regardless of the particular wavefront solution w c,d with constant-speed, we always have λ 0 = 0 ∈ σ c (G), it follows that the center space C c ⊂ C is not the zero space but has at least dimension one. Therefore, the center manifold theory for delay differential equations as, for instance, may be found in Diekmann et al. [3, Chapter IX], shows the existence of a non-trivial, so-called local center manifold W c ⊂ C of Eq.
of the so-called reduction map w c has the following properties: (ii) W c is positively invariant with respect to the semiflowF ; that is, if ϕ ∈ W c and t > 0 such thatF (s, ϕ) ∈ C c,0 ⊕ C su,0 for all 0 ≤ s ≤ t, then (iii) W c contains the segments of all solutions of Eq. (5.1) which are defined on R and have all their segments in C c,0 ⊕ C su,0 . In general, such a local center manifold of a differential equation is not unique. Moreover, in the most cases it is rarely possible to represent the reduction map in a completely explicit way. However, as we show next, in the case of Eq. (4.1) the last point is easily done, provided the linearization of the underlying wavefront solution with constant-speed has only the zero eigenvalue on the imaginary axis and its algebraic multiplicity is one. Proof. 1. Under given assumptions, λ 0 = 0 is clearly a simple eigenvalue and the associated one-dimensional eigenspace coincides with the center space C c . Consequently, from Proposition 4.1 it trivially follows that C c = Rê 1 , and this shows the first part of the assertion.
2. For the proof of the second part of the assertion, let arbitrary ψ ∈ C c,0 be given. Then, by the last part, there is some k ∈ R with ψ = kê 1 . Now, observe that the function is a global solution of Eq. (5.1) and it has the segments v t = kê 1 = ψ as t ∈ R.
In particular, v t ∈ C c,0 ⊕ C su,0 for all t ∈ R. Hence, property (iii) of the associated local center manifold W c implies that for each t ∈ R we have v t ∈ W c , that is, v t = P c v t + w c (P c v t ) where P c denotes the continuous projection P c of C along C su onto the center space C c . All in all, it follows that and so we conclude first that P c v t = kê 1 = ψ and then w c (ψ) = w c (kê 1 ) = w c (P c v t ) = 0, which finishes the proof.
So, under the conditions of the last result, a local center manifold W c of Eq. (5.1) at the stationary solution v(t) = 0, t ∈ R, just coincides with the neighborhood C c,0 of the origin in the center space C c . Furthermore, the dynamics induced by Eq. (5.1) on W c is the most simplest one: Under the assumption of Proposition 5.3, the reduction of Eq. (5.1) to a local center manifold W c is given by the scalar ordinary differential equation p ′ (t) = 0.
Proof. By the center manifold theory, as, for instance, contained in Diekmann et al. [3], and the last proposition, in the situation considered here the center manifold reduction reads where Q c : C c,0 → R denotes the composition Q c = γ • r of a linear operator γ : R 2 → R and the nonlinearity r of Eq. (5.1). Now, note that for all p ∈ R we have r(pê 1 ) = 0. Hence, it follows that Q c (p(t)ê 1 ) = 0 and thus p ′ (t) = 0 as claimed.
With the statement above we are now in the position to determine the local stability properties of almost all wavefront solutions with constant-speed of Eq. (2.2) along the first branch from Theorem 3.9. Then ,d is clearly unstable as discussed after Theorem 5.2 and its proof. Now, assume (α, β) ∈ S and then recall from Proposition 4.2 that in this case λ 0 = 0 is a simple eigenvalue of the linearization whereas for all other λ ∈ σ(G) we have Re(λ) < 0. In particular, there is no unstable direction. Therefore, the local center manifold W c is attractive as, for instance, discussed in Section IX.8 of Diekmann et al. [3]. Consequently, stability assertions for the zero solution of the center manifold reduction carry over to stability assertions for the stationary solution v(t) = 0, t ∈ R, of Eq. (5.1), or equivalently, of Eq. (4.1). Now, by the last proposition the center manifold reduction is given by the ordinary differential equation p ′ (t) = 0, and here the zero solution is clearly stable. This proves the stability of the zero solution of Eq. (4.1), and thus of solution w c 1 (h),d of Eq. (2.2).
Remark 5.6. Observe that the last proposition contains no statement about the stability properties of solution w c 1 (h),d with (−h, h 2 V ′ (c 1 (h))) ∈ ∂S. In this case, the associated linearization does not have any eigenvalues with positive real part but, in addition to the simple eigenvalue λ 0 = 0, a pair ±iω, ω > 0, of simple pure imaginary eigenvalues due to Proposition 4.2. As we will discuss in the next section, it seems that here Eq. (2.2) undergoes a degenerate Hopf bifurcation.
Numerical examples and discussion
After all the analytical work in the last sections, in the following we consider some numerical examples demonstrating our results. In doing so, we will also briefly address some aspects arising from our simulations. For the numerical calculations we use the solver routine dde23 of the computing environment MAT-LAB [7] with the relative error tolerance of 10 −9 and the absolute error tolerance of 10 −12 . The optimal velocity function considered throughout this section is the example V = V q defined by Eq. (1.2) with some maximum velocity V max > 0 and safety distance d S = 0. Numerically, the solution z seems to be stable. For instance, setting c * e = c e − 0.005 and starting with the initial function [−1, 0] ∋ s → (−c * e s, −c * e ) T ∈ R 2 results in the figure below which indicates that the computed solution z * does not only remain in a small neighborhood of z but actually is attracted by z. Indeed, a calculation of the associated stability parameters from Proposition 4.2 results in α = α e := −0.2 and β = β e := h 2 e V ′ (c e ) ≈ 0.39899. Hence, we see at once that w ce,0 , and so solution z of Eq. (1.3), is located on the first branch c 1 from Theorem 3.9, and that, in view of (α e , β e ) ∈ S, it is locally stable due to Theorem 5.5.
satisfying condition (3.1) for the existence of a wavefront solution with constantspeed. Observe that, in consideration of our analysis in Section 3, the associated wavefront solution w ce,0 with constant-speed, and so solution z = −c e t, t ∈ R, of Eq. (1.3), necessarily belongs to the second branch c 2 from Theorem 3.9 and thus is unstable due to Theorem 5.2. Of course, the instability of z is also apparent in numerical simulations. The figure below shows the computed solution for the initial value w ce,0 0 which theoretically should lead to the solution z for all time under consideration. It seems, also supported by the scale of the axes, that at first the numerically computed solution coincides with z for a (long) while as expected. However, finally we end up with something else. And the reason here is the interplay between the instability of the solution z and the rounding in the floating point arithmetic. To be more precisely, at some time, the rounding in the floating point arithmetic first leads to the fact that the computed solution leaves the quasi-stationary case and "jumps" to some other orbit of Eq. (1.3) in the immediate vicinity of the orbit of z. Then, the instability of the quasi-stationary solution z "forces" the numerical computed solution to leave all sufficiently small neighborhoods of z. At the end, the simulation shown in Figure 6 is irrelevant with respect to the car-following model given by Eq. (1.1) as the computed solution is clearly not strictly decreasing at all (but strictly increasing on some interval with length greater than 2h). β e := h 2 e V (c e ) ≈ 2.8245, and it is easily seen that the solution under consideration belongs to the first branch c 1 from Theorem 3.9, and that, in view (α e , β e ) ∈ S, it is unstable due to Theorem 5.5. We choose w ce,0 0 as initial function and compute the solution z numerically. The result of this computation is shown in Figure 7, and, apparently, it is completely different in nature as in the example before. At the initial stage of the simulation the computed solution seems, similarly to the last example, to coincides with z. But then, caused by the rounding in the floating point arithmetic and the instability of z, the computed solution leaves the quasi-stationary state, and its first derivative begins to oscillate about the value −c e with decreasing minimal and increasing maximal value. After reaching some thresholds for the minimal and maximal value, the oscillation becomes completely regular such that, in the final stage of the simulation, the computed solution is uniform, and its first derivative not only uniform but periodic. Compare here also Figure 8 showing the final stage of the numerical computation. 3), but, of course, not subject to the initial value w ce,0 0 . A rough explanation for what seemingly happens here was indicated in Remark 5.6. But let us specify it more precisely by the following conjecture which shall be addressed analytically in [8]. Returning to our example under consideration, we note that the branch c 1 = c 1 (h) of wavefront solutions w c 1 (h),0 with constant-speed is at least defined for all h ⋆ < h ≤ h e with h ⋆ = 2/V max ≈ 0.8127. Next, after fixing some h f > h ⋆ with h f − h ⋆ > 0 sufficiently small, a simple argument shows that the associated stability parameters α = α := −h f and β = β f := h 2 f V ′ (c 1 (h f )) of solution w c 1 (h f ),0 form a point inside the region S from Proposition 4.2. Thus, w c 1 (h f ),0 is stable due to Theorem 5.5. Now, let us increase the parameter value h continuously from h f to h e . At first, all the solutions w c 1 (h),0 remain stable as the associated parameters (α(h), β(h)) := (−h, h 2 V ′ (c 1 (h)))) are contained inside S. On the other hand, the curve C : [h f , h e ] ∋ h → (α(h), β(h)) has to leave and stay outside of S for all sufficiently large h ≤ h e , since we have (α(h e ), β(h e )) = (α e , β e ) and do already know that (α e , β e ) ∈ ((−∞, 0] × [0, ∞))\S. Therefore, the curve C has to cross the boundary ∂S of S at some point h f < h H < h e . Moreover, it is easily seen that the curve C has to do so by crossing the curve C 1 from Proposition 4.2 which particularly shows that by increasing h about the value h H a pair of simple complex conjugate eigenvalues of the linearization moves from the left to the right half-plane of C. For that reason, by increasing h from h f to h e we loose the stability of the associated solution w c 1 (h),0 at the value h = h H . But, as conjectured, that is done by undergoing a supercritical Hopf bifurcation such that, for each parameter h > h H with h − h H > 0 sufficiently small, we find a locally stable solution w h H of Eq. (2.2) which is not periodic but its first derivative. With the above in mind, let us briefly revisit the numerical simulation in this example. As already said, the solution z is unstable. On the other hand, it seems that the bifurcating branch of solutions w h H is even defined for the parameter value h = h e . In fact, most likely, Figure 8 namely shows, the solution w he H which is locally stable. So, after having left the quasi-stationary state of z due to the rounding in the floating point arithmetic and the instability, the computed solution seems first to be attracted by w he H , and then, after sufficiently long time, to coincide, more or less, with w he H as indicated in Figure 7. Finally, observe that the example discussed here is also significant for the traffic model described by Eq. (1.1) as it suggests the existence of wavefront solutions with stop-and-go behavior. Indeed, a bifurcating solution of Eq. (2.2) from Conjecture 6.1 leads to a solution of the traffic model where each driver accelerates and brakes alternately. | 11,503.8 | 2016-09-22T00:00:00.000 | [
"Mathematics"
] |
Modeling and analysis of three-degree of freedom regenerative chatter in the cylindrical lathe turning
Regenerative chatter is major issue in turning operations, which lowers the machining efficiency and parts quality. It is usually caused by the intense self-excited vibration between the cutting tool and the work piece. In this paper, we intend to visualize the tool-work system vibration during the three degrees of freedom cylindrical turning on CA6140 lathe machine. This was achieved by building up a dynamical model for the process with taking into consideration the dynamical properties of the cutting tool. Thus, the theoretical analysis of the natural frequencies, vibration mode and transient response on tool-work system vibration during this process was simulated using Matlab/Simulink. The results showed a variation in the first-order natural frequency vibration of the cutting tool. However, during the transit stage, vibration was gradually decreased and became stable. Finally, the proposed model was verified, and the results were found to be consistent with previous research work. This model provides new technique in evaluation and understanding the cutting tool vibration of three degrees of freedom cylindrical turning.
Introduction
As science and technology develops, researchers start paying attention to the precision and efficiency of the machinery.In the case of machining processes, this mean to satisfy the requirements for high performance process as well as obtaining good quality machined parts.During lathe turning, the vibration would inevitably occur and affect the process accuracy and efficiency [1,2].This is generally caused by the intense self-excited vibration between the cutter and the workpiece, which is often called chatter.In other words, chatter has been the main obstacle in improving the capability of lathe machine for producing high quality products.Hence, studying and analyzing flutter during turning process is very important [3,4].
Since 1906, researches have studied turning flutter mechanism, and thus, have come up with different theories regarding this phenomenon.More generally accepted theories include regenerative theory, mode shape coupling mechanism, negative friction and the principle of cutting force hysteresis.Among all these, the regenerative chatter theory is the most widely accepted and applied.It undertakes systematic analysis and demonstration of the process from the point of view of exciting force and amplitude.Therefore, it can reasonably explain the turning flutter under the single degree of freedom (SDOF) [5,6].However, the SDOF is only effective on the turning flutter of certain buckling modes because it has some shortcomings that have not been addressed.The major issue is hard to determine the main flutter direction of the cutter, because SDOF is not suitable to explain flutter of non-free turning operation.Hence, SDOF is not appropriate to be applied for solving actual turning flutter control [7].
This paper based on the previous studies and focuses on the regenerative chatter of external lathe work cutter-workpiece system of the CA6140 lathe.Considering the dynamic characteristics of the tool, this study is associated with building up a kinetic model for Cylindrical Turning Flutter of 3-DOF model.The theoretical analyses implemented is this study, include the frequency and the principal mode of the model.Furthermore, it undertakes a simulation study by the means of the transient vibration response of the cutter (at the time the tool touches the workpiece until stable machining is established) using Matlab/Simulink.
Materials and methods
The regenerative chatter during Cylindrical Turning Processing is complicated.There is no model that could completely simulate the actual situation without any deviations.The kinetic model developed in this paper defines the practical problem and proposed the solution.However, it is a hypothetical model of simplified key elements, as well as the input and the output elements.
Analysis of the turning process chatter
Before building up a kinetic model for Cylindrical Turning, we should analyze the active body chatter during the process.The workpiece-cutter system with regenerative chatter should have two requirements.Firstly, a disturbed system generates dynamic cutting force.Secondly, the system has to gain energy to maintain the chatter, thus, the replenished energy is of the dynamic cutting force [8].When the cutter-workpiece system in the state of regenerative chatter the processing conditions, and dynamic characteristics of cutter and workpiece determine its occurrence.In other words, under certain machining conditions, the inherent frequency, dynamic cutting force and dynamic stiffness define the active body vibration.Therefore, during the actual lathe operation, the analysis of the active vibration body depends on the above mentioned parameters [9,10].
Three-degrees of freedom dynamic model
In order to simplify the analysis, it has been assumed that the turned cylindrical workpiece is a rigid body and the cutter is considered as the active body vibration.The cyclical change of the dynamic cutting force caused by the cutter-workpiece system generates chatter at axial, radial and tangential directions.According to the equivalence, simplicity and successive approximation of the kinetic model, the cutter could be seen from the perspective of three degrees of freedom elastically guided and damped system.Therefore, it can be simplified in the three axials, radial and tangential directions as shown in Fig. 1.Where ( ) is the feed force, ( ) is the back force, ( ) is the main cutting force and ( ) is the resultant force acting on the cutting tool.For easy determination of the kinetic model of this process, the tool-workpiece system in Fig. 1 is replaced by the inertia element Fig. 2.This describes the regenerative chatter mechanism of the process under three-degree of freedom.Where the three-dimensional model of the tool-workpiece system is represented by two plane coordinate systems.However, the mechanical 3 1 model still has three degrees of freedom, where the inertia element is the mass , the elastic elements are , and , and the damping elements are , and .
Formulation of the dynamic model
The actual turning process of the cylindrical workpiece is proposed to be oblique cutting as illustrated in Fig. 3.At each revolution of the workpiece, the cutting tool moves from position I to position II and removes layers from the workpiece in the form of chip [11].Where the removed layer cross-section area in the datum is called the cutting area, and is denoted by the shaded region Fig. 3.Moreover, during cutting action, the cutting tool is subjected to a resultant cutting force ( ) that has three perpendicular components.This helps in the formulation of the three-degree of freedom dynamic model that will help in the development of the required regression model for solving the problem.Therefore, the cutting area can be calculated as follows: where: is the cutting area, is uncut chip thickness, is an uncut chip width: where: is major cutting edge angle; ( ) is the total cutting thickness of the ( ) tool along the radial direction of the workpiece cutting thickness.
According to the empirical formula of cutting force, resultant cutting force ( ) can be expressed as follows: where is cutting coefficient, which depends on the material and size of the workpiece, cutting tool material and geometry, chip thickness and cutting speed [12].In this paper, the selected tool geometry as recommended by the supplier is tabulated in Table 1.Combining Eqs. ( 1), (3) into Eq.( 4) to give the resultant cutting force acting on the tool.Thus, Eq. ( 4) can be expressed as follows: The cutting force is resolved into three mutually perpendicular components; axial or feed force ( ), thurst force ( ), and main cutting force ( ) respectively.According to actual machining test: = 45°, = 15°, = 0.The following are the approximated relationship between resultant force and its three components: where: , and are the excitation cutting forces in the , and direction respectively.Hence, by substituting Eq. ( 5) into Eq.( 6) the cutting force components can be expressed as follows: = 0.30 ( ) + ( ) + ( ) , = 0.40 ( ) + ( ) + ( ) , = 0.87 ( ) + ( ) + ( ) . (7)
Mathematical model
The regression model development was achieved by using the kinetic model of the system Eq.( 7) and the principle of Newton's second law.Accordingly, the equations of motion for the vibrated system were expressed as follows: where , , are the masses, and , , are the damping coefficients and , , are the structure stiffness of the machine tool in the three respective directions.By making use of Eqs. ( 7) and ( 8), we can obtain the complete mathematical model of the three-degree of freedom vibrated system.Thus, the model is expressed by the following matrix: The value of (0) which is the average cutting depth can be found form the equation motion ( ) = (0) + sin( + ) ( − ), where = 52 rad/s, = 0.005 m, = 4 ⁄ , (0) = 0.003 m.
The numerical solution for the Mathematical model can be obtained by substituting the parameters of Table 2 into Eq.(9).
The natural frequency of vibrated system
The next step in this analysis is to solve the system of differential equations for the vibrated motion.This enables the determination of the system frequency for un-damped n-degree of freedom vibration.In this case the general matrix equation is given as follows: where is the mass matrix, is the stiffness matrix, is the generalized coordinate vector.It is assumed that the system is under harmonic vibration.Therefore, the general equation of motion can be put as follows: where is arbitrary constant, is the natural frequency of harmonic vibration and ∅ is the initial phase angle.By substituting Eq. ( 11) in Eq. ( 10) we can get the following expression: In order to generate a solution for , make its coefficient determinant equal to zero: Eq. ( 13), is called the natural frequency equation or characteristic equation which satisfy the conditions of the natural frequency .After the determinant Δ( ) of the characteristic equation is expanded, an ( th) order polynomial of is obtained.For the positive definite system, we need to obtain the positive real roots where = 1, 2,..., and ( ) is called natural frequencies of the system.In most cases, the natural frequencies are not equal and can be arranged from small to large, in ascending order; 0 ≤ ≤ ≤ ⋯ ≤ By making use of the input parameters of Table 2 in Eq. ( 13) and the MATLAB Software, the natural frequencies of the system were calculated and listed below.
According to the above result, the main mode of the vibrated system was represented by Fig. 4. It obvious that the chatter due variation for the three natural frequencies takes place at three directions (axial, radial and tangential) on the cutting tool.However, the chatter of the cutter is the large in and directions, while it is minimum at -direction.The result was found to be consistent with all three natural frequencies.This is mainly due to the fact that the cutting tool is more rigid in direction than in and directions.The relative vibration of the system in the direction of the respective degrees of freedom can be obtained by the vibration pattern.The graph also shows that the vibration in the direction is the largest at the first order natural frequency.
Transient response of the cutter
According to the developed model of the three degree of freedom cylindrical turning, it is obvious that chatter in the specified three directions is zero at the initial contact between the cutting tool and workpiece.Then it increases with the progression of the cutting action, and become stable at the transient stage.Therefore, it is desirable in this section and following subsections to drive the mathematical expression for the chatter amplitude and speed, and the system state equation.The application of the three degrees of freedom dynamic model with the aid of MATLAB simulation can assist in the determination of transient response of the cutting tool.Based on Eq. ( 9), It was assumed that at the initial contact between the cutting tool and workpiece, the amplitude and speed at all directions is zero.This is regarded as the initial condition for the Differential equation of system motion.Therefore, the differential equation of motion become as follows: In general, the answer to the kinematic equation of system vibration can be solved through differential equation of system movement.Where the numerical solution of differential equations is often transformed into a state space equation of a standard format [13].
In this case, we represent the system space variables by Eq. (15).Then the kinematic equation of the vibrated system can be transformed into a state space equation, as shown in Eq. ( 16).
where: = 0.30 (0) ; = 0.39 (0) ; = 0.87 (0) ; = -0.30(0); =-0.30(0); =-0.39 (0); = -0.39(0); = =-0.87(0); = .Then the initial conditions for this state of space equation can became as follows: To analyze the vibration amplitude of the vibrated system at the maximum excitation frequency, it is necessary to analyze the steady-state output value of the vibrated system at different excitation frequencies during the cutting process.Therefore, the phase angle difference is considered large, and the theoretical data is provided for the actual production situation.Using Matlab through the complex domain frequency response function, we can obtain the amplitude-frequency curve and the phase-frequency curve of the vibrated system as shown in Fig. 5.
According to the characteristic analysis of the frequency response of the regenerative chatter of the turning process, the vibration frequency of the system is found to be 2800 rad/s (Fig. 5).The vibration amplitude and phase difference at the three vibration directions are the largest with excitation frequency of 445.9 Hz.The spindle speed is also maximum to avoid the frequency range in the workpiece, resulting in lathe regeneration flutter vibration.6 shows the block diagram for converting the state space equation of the system vibration to numerical simulation, using Matlab/Simulink [14].By employing the Matlab/Simulink, the turning process in question was simulated in the time from the initial contact between the cutting tool and workpiece until the cutting process become stable.Fig. 7(a) shows the direction of vibration of the cutting tool.It can be seen that when the cutter touches the workpiece, the chatter at all directions is high.This attributed to the sudden change in the dynamic cutting force.As the time passes by, the chatter starts getting stable to a certain point.When the whole system gets stable, the chatter at direction and directions chatter is high.
However, the chatter at -direction is comparably small and almost negligible.This result is consistent with previous studies findings.Therefore, this proves that the kinematic model of the three degrees of freedom can adequately describes the cylindrical turning process in question.Moreover, the first natural frequency and second natural frequency of the vibration system are 2731 Hz and 3233 Hz, which seems to be close to each other.Therefore, the transient response of the vibration system will occur " " situation as shown in Fig. 5
Conclusions
Considering the dynamic characteristics of the cutter in cylindrical turning processing, these three degrees of freedom dynamic model of regenerative chatter is rational and representative.Under the first natural frequency, the chatter at the direction is the biggest and followed by that in direction.The -direction is minimum chatter.When the cutter touches the workpiece, the chatter from all the directions are very dramatic.As cutting action progress, the chatter stabilizes to a certain point.The turning process gets stable, at the transient cutting stage, however, chatter in the and direction is high, and in direction remained minimum.This result is consistent with previous research work.The model is verified by applying it to a concrete turning operation.Finally, this study proved that a three-degree of freedom model of chatter could very well visualize the turning process in a simple and reliable way.
Fig. 1 .
Fig. 1.Dynamic model of the oblique cylindrical turning
Fig. 7 .
Transient response of the vibrated system | 3,619 | 2017-06-30T00:00:00.000 | [
"Materials Science"
] |
Regulation of actomyosin ATPase by a single calcium-binding site on troponin C from crayfish.
Equilibrium-binding studies at 4 degrees C show that, in the instance of crayfish, troponin C contains only one Ca-binding site with an affinity in the range of physiological free [CA2+] (K = 2 X 10(5) M-1). At physiological levels of Mg2+, this site does not bind Mg2+. In the complexes of troponin C-troponin I, troponin and troponin-tropomyosin, the regulatory Ca-specific site exhibits a 10- to 20-fold higher affinity (K = 2-4 X 10(6) M-1). The latter affinity is reduced to that of troponin C upon incorporation of the troponin-tropomyosin complex into the actin filament (regulated actin), as determined at 4 degrees C by the double isotope technique. The Ca-binding constant is again shifted to a higher value (7 X 10(6) M-1) when regulated actin is associated with nucleotide-free myosin. Both crayfish myofibrils and rabbit actomyosin regulated by crayfish troponin-tropomyosin display a steep rise in ATPase activity with [Ca2+]. Comparison of the pCa/ATPase relationship and the Ca-binding properties at 25 degrees C for the crayfish troponin-regulated actomyosin indicates that while the threshold [Ca2+] for activation corresponds to the range of [Ca2+] where the regulatory site in its low affinity state (K = 1 X 10(5) M-1) starts to bind Ca2+ significantly, full activation is reached at [Ca2+] for which the Ca-specific site in its high affinity state (K = 3 X 10(6) M-1) approaches saturation. These results suggest that, in the actomyosin ATPase cycle, there are at least two calcium-activated states of regulated actin (one low and one high), the high affinity state being induced by interactions of myosin with actin in the cycle.
three subunits of Tn in the thin filament which is composed of Tn, Tm, and F-actin in a 1:1:7 molar ratio (2). The steric blocking model has been proposed as a mechanism for the calcium-regulated control (3,4). This all-or-none model suggests that, when TnC is Ca2+-free, Tm sterically prevents the binding of the myosin heads to actin, thus relaxing the muscle.
In the presence of Ca2+, Tm does not block the binding of myosin cross-bridges. Recent data suggest that the Tn-linked regulation may be better viewed as an allosteric system (5,6).
In many invertebrate muscles, the Tn-linked regulatory system coexists with the myosin-linked system (7). Yet, in fast striated muscle of arthropods such as horseshoe crab (8) and crayfish (9), Ca2+ binding to TnC seems to be the primary trigger for contraction. Similarly to its vertebrate counterpart, Tn isolated from arthropods appears to consist of three components (10-12). The latter differ, however, from the corresponding subunits of vertebrate Tn (13) in molecular weight and amino acid composition. The smallest (Mr = 16,000 to 18,000) and medium (M, = 23,000 to 29,000) subunits are analogous in function to vertebrate TnC and TnI, respectively (11, 12). As for the TnT-like protein (Mr = 50,000 to 60,000), its involvement in troponin function has not yet been studied in detail (10-12).
Rabbit skeletal TnC contains four potential binding sites for Ca2+ (14, 15). Only two sites that bind Ca2+ specifically are thought to be functional (15). The two other sites that bind both Ca2+ and Mg2+ seem to play a structural role (16). Bovine cardiac TnC is analogous to the skeletal muscle protein, with two Ca2+-Mg2+ sites but only one Ca2+-specific site (17). There is still some doubt about the regulatory sites in vertebrate systems, since magnesium ions affect the calcium activation of myofibrillar ATPase (18) and of tension development (19). Invertebrate TnC binds less Ca2+ than does vertebrate TnC (11,20-22); arthropod TnC seems to have no more than one Ca2+-binding site (20, 21).
The aim of this work was to study the nature of the Ca2+and M$+-binding sites on crayfish TnC and their role in the regulation of actomyosin ATPase. We measured Ca2+ and Mg+ binding to crayfish TnC in a vast range of [Ca"] or [M$+] and found that in the limited range of physiological [Ca"] only a single site binds Ca2+. This site is analogous to the Ca2+-specific sites of vertebrate TnC. Hence, the Tm . Tn complex with the simplest physiologically significant Ca2+binding properties constitutes a convenient system to study Ca2+ regulation. For instance, conflicting reports exist concerning the mechanism responsible for the sharp transition in Ca2+ dependence of myofibrillar ATPase and of tension development (15,(23)(24)(25). We compared the Ca2+-binding properties of crayfish Tm . Tn-containing actin (alone or associated with myosin) with the pCa/ATPase relationship in the instance of both regulated actomyosin and myofibrils. This system has the advantage over the vertebrate one that 9017 the results are blurred neither by Ca2+ binding to the nonrelevant sites on TnC nor by the requirement for activation of multiple bound Ca2+ on the TnC molecule.
EXPERIMENTAL PROCEDURES
Materials-DEAE-Sephadex A-25 and SE (sulfoethy1)-Sephadex C-50 were obtained from Pharmacia (Uppsala, Sweden). Polyacrylamide gel electrophoresis reagents were purchased from Serva (Heidelberg, Germany). 45Ca (30 mCi/mg) and ~-[6-~H]glucose (13 Ci/ mmol) were from Amersham International pCI (Amersham, England). All other chemicals were reagent grade and were utilized without further purification, except for urea (Merck, Darmstadt, Germany) solutions which were deionized by means of a mixed bed resin (Bio-Rad). All buffers and protein solutions were prepared with bidistilled water from an all-quartz apparatus and contained inhibitors of bacterial growth and proteolysis: 0.5 mM NaN3 (Merck), 20 p~ phenylmethylsulfonyl fluoride (Sigma), and 0.2 pg/ml of pepstatin A (Protein Research Foundation, Osaka, Japan).
Preparation of Myofibrik-Myofibrils were prepared by the method of Lehman (26) from crayfish (Astacus leptodactylus) tail muscle and from rabbit hind leg and back muscle. The myofibrils were stored at Preparation of Rabbit Myosin and Actin-Myosin was prepared according to Watterson and Schaub (27) and then stored at -20 "C in 10 mM potassium phosphate, pH 6.5, 0.6 M KCI, 5 mM dithiothreitol, and 50% glycerol. Actin was purified by the procedure of Spudich and Watt (28) and stored on ice as F-actin.
Preparation of Tm.Tn Complex from Rabbit and Crayfish-Both complexes were extracted from fresh myofibrils in 15 mM 2-mercaptoethanol, 2 mM Tris/HCl buffer, p H 7.0, using a procedure similar to that described by Murray (29). The complexes were stored in the above buffer a t -20 "C.
Preparation of Crayfish Tn and Tn Subunits-Crayfish T n was obtained either from the T m . T n complex by the isoelectric precipitation of T m (30) or from myofibrils by a procedure similar to that of Ebashi et al. (31). T h e T n complex was dialyzed against 0.2 M NaCI, 5 mM EDTA, 15 mM 2-mercaptoethanol, 20 mM Tris/HCI buffer, pH 7.8. When applied to a column of DEAE-Sephadex A-25, only TnC was retained. The latter protein was eluted a t 0.4 M NaCI. The proteins not absorbed were dialyzed against 6 M urea, 5 mM 2mercaptoethanol, 50 mM sodium barbital, pH 8.0, and loaded on a column of SE-Sephadex C-50. TnT and TnI were eluted by a linear gradient of 0 to 0.3 M NaCl (at 0.08 and 0.20 M NaCI, respectively). Appraisal of purity of the preparations (Fig. 1) and determination of the apparent M, of the polypeptide chains (32) were carried out by means of an sodium dodecyl sulfate-polyacrylamide gel electrophoresis procedure (33) in 12.5% acrylamide gels. The TnI. TnC complex was obtained by mixing purified subunits using a procedure similar 0-4 "C. Preparation of Reconstituted Regulated Actin and Actomyosin-Regulated actin was prepared by polymerizing rabbit actin in the presence of an excess of the Tm .Tn complex from crayfish or rabbit, as described by Murray (29). The thin filament preparations and rabbit myosin were dialyzed against 80 mM KCI, 1 mM dithiothreitol, 0.1 mM EGTA, 40 mM imidazole, pH 7.0. The dialyzed actin and myosin were then mixed (4:l molar ratio) to produce a concentration of reconstituted regulated actomyosin of approximately 6 mg/ml.
Metal and Protein Anulyses-Calcium, magnesium, and ' T a were determined as described previously (34). Protein concentrations were determined by the Lowry method (35) with bovine serum albumin as a reference except for the purified crayfish TnC solutions which were standardized by either amino acid analysis or determination of dry weight. The concentrations of rabbit proteins were measured spectrophotometrically using the following absorption coefficients: 630 cm2/ g a t 290 nm for G-actin, 540 cm2/g a t 280 nm for myosin, and 280 cm2/g at 278 nm for Tm.Tn. The molecular weights used for rabbit actin, myosin, and Tm .Tn complex were 42,000,460,000, and 150,000, respectively. Those for crayfish proteins were: T m ' T n complex, 161,000, T n complex, 87,000; TnI .TnC complex, 42,000, TnC, 16,000. Quantification of TnC in crayfish preparations was achieved by densitometry of the Coomassie Blue-stained gels (36), using increasing amounts of pure TnC as internal standards. Extrapolation to zero concentration of added TnC gave the amount of TnC in the original solution.
Calcium-and Magnesium-binding Measurements-The binding of calcium to crayfish TnC (3 mg/ml), TnI.TnC (6 mg/ml), T n (8 mg/ ml), and T m . T n (15 mg/ml) was measured by equilibrium dialysis a t 4 "C using EGTA to regulate the free [Ca"] (34). The dialysis fluid contained 40 mM imidazole, pH 7.0,80 mM KCI, 0.1 mM EGTA, 0.1 pCi/ml of "CaCI2, and the appropriate amount of CaCI2 to achieve the desired free Ca2+ concentration. In experiments without added M e , the contaminating M F concentration was about 0.1 p~. In experiments with 1 mM MgCIz, KC1 concentration was diminished to 77 mM. For M e binding to TnC in the absence of Ca2+, EGTA concentration was increased to 1 mM, and the desired concentrations of MgCI, were used. The free Ca2+ and Mg2' concentrations were calculated by means of the computer program of Perrin and Sayce (37). The association constants for metals and H+ to EGTA were adjusted for use a t 4 'C using their enthalpy values (38). All constants involving protons were corrected for the proton activity as described by Martell and Smith (38); thus pH was used instead of [H+] in the calculations. For crayfish TnC, equilibrium dialysis experiments were also performed without EGTA, and the free [Ca'+] was regulated by appropriate amounts of Chelex (Bio-Rad) in the dialysis fluid, as described by Crouch and Klee (39).
The binding of calcium to the reconstituted regulated actin and actomyosin was measured a t 4 and 25 "C by means of a double-isotope technique. Regulated actin (2 mg/ml) and regulated actomyosin (6 mg/ml) were incubated for 30 min at the chosen temperature in 1 ml of the solution described above containing also 0.3 pCi of [3H]glucose and 5 mM glucose. For measurements of Ca2+ binding to regulated actomyosin in the presence of 1 mM MgATP, 20 mM creatine phosphate and 0.2 mg of creatine phosphokinase were included, MgC12 concentration was increased to 2 mM, and KC1 concentration was diminished to keep ionic strength constant; ATP was brought to 1.3 mM (4 "C) or 1.2 mM (25 "C) just before centrifugation. The suspensions (0.15-ml fractions) were centrifuged in a Beckman Airfuge at 165,000 X g for 30 min. When measurements were carried out a t 4 "C, the centrifuge was placed in a cold room. The supernatant samples were analyzed for calcium by atomic absorption and counted for ' T a and 3H. The association constants for metals and H' to EGTA and ATP, adjusted for use at the appropriate temperature and corrected for the proton activity (38), were employed in the calculations of free [Ca2+]. The pellets were suspended in 0.1 ml of 1 M NaOH, placed in a boiling water bath for 5 min, and then neutralized by addition of 0.1 ml of 1 M HCI and 5 p1 of 0.5 M sodium phosphate, pH 7.0. Bound calcium was calculated from the "Ca/3H ratio of the dissolved pellets relative to that of the supernatant solutions and from the total concentration of calcium in supernatants. Corrections were made for calcium binding to pure actin (in the case of regulated actin) and myosin combined with pure actin (for regulated actomyosin) in the same assay conditions. ATPase Assays-One-ml reaction mixture, equilibrated a t 25 "C, contained 0.2 to 0.4 mg of regulated actomyosin or 0.1 to 0.5 mg of myofibrils, 40 mM imidazole, pH 7.0, 70 mM KCI, 0.1 mM EGTA, 2 mM MgClZ, 4 mM phospho(enol)pyruvate, 0.02 mg of pyruvate kinase, and the desired concentrations of CaCl2. Reactions were initiated by adding ATP to 1.2 mM and stopped at various times with 0.25 ml Of &-cold 25% trichloroacetic acid. The supernatant obtained following a low speed centrifugation of the precipitate was assayed for inorganic phosphate (40) and calcium. The pCa values were calculated as shown above.
Fluorescence Measurements-The tyrosyl fluorescence of crayfkh TnC was measured in 10 mM HEPES, pH 7.0, and 80 m M KCl, at 4 and 25 'C, in a Baird Atomic FC 100 spectrofluorimeter equipped with a thermostated cuvette holder. The free CaZ+ concentrations were controlled using an EGTA buffer system as described above. The TnC concentration was 20 pg/ml. Upon excitation at 280 nm, emission was monitored at 310 nm. In a first approximation, the intensity of the fluorescence was considered as directly proportional to the absolute quantum yield. Ca2+ titration of crayfish TnC results in an enhancement of tyrosine fluorescence (Fig. 5). The best fit of the titration curve was obtained with a single binding constant K = 3.9 X lo6 M-' (4 "C). The binding constant is similar to that obtained from direct Ca2+-binding measurements for the Caspecific site. M F (2 mM) in itself has no effect on the fluorescence nor does it affect the Ca2+ titration curve (not shown). These results suggest that solely the Ca-specific site participates in the structural change in crayfish TnC upon Ca2+ binding. Moreover, Fig. 5 shows that the affinity of Ca2+ for the regulatory site varies significantly between 4 and 25 "C.
Caand Mg-binding
Assuming that the enthalpy change, hH, for Ca2+ binding to the Ca-specific site is independent of temperature, our data and the van't Hoff equation (38) yield a A H ' value of -8.1 kcal/mol site. Similar enthalpy changes evaluated from calorimetry were reported for Ca-binding sites of rabbit TnC (41) and of parvalbumin (42).
Ca-binding Studies on Crayfish TnI. TnC, Tn, and T m . T n
Complexes-Formation of the TnI . TnC complex increases the affinity of the Ca-specific site of TnC approximately 10fold (Table I), irrespective of the presence of M e . In whole T n (Table I) or T m . T n (Fig. 6, Table I) complexes, this site has an affinity slightly higher than that in the TnI-TnC complex. As for the low affinity Ca-Mg sites of TnC, their affinity is hardly increased, if at all, in the TnC-containing complexes (Fig. 6). The Ca-Mg sites are clearly not physiologically significant as far as Ca2+ is concerned because of their too low affinity for this metal. Fig. 6 shows that incorporation ot the crayfish Tm.Tn complex into actin filaments diminishes the binding constant of the Ca-specific site from 4 X lo6 M" to 2 X lo5 M" (Fig. 6, Table I). On the other hand, combining regulated actin with myosin in the absence of ATP (i.e. rigor) changes again the Ca-binding properties of TnC; its regulatory site has an affinity as high as that in the Tm . T n complex without actin ( Table I). Thus, the single Ca-specific site on crayfish TnC in regulated actin may exist in two affinity states: 1) in a low affinity state which is reminiscent of that on TnC free of other proteins; 2) in a high affinity state which is induced by the binding of myosin to regulated actin and resembles that for TnC present in the Tm. Tn complex. During steady state ATP hydrolysis, regulated actomyosin displays Ca-binding properties similar to those of regulated actin without myosin (Fig. 7, Table I).
7,
Calcium Dependence of Myofibrillar and Actomyosin AT-Pase-The ATPase activity of rabbit actomyosin, regulated by crayfish T m . T n complex, shows the same Ca2+ dependence as that of the crayfish myofibrillar ATPase (Fig. 81). Hence, the replacement of crayfish actin and myosin by the corre- sponding proteins from rabbit hardly changes the pCa/AT-Pase relationship. This close similarity is also reported here as evidence of the functional integrity of the reconstituted regulated actin used to characterize its Ca2+-binding properties (see also Figs. 6 and 7). As seen on Fig. 81, both actomyosin regulated by crayfish Tm.Tn complex and crayfish myofibrils display a steep rise in ATPase activity with free [Ca"]. Because there is only one Ca2+-binding site of physiological significance on crayfish TnC, such a cooperative-like behavior in calcium activation cannot be explained as a consequence of multiple sites on the TnC molecule. On rabbit skeletal TnC, there are four Ca2+-binding sites, and two of these are thought to regulate hydrolysis of MgATP (15). Therefore, it was of interest to compare, in the same experimental conditions, rabbit and crayfish myofibrils with respect to their pCa/ATPase relationship. The rabbit myofibrillar ATPase activity as a function of [Ca"'] is shown in Fig. 811. Both the position and the slope of the activation curve are not significantly different from those for crayfish myofibrils. Thus, in rabbit myofibrils also, the sharp response of ATPase to Ca2+ is controlled by factor(s) other than the requirement of more than one bound Ca2+ on TnC for activation.
The pCa/ATPase curve for crayfish myofibrils and for the crayfish Tm. Tn-regulated actomyosin was compared with the Ca-binding curves determined at the same temperature (25 "C) for crayfish Tm.Tn-containing actomyosin (Fig. 80. The relation between free [Ca"] and ATPase activity fits with neither "weak" (regulated actomyosin during steady state ATP hydrolysis) nor "strong" (regulated actomyosin, no ATP) binding of Ca2+ to TnC. The threshold [Ca"] for activation corresponds to the range of [Ca'+] where the Ca-specific site in its low affinity state starts to bind Ca" significantly. On the other hand, comparison of the activation curve and the "strong" binding of Ca2+ to TnC on regulated actomyosin (no ATP) shows that full activation is reached at [Ca"] where the regulatory site in its high affinity state approaches saturation. This suggests that during calcium activation of the actomyosin ATPase, the Ca-specific site on TnC displays at least two affinity states (one low and one high), the high affinity state being induced by interaction of myosin with regulated actin in the ATPase cycle.
DISCUSSION
Our Ca2+-and M$+-binding studies on crayfish TnC show that this protein has one Ca2+-binding site with an affinity in the range of physiological free [Ca"] and a high selectivity for this metal as compared to M$+ (regulatory Ca2+-specific site). TnC is thought to have evolved from a four-domain ancestor which was common to many intracellular calciumbinding proteins; each domain contained a calcium-binding site (43). In skeletal TnC, domains I and I1 include the two Ca2+-specific sites, whereas domains I11 and IV correspond to the two high affinity Ca2+-binding sites that bind Mg2+ competitively (Ca2+-Mg2+ sites); domains I1 and 111 contain each a distinct TnI-binding site (13, 44). Bovine cardiac TnC is analogous to the skeletal muscle protein but its domain I not longer binds Ca2+ (17, 44). In contrast, during invertebrate evolution, as many as three of the four putative domains of TnC have lost their calcium affinity of physiological significance. The presence of a single Ca-specific site on crayfish TnC is reminiscent of the situation prevailling in cardiac TnC where only domain 11 binds Ca2+ specifically. It is tempting, therefore, to postulate that the regulatory Ca2+-specific site of crayfish TnC is located in domain 11. However, a more precise relation between structure and function cannot be predicted before amino acid sequence data for crayfish TnC become available.
The Ca2+-binding properties of crayfish TnC again raise the question of the role played by domains I11 and IV on TnC. A structural role for the Ca2+-MP sites on skeletal TnC has been postulated, since these sites would always contain either Ca2+ or M P in vivo, and their occupancy by either metal is required for attachment of TnC to TnI in intact myofibrils (16). Crayfish TnC possesses about five sites with the same low affinity ( K = 1-2 X lo3 M-') for Ca2+ and M$+. If some of these Ca2+-MP sites were the mutated sites in domains I11 and IV, then one could postulate that their occupancy by M$+ in vivo is required for maintaining the integrity of the Tn complex; however, this turns out not to be the case. Unpublished results of ours indicate that removal of metal ions from whole crayfish myofibrils by extractions with metal chelators does not cause a dissociation of TnC from the myofibrils. This suggests that in crayfish TnC, there is a region that binds TnI irrespective of Ca2+ or Mg+. The fourdomain ancestor of calcium-binding proteins probably possessed only sites specific for Ca" as in calmodulin (43). Therefore, it is likely that evolution of domains I11 and IV in TnC proceeded in two ways, toward either the loss of ionbinding capacity (invertebrates) or the acquisition of high affinity Ca2+-Mp2+ sites (vertebrates). In both cases, these two domains maintain the structure of the Tn complex intact independently of the cytosolic levels of Ca2+, thus allowing the Ca2+-specific site(s) to function in the regulation of Ca2+ activation.
We have taken advantage of the simplicity of Ca2+ binding in crayfish TnC to study those features that are more difficult to discern in the vertebrate Tn-linked regulatory system because of the presence of multiple and diverse sites which bind Ca2+ in the range of its physiological concentrations. Previous studies on the TnI.TnC and whole Tn complexes from skeletal (15) and cardiac (17) muscle indicated tht the interaction of TnI and TnC results in a 10-to 20-fold increase in the affinities of both the Ca-Mg sites and the Ca-specific sites. This is also the case for TnC from crayfish with respect to its single Ca2+-specific site. Moreover, our Ca-binding studies reveal that while the regulatory site displays essentially the same affinity ( K = 4 X lo6 M-') in the crayfish Tm .
Tn complex as in the Tn complex, in regulated actin the Caspecific site exhibits a markedly lower affinity (K = 2 x lo5 "l).
As reported in our earlier studies (l), these data show that the actin filament affects the Ca-binding properties of TnC. Similar results have recently been reported by Zot et al. (45) for the rabbit system, where an effect of similar magnitude was found exclusively for the Ca-specific sites. Interestingly, the reduced affinity approaches the level of that of TnC in its isolated state; this suggests that the effect of actin, mediated through TnI and (or) transmitted via Tm to TnT and to TnI, depresses those interactions between TnI and TnC which increase the affinity of the Ca-specific site on TnC in the absence of actin. This is in agreement with studies showing that the binding of Tm-Tn to F-actin appears to be weaker in the presence of Ca" than in its absence (46). Indeed, it follows from thermodynamic reasoning that, if the binding of Ca2+ to TnC decreases the binding constant of Tm.Tn to F-actin, the binding of the Tm. Tn complex to actin filament decreases the binding constant of Ca2+ to TnC.
Bremel and Weber (14) first reported a slight increase in affinity of skeletal TnC for Ca2+ when regulated actin and myosin form complexes in the absence of ATP hydrolysis, i.e. under equilibrium conditions. Our studies on actomyosin, regulated by crayfish Tm . Tn, indicate that in the absence of ATP the interaction of myosin and regulated actin results in a 20-to 30-fold increase in the affinity of the Ca2+-specific site on TnC. The affinity of regulated actomyosin for Ca2+ (K = 7 X lo6 M-') is similar to that of the Tm .Tn complex.
Hence, in the rigor state where the Tm .Tn complex is "pushed" into the groove of the actin helix by myosin heads, the effect of actin on the Ca-specific site is suppressed. The rigor state can be assumed to correspond only to the final step of an actomyosin ATPase cycle. The affinity of actin to myosin-nucleotide intermediates of the cycle is lower than in the rigor state and varies over four orders of magnitude depending on the state of the nucleotide bound to myosin. Therefore, the effect of myosin intermediates on binding of Ca" to regulated actin is difficult to measure directly.
The steep pCa/ATPase curve of rabbit skeletal myofibrils has been attributed to the requirement that all four sites (23) or both Ca-specific sites (15) on TnC must be filled by Ca2+ for activation to occur. However, experimental data have been recently obtained that do not support such an explanation. First, as monitored by tension development of rabbit psoas fibers (25) and by ATPase of rabbit skeletal myofibrils (16, 24), the responses to Ca2+ are much too steep to be explained solely by a requirement for 2 or even 4 calcium ions bound on TnC. Second, the pCa/ATPase curve of cardiac myofibrils is sloping as sharply as that of skeletal myofibrils despite the suggestion that among three Ca-binding sites on cardiac TnC, only the site which is specific for Ca2+ is regulatory (17). Similarly, our studies show that crayfish myofibrils and rabbit actomyosin, regulated by crayfish Tm . Tn complex, display a steep rise of ATPase activity with [Ca"]. Moreover, we observe a close similarity in the Ca2+ dependence of ATPase activity with rabbit and crayfish myofibrils as well as with actomyosin regulated by rabbit and crayfish Tm . Tn complex.
Thus, in Tn-linked regulation of actomyosin ATPase, the requirement of more than one site occupied by Ca" on TnC does not play any role in the steep responses to Caz+.
The Ca-binding measurements carried out on the crayfish Tm. Tn-containing actomyosin during steady state ATP hydrolysis reflect essentially the noncooperative Ca-binding properties of regulated actin which is dissociated from myosin. With the simplest kinetic models, the increase in ATPase rate would be expected to be proportional to the fractional occupancy of the Ca-specific site in its low affinity state (K = 1-2 X lo5 M-') over the entire range of [Ca"]. However, this is not the case. When comparing the overall "weak" binding of Ca2+ with the ATPase activity as a function of [Ca2+], it appears that the only common range of [Ca'+] for both curves is the one where the Ca-specific site begins to bind Ca2+ significantly and activation starts (Fig. 8).
In order to explain the sensitive activation of isometric muscle contraction by Ca2+, Hill (47) has developed a model with two major ingredients in the regulation of contraction: (a) Ca2+ binds much more strongly on TnC if myosin is already attached to actin; (b) there is positive cooperativity in the system because of nearest neighbor Tm-Tm interactions which are responsible for the steep response to Caz+. The increased calcium affinity of myofilaments as a result of crossbridge interaction has recently been shown for muscle fibers in the isometric state (48). In considering the application of the model of Hill (47) to the regulation of actomyosin ATPase, we note that under the conditions used in our experiments (where the actomyosin or myofibrillar preparations are under no tension) the fraction of myosin bridges attached to actin at any instant is too small to alter the overall Ca2+ affinity of TnC in regulated actin (Fig. 7, Table I). This, of course, does not mean that the cross-bridge-induced increase in the cal- | 6,426 | 1984-07-25T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Spatial Visualization as Mediating between Mathematics Learning Strategy and Mathematics Achievement among 8 th Grade Students
Jordanian 8 grade students revealed low achievement in mathematics through four periods (1999, 2003, 2007 & 2011) of Trends in International Mathematics and Science Study (TIMSS). This study aimed to determine whether spatial visualization mediates the affect of Mathematics Learning Strategies (MLS) factors namely mathematics attitude, mathematics motivation, mathematics self-regulation, mathematics self-efficacy, and mathematics anxiety on mathematics achievement. The study consists of 360 students from public middle schools in Alkoura district, selected through stratified random sampling. It employed 65 items to assess MLS, which consists, attitude (18 items), motivation (7 items), self-regulation (25 items), self-efficacy (5 items) and math anxiety (10 items). The mathematics test comprises of 30 items, which has eight items for numbers, 14 items for algebra and eight items for geometry while spatial visualization test consists of 32 items base on 2D and 3D views. The findings showed that spatial visualization fully mediated the relationship between motivation, math anxiety and mathematics achievement and it partially mediated the relationship between attitude and mathematics achievement. However, the results showed no mediating effect between self-regulation, self-efficacy and mathematics achievement. Considering these results, it is recommended that teachers should focus on mathematics attitude, mathematics motivation and math anxiety in classes to make achievement in mathematics easier. Moreover, these factors help students deeply understand mathematics through interest in spatial visualization as it mediates between relevant factors and mathematics achievement.
Introduction
The educational system is primarily viewed as a significant factor forming the basis of an individual's development and progress, and in turn, forms the core of countries' development.As such, more and more focus is being emphasized on the educational systems promotion on a global scale.In the context of Jordan, the government has made considerable efforts in developing its educational system.Such system has experienced tremendous development and increasing progress that date back to the 1920s (Al-Jaraideh, 2009).Owing to the significance of mathematics in the knowledge economy, Jordan's Ministry of Education (MoE) has concentrated on improving students' knowledge, skills and achievement in mathematics (Sabah & Hammouri, 2010).
The importance of mathematics has been hailed by many studies in literature.According to Drew (1996), mathematics is the most important factor that relates to an individual's success.He proceeded to describe mathematics as a subject that is required for entry into many professions and it is important for existing as well as emerging occupations in a global economy that is based on information and technology.Saffer (1999) also stated that mathematics is not just useful in the day to day skills such as managing money but also in the most popular occupations and countless of jobs that call for some mathematical skill or another.This is the reason why mathematics is hailed at a higher rate compared to other subjects, and it has been called as the queen of all sciences and servant to all disciplines (Ajayi, Lawani, & Adeyanju, 2013).A student's proficiency in mathematics in schools is reflective of the related other variables or a combination of variables comprising academic and non-academic variables including individual characteristics, mathematics attitude , motivation, self-concept, self-confidence, self-regulation, equipment and instructional materials for effectively teaching the subject (Ma & Xu, 2004;Tella, 2007).
Studies dedicated to the educational field imply that student attitudes toward a subject influence academic success (Popham, 2005;Royster, Harris, & Schoeps, 1999).Additionally, attitude is among the factors that are gaining more focus from scholars as well as educators.More importantly, studies have highlighted the relationship between attitudes towards mathematics and achievement in the subject (Ma & Kishor, 1997) as a reciprocal influence where attitudes impact achievement and vice versa.In addition, Hammoury (2004), in her study involving eighth grade Jordanian school students, revealed that attitude directly impacts mathematics achievement.Also, in Eleftherios and Theodosios's (2007) study, attitude is revealed to impact mathematics achievement and mathematics abilities including spatial visualization to memorize formula and procedures.
A second factor that can influence and determine the student's success in school is mathematics motivation.Learner's motivation is viewed as a crucial aspect of effective learning.Even psychologists are convinced that motivation is a necessary element for learning and that satisfactory school learning may not take place without enough learning motivation (Tella, 2007).Additionally, Hammoury (2004) claimed that motivation significantly and positively impacts mathematics achievement among eighth grade students while Cassidy (2002) revealed that motivation is important in problem solving skills.
Another factor that influences learning is mathematics self-regulation as evidenced by Zimmerman and Martinez-Pons's (1990) study which demonstrated its relevance for school students and academic achievement.Self-regulated learning is described as the knowledge and skills acquisition through cognitive and meta-cognitive process and actual behavior (Zimmerman, 2000).Researchers claimed that self-regulation is crucial to the learning process (Jarvela & Jarvenoja, 2011;Zimmerman, 2008) as it assists students in creating better learning habits, strengthening study skills (Wolters, 2011), applying learning strategies to improve academic outcomes and monitoring their achievement (Harries, Friedlander, Sadler, Frizzelle, & Graham, 2005) and evaluating their academic progress (De Bruin, Thiede, & Camp, 2001).
It is also notable that in the past three decades, educational research has stressed on self-efficacy (Joo, Bong, & Choi, 2000).Bandura (1997) described self-efficacy as the belief in one's capabilities to organize and execute the courses of action needed to be achieved.Therefore, it can be stated that beliefs of self-efficacy can influence student's behavior through its impact upon the decisions of the tasks to engage in, the level of effort expended, and the time duration of persevering in difficult situations.A study has also revealed that self-efficacy is a major predictor of general academic achievement and in mathematics achievement in particular (Zimmerman, 2000).
In studies of meta-analysis, Hembree (1990) and Ma (1999) revealed that mathematics anxiety is inversely related to mathematics achievement.Ikegulu (2000) stated that math anxiety may hinder the developmental achievement of mathematics learners.Moreover, Gourgey (1984) claimed that math anxiety is a factor that leads students to stop trying when they encounter mathematics problems.Math anxiety may lead to higher withdrawal and/or failure rates among students who take developmental mathematics courses.
Problem Statement
The importance of acquiring mathematics knowledge in the world is quite evident owing to its usefulness in everyday life and in several disciplines.The students' low achievement in mathematics in all education levels is the most crucial issue that researchers and educators are concerned about (Abo-lebdeh, 2008).Studies concerning the topic have revealed low level of mathematics achievement of school students, particularly 8 th graders (Chouinard et al., 2007;Ikegulu, 2000).This called for several studies to examine the low mathematics achievement of eight graders in schools in general and in Jordanian schools students in particular (Abo-lebdeh, 2008;Hammouri, 2004).
Despite the studies that investigated the different factors of mathematics achievement conducted by prior researchers (Capraro, Young, Lewis, Yetkiner, & Woods, 2009;Hammoury, 2004), they are still insufficient, and paid minimal attention to schools (Abo-lebdeh, 2008).As such, researchers over the past two decades have examined the low level of mathematics achievement school-going students.According to Mullis, Martin, Foy, and Arora (2012), Jordanian 8 th grade students revealed low achievement in mathematics through four periods (1999,2003,2007,2011) of Trends in International Mathematics and Science Study (TIMSS).
Studies claimed that Mathematics Learning Strategy factors are all significant in improving students' mathematical achievement (Areepattamannil, 2014;Ifamuyiwa & Ajilogba, 2012;Sartawi, Alsawaie, Dodeen, Tii, & Alghazo, 2012).However, regardless of their comprehensive findings, there is still the need to investigate the factors on a global scale and in the context of Jordan as there is still a gap in literature (Hammoury, 2004;Roviro & Sancho-Vinuesa, 2012).
Moreover, although spatial visualization has been considered to be significantly related to mathematics achievement, findings are inconclusive.Some researchers revealed a positive link between spatial visualization and mathematics achievement (Meyer et al., 2010;Rohde, 2007) and others revealed minimal to no correlation between the two (Lee et al., 2004;Pandisco, 1994).
Literature Review
Social-Cognitive Theory was presented by Bandura (1986) who came up with the theory and established variables in the learning psychology field, which illustrated several learning techniques.Social cognitive theory is assigned to a model of growing interactive agency.It suggests that a person is not autonomous factor or mechanical conveyers of animating environmental influences.Rather, they develop causal contribution to their own motivation and action encapsulated in a system of triadic reciprocal causation.This provides base for Bandura (1986) conception of reciprocal determinism, which is the fundamental concept on which this model of triadic reciprocal causation is based on.The social cognitive theory put forward by him in 1989 discusses that a person can be a good judge of his own experience and thought processes by self-reflecting, and such self-reflection leads to the evaluation and modification of his environments and social systems.Therefore, social cognitive theory has been used on several fields of psychosocial function such as attitude, anxiety, self-concept, self-regulation and motivation (Landry, 2003).These evaluations contain understanding of self-efficacy.
Relationship between Mathematics Achievement and Spatial Visualization
Spatial visualization is described as a basic skill of understanding and developing primary mathematical skills and a gateway to superior problem solving (Augustynaik, Murphy, & Phillips 2005).Despite the consensus of several researchers upon the importance of spatial visualization in mathematics learning as they improve the intuitive view and comprehension of various areas of math (Usiskin, 1987), other studies claimed that the relation between the two factors are still not clear (Idris, 1998).Ghbari, Abu-Shendi, and Abu-Sheirah (2008) conducted a study to examine the spatial ability development among Jordanian students.The study sample comprised 221 students randomly selected from various sections, who took a mental rotation test that gauged their spatial ability.The findings showed no differences in performance on the test of spatial ability due to study years.They also showed no significant differences in spatial ability test performance due to social status or academic performance.Also, Abu-Mustafa (2010) focused on the relationship between spatial ability and the achievement of sixth graders in mathematics in an attempt to determine the impact of the gender variable, and to determine the students' diversity (in terms of high and low spatial abilities).The study sample comprised of students in 6 classes of sixth grade (228 students), distributed to three female classes and three male ones.The study employed the spatial orientation test-Card Rotation-by Whitley Test.The results showed positive correlation between mathematics achievement and students' spatial ability.The results also showed that male students have higher spatial abilities compared to their female counterparts in the application of one way analysis of variance between the scores of both genders.In addition, the results showed that high achievers possess high spatial abilities compared to their average and low achieving counterparts.
The spatial visualization abilities of enrolled engineering students were examined by Seabra and Santos (2008) in order to assess the relationship between spatial abilities, gender and age.The sample consisted of 605 students in mental rotation test and 587 students in visualization test.They revealed significant differences between the genders in terms of their scores in spatial visualization abilities, where male students outperformed female students.Sipus and Cizmesija (2012) investigated the differences in gender in light of spatial ability in Croatia with the help of the Mental Cutting Test.The study sample consisted of 130 students and the findings showed that male students outperformed female students.
Similarly, Meyer, Salimpoor, Wu, Geary, and Menon (2010) examined the differential contribution of particular working memory component to mathematical achievement of 98 students enrolled in San Francisco Bay Area.The study sample was required to take IQ assessments through Wechsler Abbreviated Scale of Intelligence (Wechsler, 1999).The results showed that spatial visualization component was a predictor of mathematical reasoning and numerical operations skills.Meanwhile, Idris (1998) studied the key role of cognitive variables including spatial visualization, field dependence/independence, and Van hiele levels of geometric in mathematics learning among 137 6 th -8 th graders enrolled in the Franklin County School in the United State of America (U.S.A).The findings revealed spatial visualization to be related to mathematics achievement.Rohde (2007) revealed that from the three spatial factors namely visualization, perceptual speed, and closure speed, the first factor explained major variance in academic achievement and math achievement with independence from general intelligence.The author contended that visualization does not only influence math achievement but also scholastic achievement on the whole.
More evidence revealing the relationship between spatial visualization and mathematics was provided by Lee, S. Ng, E. Ng, and Lim (2004).Their study is an investigation of the central executive functions comprising of phonological loop and spatial visualization sketchpad, and mathematical achievement among 10 year old students in Singapore.They revealed that spatial visualization sketchpad and phonological loop did not contribute to mathematical achievement indicating that the overall executive functioning of the combined elements of both spatial visualization do contribute to mathematical achievement in combination and not individually (Lee et al., 2004).In the present study there are five hypotheses proposed.
H A (1): Spatial visualization mediates between mathematics attitude and mathematics achievement among 8 th grade students.
H A (2): Spatial visualization mediates between mathematics motivation and mathematics achievement.
H A (3): Spatial visualization mediates between mathematics self-regulation and mathematics achievement.
H A (4): Spatial visualization mediates between mathematics self-efficacy and mathematics achievement.
H A (5): Spatial visualization mediates between math anxiety and mathematics achievement.
Sample and Population
The present study is carried out in middle schools in Jordan, specifically in Al-koura District Governorate, North of Jordan.The research population selected for the present study included 2,257 8 th grade students 1,101 (49%) males and 1,156 (51%) females representing the whole schools in the region (37).The sample size of the present study comprised 360 schools students from 8 th grade, 178 (49%) male students and 182 (51%) female students.This ensured that the number represents the whole population.
Instruments
Data for the study were collected by three instruments, MLS questionnaire, Mathematics achievement and Spatial Visualization Test, where MLS questionnaire is divided into five sections (1-5).Sixty-five items were used to measure mathematics learning strategy factors namely mathematics attitude (18 items), mathematics motivation (7items), mathematics self-regulation (25 items), mathematics self-efficacy (5 items) and math anxiety (10 items).Students were asked to indicate the extent to which they agreed with statements on a 5-point Likert scale [choice 1 (strongly disagree), 2 (disagree), 3 (moderately agree), 4 (agree) and 5 (strongly agree)].These items collect information on five sections: mathematics attitude, mathematics motivation, mathematics self-regulation, math's self-efficacy and mathematics anxiety.
The Mathematics Achievement questionnaire is a modified and constructed version by the Ministry of Education in Jordan and it includes 30 multiple choice items consisting of numbers (8 items), algebra (14 items) and geometry (8 items) of one mark each.A correct response to an item was awarded one mark, while an incorrect response was given no mark.
The spatial visualization test by Ben-Chaim, Lappan, and Houang (1988) was employed to measure the students' spatial visualization in the present study.Idris (1998) made use of this test comprising of thirty-two multiple choice items with each having two options.The test is considered as an untimed power test that takes around 20-30 minutes to complete.Generally, the test items comprises of the 2D and 3D views of the building and mat plan including the description of the building base by square and numbers.Each square states the number of cubes within it.The test was initially created for 6 th -8 th graders, and its validity has been tested by studies (Idris, 1998;Fraenkel & Wallen, 2006).
Data collected was analyzed with the help of Statistical Package for Social Sciences (SPSS), version 19.0, in order to examine the obtained information from the respondents.
Findings
The first hypothesis H A (1) states that spatial visualization mediates the relationship between mathematics attitude and mathematics achievement.To examine the hypothesized statement, hierarchical regression was performed.
As portrayed in Table 1, the results indicated that in the first model, mathematics attitude significantly contributed to mathematics achievement, R 2 = 0.05, F = 15.73,p < .05.Model one shows that mathematics attitude was positively related to mathematics achievement = .21,t = 3.96, at the significant level of p = <.05.In model two, spatial visualization was added to the equation, the R 2 = 0.42 significantly changed with F = 120.65,p < .05.Model two shows that mathematics attitude was reduced = .09,t = 2.31, at the significant level of p > .05 in testing the mediation effect of spatial visualization: in model 1, the relationship between mathematics attitude and mathematics achievement was significant while in model 2 the relationship between mathematics attitude and mathematics achievement, although reduced, is still significant and not decreased.Therefore, spatial visualization partially mediated the relationship between mathematics attitude and mathematics achievement.The second hypotheses H A (2) stated that spatial visualization mediates the relationship between mathematics motivation and mathematics achievement.
As portrayed in Table 2, the results indicated that in the first model, mathematics motivation significantly contributed to mathematics achievement, R 2 = 0.02, F = 6.20, p < .05.Model one shows that motivation was positively related to mathematics achievement = .14,t = 2.49, at the significant level of p = <.05.In model two, spatial visualization was added to the equation, the R 2 = 0.42 significantly changed with F = 119.04,p < .05.
Model two shows that mathematics motivation was reduced = .08,t = 1.86, at the significant level of p < .05 in testing the mediation effect of spatial visualization: in model 1 the relationship between mathematics motivation and mathematics achievement was significant while in model 2 the relationship between motivation and mathematics achievement reduced and not significant.Therefore, spatial visualization fully mediated the relationship between motivation and mathematics achievement.The third hypotheses H A (3) stated that spatial visualization mediates the relationship between mathematics self-regulation and mathematics achievement.
The results in Table 3 demonstrate the correlations of self-regulation and mathematics achievement and they showed no relationship between the two.Therefore, it can be concluded that the hypothesis did not achieve the first condition of the mediation variable analysis, which is the independent variable has to influence the dependent variable in order to test the mediation hypothesis.Therefore, hypothesis number three, which states that spatial visualization mediates the relationship between self-regulation and mathematics achievement, was rejected.The fourth hypothesis states that spatial visualization mediates the relationship between self-efficacy and mathematics achievement.
The results in Table 4 demonstrate the correlations between self-efficacy and mathematics achievement and show no relationship between the two variables.Therefore, it can be concluded that the hypothesis did not achieve the first condition of the mediation variable analysis which is, the independent variable has to influence the dependent variable in order to test the mediation hypothesis.Therefore, hypothesis number four, which states that spatial visualization mediates the relationship between self-efficacy and mathematics achievement, was rejected.Lastly, fifth hypothesis states that spatial visualization mediates the relationship between math anxiety and mathematics achievement.
As portrayed in Table 5, the results indicated that in the first model, math anxiety significantly contributed to mathematics achievement, R 2 = 0.04, F = 14.59, p < .05.Model one shows that math anxiety was positively related to mathematics achievement = .21,t = 3.82, at the significant level of p < .05.In model two, spatial visualization was added to the equation, the R 2 = 0.41 significantly changed with F = 116.84,p < .05.Model two shows that math anxiety was reduced = .04,t = .94,at the significant level of p > .05 in testing the mediation effect of spatial visualization: in model 1 the relationship between math anxiety and mathematics achievement was significant while in model 2 the relationship between math anxiety and mathematics achievement was reduced and not significant-therefore, Spatial visualization fully mediated the relationship between math anxiety and mathematics achievement.
Discussion
The possible explanation for the mediating effect of spatial visualization on the relationship between attitude and mathematics achievement is having a high spatial visualization ability encourages students to learn mathematics and this leaves a positive attitude among students to learn mathematics in order to obtain high scores in mathematics.In addition, the relationship between spatial visualization a mathematical ability is based upon the fact that operations performed while interacting with mental models in mathematics are often the same as those used to operate in spatial environment (Battisa, 1994).A visual representation not only organizes the data at hand in a meaningful structure, but is also an important factor guiding the analytic development of a solution (Fischbein, 1987).It is possible that the increasing positive attitude together with the increasing importance of spatial visualization have strengthened the positive association between mathematics attitude and spatial visualization with mathematics achievement.This association might have encouraged student attitudes as spatial visualization is important to success in mathematics while mathematics is important in academic success.In terms of spatial visualization findings, the result indicated that spatial visualization has an important role in enhancing the students' attitude towards mathematics and heightening their achievement in mathematics subject.More importantly, the relationship between mathematics attitude and mathematics achievement through spatial visualization was fully supported.
A mediating effect of spatial visualization was also found between motivation and mathematics achievement.It was possible that having high spatial visualization ability encourages students and motivates them to enhance academic achievement.This statement was supported indirectly by Wheatley (1992) where spatial abilities were accepted as crucial for high mathematical abilities.In addition, Lowrie and Kay (2001) asserted that students prefer to select visual methods to complete difficult mathematics problems.This finding could be interpreted as views of the important role of spatial visualization that motivate students and lead them to higher mathematics achievement.Hence, it can be concluded that creating students awareness on the importance of spatial visualization to their future undertakings would be one way of motivating students in their mathematics learning.
In terms of spatial visualization findings, the result indicated that spatial visualization has an important role in enhancing the students' motivation towards mathematics and heighten their achievement in mathematics subject.However, the relationship between the mathematics motivation and mathematics achievement through spatial visualization was only partially supported.
Furthermore, the result indicated that spatial visualization does not play an important role in enhancing student's ability to apply self-regulation strategies while doing mathematics tasks.In other words, the relationship between the self-regulation and mathematics achievement through spatial visualization was not supported.Learning spatial visualization topics normally require various learning techniques and do not rely on cognitive processing spatial information.Hence, the insignificant results of the mediating spatial visualization on the relationship between self-regulation and mathematics achievement maybe due to the fact that the participants may rely on other abilities but not spatial visualization for learning mathematics, and therefore, their decisions to use other abilities are based on personal preferences rather than intellectual ability.Furthermore, various interpretations of the definition of self-regulation have led to a multiplicity of instruments to measure the construct.Therefore, self-regulation measure used for this study may have led to the insignificant result for the mediating role of spatial visualization on the relationship between self-regulation and mathematics achievement.
In addition, the results showed no mediating effect between self-efficacy and math achievement.Students who believe that mathematics will be useful to them in the future develop a desire to learn mathematics and vice versa.Feelings are the emotional reactions that are felt by the students when they deal and face with spatial ability problem.Poor performance on spatial visualization level may directly affect perceptions of self-efficacy, especially in girls.Students who have the opportunity to improve their spatial-visualization skills demonstrated greater self-efficacy and are more likely to persist in their study.Girls were more apt to display under-confidence relative to their actual spatial visualization tasks and to attribute spatial visualization failure than were boys.Therefore, girls in this study showed less mean scores on spatial visualization and this may have led to the insignificant result.
Lastly, the results showed a mediating effect between math anxiety and math achievement.Since anxiety can create worries that are verbal in nature (Beilock et al., 2010), and some students use verbal strategies while others use spatial ability, and since male students perform better than girls on spatial visualization, the researcher assumed that the effect of mediating variable may be due to the fact that majority of the study sample are males so this may affect the result of mediating variable.In other words, students who rely heavily on verbal strategies to solve mental rotation problems should show the strongest relation between spatial visualization and anxiety and mathematics.To this end, female students have been reported to engage in more verbal strategies such as thinking of words for features and matching shapes based on those features, whereas males have been reported to engage in more spatial strategies such as mentally rotating one shape and comparing the resulting visual representation with another shape.In sum, the relationship between the mathematics anxiety and mathematics achievement through spatial visualization was fully supported.
Implications
In particular, one of the more interesting findings of this study is the demonstration of the presence of a novel empirical evidence of mediating effects that expounds the relationship between Mathematics Learning Strategy factors with mathematics achievement.According to Summers' (2001) classification, the empirical contributions comprises of 'testing a theoretical linkage between two constructs that has not been previously tested, examining the effect of a potential moderator or mediator variable on the nature of the relationship between to constructs, and testing a theoretical linkage between two constructs'.Furthermore, the study identifies the influence of spatial visualization as a mediator variable between the relationship of Mathematics Learning Strategy factors and mathematics achievement.In other words, empirically, spatial visualization is revealed in this study to partially mediate the linkage between Mathematics Learning Strategy and mathematics achievement.The reason behind this lies in the strength of the relationship between independent and dependent variables in students who demonstrate high spatial visualization.
Conclusion
The tested hypotheses set out to examine the mediating effect of spatial visualization on the relationship between Mathematics Learning Strategy factors and mathematics achievement.Results revealed that spatial visualization played a mediating role between mathematics attitude, mathematics motivation, math anxiety and mathematics achievement.On other hand, spatial visualization did not mediate the relationship between self-efficacy, self-regulation and mathematics achievement.Hence, the result of the hypotheses was partially supported.This study also suggests that the spatial visualization mediating effect between mathematics attitude, mathematics motivation and math anxiety and math achievement may be due to teaching styles employed by teachers at schools.The research findings could be justified and since literature has not provided any interpretation for the mediating role of spatial visualization with other variables, the findings of the present study contribute to literature by highlighting partial support for its mediating role in the relationship between Mathematics Learning Strategy factors and mathematics achievement.
Table 1 .
The result of hierarchical regression analysis using spatial visualization as a mediator in the relationship between mathematics attitude and mathematics achievement
Table 2 .
The result of hierarchical regression analysis using spatial visualization as a mediator in the relationship between mathematics motivation and mathematics achievement
Table 3 .
The result of hierarchical regression analysis using spatial visualization as a mediator in the relationship between mathematics self-regulation and mathematics achievement
Table 4 .
The result of hierarchical regression analysis using spatial visualization as a mediator in the relationship between mathematics self-efficacy and mathematics achievement
Table 5 .
The result of hierarchical regression analysis using spatial visualization as a mediator in the relationship between math anxiety and mathematics achievement | 6,267.8 | 2015-04-27T00:00:00.000 | [
"Education",
"Mathematics",
"Psychology"
] |
A Matrix Factorization Algorithm for Efficient Recommendations in Social Rating Networks Using Constrained Optimization
In recent years the emergence of social media has become more prominent than ever. Social networking has become the de facto tool used by people all around the world for information discovery. Consequently, the importance of recommendations in a social network setting has urgently emerged, but unfortunately, many methods that have been proposed in order to provide recommendations in social networks cannot produce scalable solutions, and in many cases are complex and difficult to replicate unless the source code of their implementation has been made publicly available. However, as the user base of social networks continues to grow, the demand for developing more efficient social network-based recommendation approaches will continue to grow as well. In this paper, following proven optimization techniques from the domain of machine learning with constrained optimization, and modifying them accordingly in order to take into account the social network information, we propose a matrix factorization algorithm that improves on previously proposed related approaches in terms of convergence speed, recommendation accuracy and performance on cold start users. The proposed algorithm can be implemented easily, and thus used more frequently in social recommendation setups. Our claims are validated by experiments on two real life data sets, the public domain Epinions.com dataset and a much larger dataset crawled from Flixster.com.
Introduction
Matrix factorization in collaborative filtering recommender systems is usually performed by unconstrained gradient descent for learning the feature components of the user and item factor matrices [1]. This is essentially a "black box" approach, where apart from the minimization of an objective function (usually the Root Mean Squared Error (RMSE) over the known ratings), generally no other information or knowledge is taken into account during the factorization process. However this approach alone cannot solve efficiently a majority of recommendation problems. The most notable example is the Netflix Prize problem, which dealt with a very sparse, high-dimensional dataset with more than 100 million training patterns. The winning solutions of the "Bellkor's Pragmatic Chaos" team [2][3][4], made apparent that a huge number of latent factor models, clustering, and K-nearest neighbors approaches was necessary to be combined (both linearly and non-linearly) in order to provide an accurate final solution.
In earlier work where our focus was on training feedforward neural networks [5] we showed that, during training, it is useful to incorporate additional "incremental conditions", i.e., conditions involving quantities that must be optimized incrementally at each epoch of the learning process (by the term "epoch" we denote a training cycle of presenting the entire training set). We thus formulated a general problem, whereby we sought the minimization of the objective function representing the distance of the network's outputs from preset target values, subject to other constraints that represented the additional knowledge. We then demonstrated how to formulate the general problem of incorporating the additional knowledge as a constrained optimization task, whose solution lead to a powerful generic learning algorithm accounting for both target and incremental conditions. Advancing that work, in a recent study [6], and similarly to other proposed methods [7], we cast the factorization of the user-by-item ratings matrix as a feedforward neural network training problem, and thus concentrated on the development of constrained optimization methods that could lead to efficient factorization algorithms. In that study, we introduced a general constrained matrix factorization framework that we refer to as FALCON (Factorization Algorithms for Learning with Constrained OptimizatioN), and presented two examples of algorithms which can be derived from that framework, that incorporate additional knowledge about learning the factor matrices. The first example (FALCON-M), incorporated an extra condition that seeks to facilitate factor updates in long narrow valleys of the objective function landscape, thus avoiding getting trapped in suboptimal solutions. The second example (FALCON-R), considered the importance of regularization on the success of the factorization models and adapted the regularization parameter automatically while training.
In a social rating network, recommendations for a user can be produced on the basis of the ratings of the users that have direct or indirect social relations with the given user. This approach is supported by sociological models [8], and their verification due to the increasing availability of online social network data [9]. The models propose that people tend to relate to other people with similar attributes, and due to the effects of social influence, related people in a social network, in turn, influence each other to become even more similar. Thorough surveys summarizing the various approaches proposed for social recommender systems in general, can be found in [10,11].
In this paper, we exploit this information within the FALCON framework and propose a matrix factorization algorithm for recommendation in social rating networks, called SocialFALCON. Here the additional information is incorporated as an extra condition that, during training, seeks to render the latent feature vector of each user as close as possible to the weighted average of the latent feature vectors of his direct neighbors, without compromising the need for decreasing the objective function. By achieving maximum possible alignment between the successive feature vectors updates of each user to that of his direct neighbors, the latent features of users indirectly connected in the social network also become dependent and hence social network influence gets propagated during training. An important benefit from this approach also arises for cold start users, that is, for users who have expressed only a few ratings. These users are more dependent on the propagation of social infuence compared to users with more ratings, and thus the SocialFALCON model, through the enforcement of the constraint that their latent features is close to those of their neighbors, can learn user feature vectors for this challenging category of users as well.
To evaluate the effectiveness of our approach in terms of prediction accuracy and computational efficiency, we present experimental results to compare its performance with the following baseline and state of the art algorithms for matrix factorization in recommender systems:
•
Regularized SVD (RegSVD): This method is the baseline matrix factorization approach which does not take into account the social network [12]. We compare our approach against this method in order to evaluate the improvement in performance induced by the social network information.
•
Probabilistic Matrix Factorization (PMF): This is an extension of the baseline matrix factorization which bounds the range of predictions by introducing non-linearities in the prediction rule [13]. Similarly to RegSVD we compare our approach against this method in order to evaluate the importance of utilizing social network information. • SVD++: This is also a matrix factorization approach which also does not take into account any social network information [14]. However SVD++ takes into account additional information in the factorization model in the form of implicit feedback for each user's item preference history. We compare our approach against this method since it achieves state of the art performance in recommender system tasks against which other algotihms should be compared, and because it provides a paradigm for extending the baseline matrix factorization approach with additional information.
•
SocialMF: This is a very closely related algorithm to our proposed approach but utilizes the social network information in a different way in the factorization model [15].
Detailed derivations and descriptions of the above algorithms are presented in the next sections. To elucidate on the transparency of the results, we have implemented each algorithm from scratch in low-level C with no dependencies on additional libraries, and we've made the source codes publicly available with documentation and additional supporting materials from Github as described in Section 8.
Due to the lack of publicly available social rating network datasets, the performance of all algorithms was evaluated on the two standard social rating network datasets utilized and introduced respectively in [15], namely the existing Epinions dataset and a large scale dataset that was gained from Flixster.com. Again for transparency reasons in the reported results, we've made publicly available all the training/validation cross-validation splits that we utilized in our experiments as described in Section 7.
The rest of this paper is organized as follows: The problem definition of matrix factorization in recommender systems is presented in Section 2. SVD++, which we consider in our benchmarks as a state of the art extension of the basic matrix factorization model, and which utilizes implicit rating information, is discussed in Section 3. Section 4 discusses the recommendation problem in a social rating network setting and describes SocialMF which is the most closely related method to our approach. The general constrained optimization FALCON framework is outlined in Section 5. In Section 6 we derive the SocialFALCON algorithm from the FALCON framework. In the same section we also discuss some desirable properties of the algorithm, and its advantages over the SocialMF approach, especially with regards to computational complexity. The real life data sets used in our experiments are described in Section 7, and the experiments are reported in Section 8. Finally, in Section 9 conclusions are drawn and we outline some interesting directions for future work.
Matrix Factorization for Recommender Systems
In its basic form, matrix factorization for recommender systems describes both users and items by vectors of (latent) factors which are inferred from the user-by-item rating matrix. High correlation between user and item factors leads to a recommendation of an item to a particular user [16].
The idea behind matrix factorization is very simple. We define the corresponding matrix factorizationR K of the user-by-item ratings matrix R ∈ R N×M aŝ where U ∈ R K×N and V ∈ R K×M , in order to approximate all the unknown elements of R.
For the recommendation problem, R has many unknown elements, which cannot be treated as zero, and the application of formal Singular Value Decomposition (SVD), e.g., by Lanczos' method [17] would provide erroneous results. Thus, for this case, the approximation task can be defined as follows: Letr ui denote how the u-th user would rate the i-th item, according to the factorization model, and e ui denote the error on the (u, i)-th known rating example (training pattern): and e ui = r ui −r ui .
In order to minimize this error (and consequently the error over all training patterns) we can apply the stochastic version of the gradient descent method on each (1/2) · e 2 ui to find a local minimum. Hence the elements of U and V can be updated as follows: where η is the learning rate.
To better generalize on unseen examples, we can apply regularization with factors λ u and λ v for the user and item factors respectively, in order to prevent large weights: Thus, Equation (5) are the user and item factors updates when minimizing the following objective function by performing unconstrained stochastic gradient descent on U u and V i for all users u and all items i: In the above equation I ui is an indicator function that is equal to 1 if user u rated item i and equal to 0 otherwise, and "||.||" denotes the Euclidean norm. In this linear model the predictions are usually clipped to the [min r , max r ] range appearing in the ratings matrix R.
It is also customary to adjust the predicted rating by accounting for user and item biases, that is, for systematic tendencies for some users to give higher ratings than others, and for some items to receive higher ratings than others. We can encapsulate these effects using baseline estimates [14] as follows: Denote by µ the overall average rating. A baseline estimate for an unknown ratingr ui is denoted by b ui and accounts for the user and item effects: The vectors β ∈ R N and γ ∈ R M contain the biases for all users and items respectively, and the elements β u and γ i indicate the biases of user u and item i respectively. These can be also learned by gradient descent if we plugin into Equation (6) the baseline corrected expression forr ui as: In this case the updates for β u and γ i are given by: where λ β and λ γ are regularization parameters for the user and item biases respectively. This approach is usually refered to in the literature as Regularized SVD (RegSVD) [12]. Instead of using a simple linear model, which can make predictions outside of the range of valid rating values, the dot product between user and item specific feature vectors can be passed through the logistic function (sigmoid) s(z) = 1/(1 + exp(−z)), which bounds the range of predictions within [0, 1]. In this case the objective function is modified as follows: where and g(z) = (max r − min r ) * s(z) + min r . This model is usually refered to as Probabilistic Matrix Factorization (PMF) [13]. Usually the learning rate and the regularization values for each latent factor are determined by a search for optimal values that optimize the performance on a withheld validation set. There are various approaches on how this search is performed, but the most common approach is by grid search [18], where a predefined grid of candidates is set up and on each candidate a model is trained and evaluated on the validation set. It is obvious that such a scheme is very expensive in terms of runtime as many different models have to be trained and then evaluated, neverthess it is the dominating method for parameter selection in matrix factorization models.
SVD++
SVD++ [14] is a state of the art extension to the RegSVD model by taking into account implicit information. The most natural choice for implicit feedback is the item preference history, which tells us about the items for which the user has expressed a preference (either explicit or implicit). Thus, the idea is to add a second set of item factors, relating each item i to a factor vector Y i ∈ R K , and model the factor vector of each user as a function of all factor vectors Y i , for which the user has expressed a preference.
Following [12,14], SVD++ gives the predicted rating of user u on item i as follows: where b ui is given by Equation (7) and the set R u contains the items rated by user u (for which we may or may not know the exact explicit rating). For example, in the Netflix challenge there was extra information in the qualifying set that users had rated certain movies but the actual value of each rating was withheld in order to be predicted by the contestants. Therefore, in order to predict an unknown ratingr ui , as in (8), SVD++ maintains a vector of factors U u for each user but this vector is also complemented by the sum |R u | − 1 2 ∑ j∈R u Y j that represents the combination of explicit and implicit feedback.
Plugging in Equation (12) into (6) and taking into account regularization parameters λ β and λ γ for the user and item biases as in Equation (9), we can learn U u , V i , and Y i by gradient descent updates as follows: where λ y is the regularization parameter for the items for which the user has expressed a preference.
SocialMF
In a social rating network, each user u also has a set N u of other users (direct neighbors) with which she has a connection. We denote by T uv the value of social "trust" [15] that user u has on user v as a real number in [0, 1]. A value of 0 means that there is no trust and a value of 1 means full trust from u to v. In practice, in most social networks the trust values are binary with T uv = 0 meaning that user u does not "follow" user v, and T uv = 1 meaning that u "follows" v. The trust values can be stored in a matrix T = [T uv ] NxN , where usually each row is normalized so that ∑ N u=1 T uv = 1. Note that the matrix T is asymmetric in general.
SocialMF [15] is a probabilistic model-based solution which incorporates the propagation of trust in the model, and which has been shown to improve both in terms of the quality of recommendations and speed over previously proposed approaches such as the Social Trust Ensemble (STE) proposed in [19]. The main idea behind SocialMF is that the latent feature vectors of a user should be made dependent on the feature vectors of his direct neighbors in the social network. Using this idea, latent features of users indirectly connected in the social network will be also dependent and hence the trust gets propagated. Thus, latent factors for all users and items are learned jointly from the ratings and the social graph. SocialMF therefore models the users' feature vectors as: whereÛ u is the estimated latent feature vector of u given the feature vectors of his direct neighbors.
To this end, SocialMF extends the basic PMF model by considering the following objective function: where g(U T u V i ) is the predicted ratingr ui of user u on item i. As in PMF, λ u , λ v and λ t are regularization parameters and I ui is an indicator function that is equal to 1 if user u rated item i and equal to 0 otherwise.
We can find a local minimum of the objective function in Equation (17) by performing gradient descent on U u and V i for all users u and all items i: Note that the derivative of the objective function with respect to the item feature vectors V i is simply the same to that of the PMF model.
A variation of SocialMF has been proposed in [20] in which the authors train separate matrix factorization models for each category c that the items belong to, and for which a user u has issued a trust statement towards v, given that u and v simultaneously have ratings in a category c.
Overview of the Falcon Framework
Based on the discussion in the previous sections, it is clear that in all approaches matrix factorization in recommender systems involves minimization by unconstrained gradient descent of a suitably chosen objective function with respect to the user and item features and biases. In our approach we suppose that there are additional relations to be satisfied, that represent additional knowledge we wish to incorporate into the learning mechanism, and involve all the available features U u , V i , β u and γ i (u = 1 · · · N and i = 1 · · · M).
For the rest of our discussion it is convenient to group all the features into a single column vector w (which we will refer to as the "weight" vector) as follows: Before introducing the form of the additional relations, we note that in this work we'll concentrate on the minimization of the Mean Squared Error (MSE) objective function given by: with the quantities involved explained in Section 2. We will also adopt an epoch-by-epoch (i.e., batch) optimization framework with the following objectives: 1. At each epoch of the learning process, the vector w will be incremented by dw, so that the search for an optimum new point in the space of w is restricted to a hypersphere of known radius δP centered at the point defined by the current w 2. At each epoch, the objective function L must be decremented by a quantity δQ, so that, at the end of learning, L is rendered as small as possible. To first order, we can substitute the change in L by its first differential and demand that dL = δQ.
Within the FALCON framework originally introduced in [6], we can define a certain quantity Φ that we also wish to incrementally maximize at each epoch subject to the objectives defined above. Consequently the learning rule can be derived by solving the following constrained optimization problem: This constrained optimization problem can be solved analytically by a method similar to the constrained gradient ascent technique introduced in optimal control in [21], and leads to a generic update rule for w as follows: First, we introduce suitable Lagrange multipliers λ 1 and λ 2 to take into account Equations (22) and (23) respectively. If δP is small enough, the changes to Φ induced by changes in w can be approximated by the first differential dΦ. Thus, secondly, we introduce the function φ, whose differential is defined as On evaluating the differentials involved in the right hand side, we readily obtain where G and F are given by To maximize dφ at each epoch, we demand that and Hence, the factor multiplying d 2 w in Equation (27) should vanish, and therefore we obtain Equation (29) constitutes the weight update rule, provided that λ 1 and λ 2 can be evaluated in terms of known quantities. This can be done as follows: From Equations (22), (26) and (27) we obtain with I GG and I GF given by It remains to evaluate λ 2 . To this end, we substitute (29) into (23) to obtain where I FF is given by Finally, we substitute (30) into (32) and solve for λ 2 to obtain Note that the positive square root value has been chosen for λ 2 in order to satisfy Equation (28).
Let us now discuss our choice for δQ. This choice is dictated by the demand that the quantity under the square root in Equation (34) should be positive. It is easy to show that the term I FF I GG − I 2 GF is always positive by the Cauchy-Schwarz inequality [5]. Now, since I GG = ||G|| 2 ≥ 0, it follows that care must be taken to ensure that I GG (δP) 2 > (δQ) 2 . The simplest way to achieve this is to select δQ adaptively by setting δQ = −ξδP √ I GG with 0 < ξ < 1. Consequently, the proposed generic weight update algorithm has two free parameters, namely δP and ξ.
The SocialFALCON Algorithm
The most important aspect that we will now discuss is the definition of the quantity Φ that we wish to incrementally maximize at each epoch. Rather than adopting the approach of SocialMF which imposes a target condition on the latent feature vector of a user as the weighted average of the latent feature vectors of his direct neighbors, we will adopt and epoch-by-epoch approach so as to incrementally maximize at each epoch the alignment of the user's feature vector update to the weighted average of the feature vectors updates of his direct neighbors at the immediately preceding epoch.
We thus introduce the quantity: where dU u t is the user u feature vector update at the present epoch and ∑ v∈N u T uv dU v t−1 is the weighted average of the latent feature vector updates of his direct neighbors v at the immediately preceding epoch. Similarly, dβ u t is the update of the bias of user u at the current epoch and ∑ v∈N u T uv dβ v t−1 is the weighted average of the updates of the biases of the user's direct neighbors at the immediately preceding epoch. Since within our constrained learning framework the whole weight vector w updates have constant moduli equal to δP (by Equation (22)), maximization of Φ amounts to minimization of the angle between each user's feature vector updates at the present epoch and the weighted average of the latent feature vector updates of his direct neighbors preceding that epoch. Hence, for the factorization problem, at each epoch of the learning process we can restore from the weight vector w the user feature vectors U u using Equation (20), and update them according to Equations (29) and (26) as: Similarly the user biases will be updated by: As far as the updates of the item feature vectors V i and item biases γ i are concerned, there are two possible alternatives. The first is to simply update them by: since there are no dependencies of Φ as defined in Equation (35) from any V i or γ i . An even more interesting option for updating the item feature vectors and biases is to supplement the quantity Φ with a term involving an incremental maximization target that we seek to achieve for each V i and γ i on an epoch-by-epoch basis. To this end, we adopt the approach that we have proposed in algorithm FALCON-M as described in [6]. According to that approach, the quantity that we wish to maximize is the alignment between the item feature vector updates at the present and immediately preceding epoch. Due to the constant moduli of the updates of the whole vector w (Equation (22), this also amounts to minimization of the angle between the vectors of the successive weight updates for each V i , and thus can suppress zig-zagging and allow learning to proceed along relatively smooth paths. In a sense this is equivalent to adding a momentum term in the updates of each V i (and γ i ), with the important difference that both the coefficients of the gradient (i.e., learning rate) and momentum are suitably adapted at each epoch of the learning process by the constained update rule of Equation (29).
Note that for this approach to work we should redefine Φ as: where dV i t and dV i t−1 are the item i feature vector updates at the present and immediately preceding epoch respectively (similarly for the item i bias updates dγ i t and dγ i t−1 ). Again we can restore from the weight vector w the item feature vectors V i using Equation (20), and update them according to Equations (29) and (26) as: Similarly the updates for the item biases will be given by: Equations (36), (40), (37), and (41) thus constitute our proposed SocialFALCON algorithm for updating the user features, item features, user biases, and item biases respectively.
Desirable Properties of SocialFALCON
As is the case with SocialMF, the SocialFALCON model makes the feature vector of each user to be dependent on the feature vectors of his direct neighbors. Since those neighbors are in turn connected to other users, recursively, indirect connections in the social network propagate their social infuence across the entire network. In addition, even for users that have expressed no ratings (cold start users) but have social connections, there still remains a social update term in their factor update rule, which means that latent features of those users will at least adapt towards those of their neighbors. Therefore, despite not having any expressed ratings, feature vectors for these users will be learned as well.
The SocialFALCON model has three further desirable properties. First, despite its seemingly complex derivation, the factor update rule for both users and items (and their biases) turns out to be quite simple since it only constitutes from a term proportional to the gradient of the objective function of Equation (21), and an extra term proportional to the gradient of Φ as given by Equation (39) with respect to each feature vector. Both of these terms are quite easy to compute as it can be seen from Equations (36), (37), (40) and (41). For the case of user factors and their biases, the extra term is a social term adapting the update towards the weighted average of the factors' updates of the direct neighbors at the immediately preceding epoch, whereas for the item factors and biases the extra term acts as momentum.
Second, all the factor update rules can also be viewed as standard gradient descent with regularization. This is enforced by the hard constraint of Equation (22) that restricts the norm of the weight vector update dw to a hypersphere (which means that the compoments of the vector w cannot grow in an uncontrollable fashion), and can be more easily seen if we rewrite, for example, Equation (40) as: The same argument applies, by expansion of the weighted average in the social term, to user factor updates. An important benefit here is that both the coefficient multiplying the gradient as well as the reguralization coefficient are automatically adapted at each epoch by the current values of λ 1 and λ 2 .
Finally, the algorithm has only two free parameters, namely δP and ξ which, as we'll discuss in the experimental results section, are not very sensitive to the exact setting of their values.
Complexity Analysis of SocialFALCON
The main overhead to complete an epoch is in evaluating the gradient of the objective function with respect to the feature vectors of users and items. As it can been seen from Equation (21), the number of operations per feature vector needed to complete this evaluation is proportional to the number of rating examples in the training set. In addition we need to calculate the gradient of Φ as given by Equation (39) with respect to the feature vectors. Following the notation used in [15], we assume that the average number of ratings per user isr, and the average number of direct neighbors per user ist. As reported in the same paper, the computational complexity of computing the gradients of SocialMF's objective function L with respect to the number of users in the social rating network is O(NrK + Nt 2 K). Thet 2 factor is justified if we take a closer look at Equation (18) which computes the derivative of Equation (17) with respect to each user feature vector U u so as to update it by gradient descent. In order to evaluate this derivative, for each user u, one needs to take the following steps: (a) Find his direct neighbors and aggregate their feature vectors, (b) find the users v in the social network to which u is a direct neighbor, and (c) ∀v find their direct neighbors w and aggregate their feature vectors.
In the SocialFALCON model the updates of the user feature vectors are given by Equation (36). As it can be readily seen from that equation, the evaluation of its second term only requires step (a) as described above. This makes the total omputational complexity of evaluating the gradients of (21) and (39) with respect to the number of users O(NrK + NtK). Therefore SocialFALCON isr +t 2 r+t times faster than SocialMF in computing the gradient in each training epoch as it scales linearly with the number of user connections. Since usually the rating matrix R and trust matrix T are very sparse,t and r are relatively small, and therefore both SocialMF and SocialFALCON scale linearly with respect to the total number of users in the social rating network. However in a large social network where the number of average number of direct neighbors per user is large, the speedup factor provided by SocialFALCON becomes profound and crucially important in the ability to effectively train a recommender.
We should note here that once the gradient has been evaluated, a relatively small number of additional operations as given by Equations (31), (33), (34) and (30) (which is independent of the number of training examples) is needed to complete the update. In addition the updates of all user and item factors at the immediately preceding epoch should be stored since they are utilized by the SocialFALCON update rule. This additional computational burden however is very small compared to the calculation of the gradient since it just involves the evaluation of three inner vector products as given by Equations (31) and (33). The sizes of these vectors are equal to the size of vector w (Equation (20)), thus the number of operations involved is (K + 1) * (N + M). This is confirmed by the actual CPU times measured in our experimental results.
Datasets
The availability of publicly available social rating network datasets is extremely limited, presumably due to the sensitive nature of social network data and the proprietary rights of social networks. To the best of our knowledge there are only very few public social rating network datasets: The Epinions.com dataset, and a dataset that was crawled from Flixster.com and has been made available by the authors of [15].
The Epinions dataset (http://www.trustlet.org/wiki/Downloaded_Epinions_dataset) that we used has 49,289 users, 664,824 ratings and 487,183 connections between users. Possible rating values are discrete integers in the range (1,5). The average number of ratings per user is 16.5 and each user has on average 14.3 direct neighbors. The social relations in the dataset are directed.
The Flixster dataset (http://www.cs.sfu.ca/$\sim$sja25/personal/datasets) dataset has 787,213 users, 8,196,077 ratings and 7,058,819 relations between users. Each user, on average, has rated 55.5 items and has 48.6 social relations. Even though the dataset was crawled by [15] as a directed network, the social relations are undirected. This means that either duplicates should be removed or be handled appropriately in the code (we opted for the later choice). Possible rating values are ten discrete numbers in the range (0.5,5), and in our implementation we scaled rating values by 2 to be integers in the range (1,10).
In order to evaluate the performance of all algorithms considered in the next section, we used five-fold cross-validation (CV). We used stratified sampling on user ratings so that, in each fold, 80% of the ratings of each user were chosen randomly and used for training and the remaining 20% of user's ratings were used for evaluation. Due to this sampling scheme, users with only one rating in the dataset were inevitably excluded from the evaluation set. We have made publicly available (http://labs.fme.aegean.gr/ideal/socialfalcon-datasets/) both the database schema as well as the five-fold CV training/validation splits that we've used in the experiments.
Experimental Results
We compared the proposed SocialFALCON algorithm with four other factorization models: RegSVD and PMF as described in Section 2, SVD++ as described in Section 3 and SocialMF as described in Section 4. All algorithms were implemented in low-level C and were compiled with the GNU gcc 4.6.3 compiler. We should point out that the codebase for RegSVD, PMF and SVD++ has been carefully developed both in terms of minimizing their runtime and increasing their prediction accuracy, due to their prior utilization by one of the authors as member in "The Ensemble" team which was a runner up for the Netflix Prize competition (http://netflixprize.com/leaderboard). For the code implementation of SVD++, it should be noted that a naive implementation of gradient descent is very ineffective, because if the implicit items are updated at each training example then the training process will be very slow. In our implementation we have used the "looping trick" proposed in [22] for NSVD1, which is a predecessor of SVD++ originally introduced by [12]. This modification produces exactly the same results as the naive implementation of gradient descent, however it makes a significant difference in running time, especially for large scale problems.
To the best of our knowledge the authors of [15] have not provided any source code implementation of SocialMF. The only known publicly available code implementation of SocialMF is within MyMediaLite (http://www.mymedialite.net/), which is a lightweight, multi-purpose library of recommender system algorithms. An important bug fix to that code implementation has also been provided by one of the authors of the present paper (http://mymedialite.net/download/Changes), which was contributed while developing our own implementation. Nonetheless, the codes for all algorithms presented in this section have been rewritten from scratch and have been made publicly available from Github (https://github.com/namp/SocialFALCON). The experiments were run on a machine with an Intel Core2 Quad CPU (Q9400 @2.66GHz) with 4GB RAM, running Ubuntu 12.04 LTS.
The prediction error was measured in RMSE on the evaluation set of each split defined as: where |E| is the size of the evaluation set E, r ui is the actual rating andr ui is the prediction. Table 1 summarises the RMSE of RegSVD, PMF, SVD++, SocialMF and the proposed SocialFALCON algorithm on the Epinions and Flixster datasets. We report results for two choices of latent dimensions, specifically K = 5 and K = 10 as was also reported in [15], where it was shown that increasing K in Epinions did not improve the results, while increasing K in Flixster improved the results slightly. For all algorithms, each RMSE result is reported along with two numbers: The average number of epochs required to achieve it and the average time (in seconds) spent at each epoch. The RMSE numbers reported are the average of the five-fold CV splits. Note that for the Flixster dataset the average reported RMSE for all algorithms has been divided by 2 in order to compensate for reporting the error on the original rating values in the range (0.5,5). For fairness in the comparison, for all algorithms, each run was initialized from a common random seed, and their relevant parameters were carefully adjusted to achieve the best possible performance. These parameters were found by utilising grid search and are reported in Appendix A. For the Epinions dataset, as highlighted in bold, SocialFALCON improves the RMSE of RegSVD, PMF, SVD++, and SocialMF on both choices of latent dimensions. In both cases, we notice that the RMSE of SocialFALCON is 0.6% lower compared to that of RegSVD, 0.62% lower than the RMSE of PMF, and 0.55% lower compared to the RMSE of SVD++. Empirical data on recommender systems research have shown that such levels of improvement are indeed significant [23]. For example during the Netflix Prize competition, yearly progress prizes of $50,000 US were offered for the best improvement of at least 1% over the previous year's result [24]. The closely related SocialMF method seems to perform only marginally better than RegSVD and PMF, and marginally worse than SVD++. Naturally, as explained in the beginning of this section, the codes for RegSVD, PMF, and SVD++ have been very carefully implemented due to their utilization in the Netflix prize and thus their performance is expected to be quite good. However the same argument is also true for the code implementation of SocialMF. In addition all RMSEs are reported with optimal parameters found by grid search, and to this end, it remains unclear why we have not been able to confirm in our evaluation the much improved performance of SocialMF over PMF on the Epinions dataset as reported in [15]. A possible explanation for this is that in their experiments they used a different version of the Epinions dataset and, of course, our CV splits are different. Another explanation for the poor performance they reported for PMF is perhaps due to poor code implementation. Still, the results that we obtained by SocialMF for the Epinions dataset are better than those reported on their paper.
For the much larger Flixster dataset, for both choices of latent dimensions, SocialFALCON outperforms PMF and SocialMF, and performs as good as RegSVD (which apparently performed remarkably well on this task considering its model simplicity). SVD++ performs marginally better than SocialFALCON and RegSVD, which can be attributed to the large number of implicit ratings in the evaluation sets. In general, the RMSE results reported for all algorithms are better than the results for Epinions. An explanation for this has been given in [15] by noticing that the items in Epinions are heterogeneous (belonging to categories such as DVD players, cameras, printers, laptops, etc.), while the items in Flixster are from a single caterory, namely movies. Possibly this makes the rating signals quite more accurate than those of Epinions, and thus makes the factorization task easier in general. Despite the large number of social connections in the dataset, apparently SocialMF provided the worst RMSE score among all the algorithms tested. In this case also we have been unable to verify the claims of significant RMSE improvement over PMF for the Flixster dataset and, in general, the SocialMF's results reported in [15]. It is also interesting to note how much SocialFALCON appears to improve the RMSE over SocialMF, which is 3.4% for K = 5 and 3.15% for K = 10.
Overall, the proposed SocialFALCON algorithm performs well and it is competitive to state of the art matrix factorisation methods. More important however, is that it can effectively handle large datasets. This is supported by the other two numbers, apart from the RMSE, that are shown in each cell of Table 1. SocialFALCON and SocialMF are batch methods, and hence they require a much larger number of epochs than stochastic algorithms such as RegSVD, PMF and SVD++. The reason is that batch methods only have one chance to update the factors after each presentation of the whole training set, whereas stochastic methods make thousands of updates in one epoch, up to the N × M size of the user-by-item ratings matrix. This explains the significant difference, for example, between PMF and SocialFALCON or SocialMF in the average number of epochs to achieve the reported RMSE score. We should mention that SocialFALCON's average number of epochs, as reported for the various tasks in Table 1, are higher than should be needed in practice. This is because its best scores were reported for small stepsizes δP discovered by the grid search, as shown in Appendix A, even thought quite comparable RMSE scores where attained with larger stepsizes and hence fewer epochs. Similar performances were recorded with 0.5 < δP < 2.0 and 0.7 < ξ < 0.9, indicating that results are not very sensitive to the exact values of the parameters. Following the example of [25], we performed additional runs of SocialFALCON using common values (δP = 1.0) and (ξ = 0.85) on the two datasets. Deterioration of the RMSE compared to the scores reported with the optimal parameters shown in Table 1, was in both cases never more than 2%, even though for the case of Flixster there was more than 30% reduction on the required number of epochs. The larger scale problem was more sensitive to the selection of δP, whereas it was not so sensitive to the selection of ξ (for which a value around 0.85 worked well in both cases).
The third number shown in each cell of Table 1, reports the average time required to complete an epoch and is quite informative as it is a measure of the computational complexity of each algorithm. The numbers show that the complexity/epoch of the SocialFALCON algorithm is not significantly higher than that of RegSVD and PMF and even SVD++. This is true in spite of the fact that our implementation of SVD++ is very efficient as we have utilised the "looping-trick" mentioned in the beginning of this section, which significantly reduces SVD++'s computational burden. Note also that according to our discussion in Section 6.2, SocialFALCON should theoretically ber +t 2 r+t times faster than SocialMF. Since the Flixster data set is denser than the Epinions data set, the improvement over runtime efficiency for Flixster should be more prominent than that for Epinions. As we can see from Table 1, indeed SocialMF is much slower than SocialFALCON. The epoch of SocialFALCON is 2.9 times faster than that of SocialMF for Epinions and 14.5 times faster for Flixster.
Cold Start Users
Based on the findings of the previous section where, in general, the performance of SocialFALCON compared to SocialMF was significantly better, we also investigated the performance of these two closely related methods only on cold start users. As reported in [15], in both Flixster and Epinions more than 50% of users are cold start users and thus the efficiency of recommendations on this challenging class of users becomes very important. As cold start users we define users that have less than five ratings in each dataset and at least two, since users with only one rating were not included in the evaluation set of each fold. To this end, we created a subset of the evaluation set of each CV fold which contained only cold start users. Table 2 summarises the RMSE of SocialMF and the proposed SocialFALCON algorithm for cold start users. For the Epinions dataset, as highlighted in bold, SocialFALCON improves the RMSE of SocialMF on both choices of latent dimensions. For the Flixster dataset, in both cases (especially for K = 10), SocialMF performs marginally better than SocialFALCON. On this dataset which has denser social relations, this can be attributed to the hard preset target distance metric incorporated into the SocialMF factorization model that, at the end of learning, the latent feature vector of each user is rendered equal to the weighted average of the latent feature vectors of his direct neighbors. On the other hand SocialFALCON seeks to minimize that distance incrementally by aligning the corresponding updates so as to render the latent feature vector of each user as close as possible to the weighted average of the latent feature vectors of his direct neighbors at the end of learning. Thus, for cold start users, social connections alone, might play a slightly more dominant role for providing recommendations in SocialMF than in SocialFALCON, which comes of course with a higher computational cost as was discuseed in the previous sections.
Conclusions
In this paper, we proposed an efficient constrained matrix factorization algorithm called SocialFALCON, for providing recommendations in social rating networks. The algorithm derives from the FALCON generic constrained matrix factorization which has been previously proposed by authors of this paper. The FALCON framework allows the incorporation of additional knowledge into the learning mechanism for determining the user and item factor matrices. In the case of SocialFALCON, this additional knowledge is embedded into mathematical constraints that, during learning, drive the feature vector of each user to be dependent on the feature vectors of his direct neighbors in the social network. Similarly to related proposed approaches, this allows the propagation of social influence within the factorization model, which has been shown to be an important factor in the social sciences, in social network analysis and in trust-based recommendations. The propagation of social infuence also allows for providing recommendations to cold start users that have not expressed many ratings, since their feature vectors can be learned through their social relations. However, unlike similar approaches the proposed algorithm has reduced computational complexity, can be implemented easily, and thus be utilized more frequently in social recommendation setups.
Experimental results on two publicly available datasets showed that the algorithm improves on baseline and state of the art factorization methods as well as on previously proposed related approaches in terms of convergence speed and recommendation accuracy. To elucidate on the transparency of our results, we have made publicly available the source code of the proposed algorithm, as well as the codes of the algorithms against which it was compared, along with the datasets used in the experiments.
One of the most attractive features of this algorithm is its potential for suggesting several interesting directions for further improvements. In the same framework for constrained matrix factorization, it is possible to augment it with further information about learning in social rating networks, including methods for embedding graph theory measures and indices into the social trust values and incorporating implicit direct neighbor ratings as utilized in SVD++. It is the concerted incorporation of such detailed information into the same algorithm that will hopefully lead to increasingly efficient matrix factorization training schemes combining fast learning, good scalability properties, and powerful generalization capabilities on predicted ratings for unseen items.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Algorithms' Parameters
In this section we list the learning parameters (discovered by grid search) with which the algorithms achieved their best performance on each dataset, as presented in the results tables (Table 1 for all users and Table 2 for cold start users only).
In order to boost the performance of RegSVD, PMF, SVD++ and SocialMF we used different learning rates for the user features and biases (η u and η bu respectively) and for the item features and biases (η v and η bv respectively). SVD++ also requires two further parameters for the training of the implicit item factors, namely η y and λ y which are the learning rate and regularisation parameter respectively.
SocialFALCON has only two free parameters, namely δP and ξ. | 11,243.4 | 2019-08-11T00:00:00.000 | [
"Computer Science",
"Mathematics"
] |
Coarsening Behavior of Particles in Fe-O-Al-Ca Melts
The characteristics of particles greatly affect the microstructure and performance of metallic materials, especially their sizes. To provide insight into coarsening phenomena of particles in metallic melts, Fe-O-Al-Ca melt with calcium aluminate particles was selected as a model system. This study uses HT-CSLM, SEM detections and stereological analysis to probe the behavior of particles and their characteristics including size, number density, volume fraction, spreading of particle size, inter-surface distance and distribution of particles. Based on the experimental evidence and calculation of collision, we demonstrate that the coarsening of inclusion particles is not only dependent on the Ostwald growth as studied in previous study, but also on the particle coagulation, and floatation. The collision of particles affects the maximum size of the particles during whole deoxidation process and dominates the coarsening of particles at later stage of deoxidation under the condition without external stirring in Fe-O-Al-Ca melts. The factors influencing collision behaviors and floating properties were also analyzed, which is corresponding to coarsening behavior and change of particle characteristic in the melts with different amounts of Ca addition. Such coarsening mechanism may also be useful in predicting the size of particles in other metallic materials.
temperature toughness and hydrogen induced crack of steel. One of the key factors for decreasing the side effects of particles, or determining in the pining effect of particles or their ability on serving as cores of precipitated phase nucleation is the particle size.
The formation of particles starts with nucleation which plays an important role in determining the structure, shape and size distribution of the particles. Suito and Ohta et al. 32 found that the initial size distribution of particles became narrow in the case of high nucleation rate, which they thought was facilitating to obtain fine particles. After that, the particles are deemed to grow and coarsen by the following steps: the diffusion of reactants to the oxide nuclei, Ostwald ripening 33 , collision and subsequent coagulation in liquid metal. Lindberg et al. 34 reported that the time for attending the 90% of the equilibrium value of particle volume is 0.2 s. Suito and Ohta et al. 35 investigated that the growth of particles by diffusion is very fast. In their study, Ostwald ripening dominates the growth of particles in deoxidation process under no fluid flow. However, in our previous study 36 , the experimental evidence for particles size distribution in Fe-O-Al-Ca melt corresponds to the theoretical results based on Ostwald ripening at early stage of deoxidation but not at later stage. Therefore, it is necessary to study the change of particle size distribution in liquid metal affected by the collision and subsequent coagulation. Collisions between particles and rapid diffusion in the liquid phase increase the number of large particles and enhance particle removal by floatation 37 . Extensive theoretical studies about the time-dependent particle size distribution and mathematical model have been reported based on collision-coalescence behavior due to turbulent collision 38 , Stokes collision [39][40][41] , Brownian collision 42 . Furthermore, attractive capillary force acted on particles has been also investigated with consideration on chemical compositions, size and distance between particles which is one mechanism for coagulation [43][44][45][46] . It can be concluded that the particle size distribution is effected by nucleation, growth, coagulation due to collision and attractive force, and floatation behavior of particles in liquid metal [47][48][49][50][51][52] . However, lots of researches are focused on the behavior of solid particles 32,34,53 and there are limited researches on the coarsening of liquid particles in liquid metal. The nucleation and Ostwald ripening of liquid calcium aluminate particles in Fe-O-Al-Ca melt have been investigated in our previous study 36 . In current study, the coarsening mechanism of particles in Fe-O-Al-Ca melt was studied with the consideration of nucleation, coagulation due to collision and floating properties and verified by experimental data. This study will provide information to understand the relations between characteristics, behavior and coarsening of particles in Fe-O-Al-Ca melts, and will be helpful for predicting and controlling size of particles.
Methods
Materials used in the present study and high temperature experimental processes are described detailedly in our previous study 36 . Characteristics of particles were detected by SEM at an accelerating voltage of 15 KV and the transformation of particle characteristic in three-dimensional from that in two-dimensional based on stereological analysis is the same with our previous study 36 . Geometric standard deviation of particle size distribution ln σ values is calculated by Eq. (1) Where r geo is the geometric mean radius of particle given by (r 1 · r 2 · r 3 …… r n ) 1/n . The ln σ values are obtained from Eq. (1) using the values for the size and number density of particles which were measured in the deoxidation experiments.
The inter-surface distance D ab between two particles can be obtained by measuring the central coordinates and radius of particles by Image-Proplus software and the illustration is shown in Fig. 1.
Where X i and Y i are the central coordinates of particles in the cross section, r i is the equivalent radius and D ab is the inter-surface distance between two particles. By calculating the inter-surface distances of certain particle with all others, the inter-surface distance between this certain particle with the nearest particle D mi can be obtained by calculating the minimum value of D ab as Eq. (3) and D mi is defined as the inter-surface distance of a pair of adjacent particles in this paper. The average inter-surface distance of particles in certain region of sample D AV is the arithmetic mean value of D mi calculated as Eq. (4).
Results
In-situ observation of liquid particle behavior. The behavior of particles in Fe-O-Al-Ca melt was in-situ observed using high-temperature confocal scanning laser microscope (HT-CSLM) as shown in Fig. 2. Most aggregation and coagulation between liquid calcium aluminates were caused by collision as Fig. 2(a-c). The attraction force was hardly found between most liquid calcium aluminate particles at gas/molten steel interface, even at very small separation (particle C/D moved to particle E and then passed away) as shown in Fig. 2(d-f). The same phenomenon was also observed by Hongbin Yin 54 , in which they found that the liquid calcium aluminate particles could separate freely after getting in touch with each other at 1/6 seconds. www.nature.com/scientificreports www.nature.com/scientificreports/ Characteristics of particles. Morphologies and compositions of particles in steels deoxidized by Al and Ca alloys SEM-EDS micrographs are displayed in Fig. 3. It can be seen that there were lots of calcium aluminate particles in collision and coagulation. Few particles with similar size were found jointed as Fig. 3(a). According to the low melting point diagram Ca-Al-S 55 , the particles with mole ratio of Al 2 O 3 to CaO in the range of 0.15-1.5 are in liquid or partially liquid state which are regarded as "liquid particles" in this paper. Some solid particle merged with each other and formed into an irregular aggregate by high temperature sintering which was hard to deform and densify as Fig. 3(b). The coagulations between the liquid particles were observed as Fig. 3(c-e) and these aggregates seem susceptible to deform into spherical body. Such difference is thought to be attributed to the difference of inter-diffusion of composing elements and contact area, as the liquid particles are prone to spread on the surface of the other one 54 . Therefore, it can be concluded that the particles with large discrepancy in size tend to collide and merge, and the deformation as well as densification proceed easily for liquid calcium aluminates particles. www.nature.com/scientificreports www.nature.com/scientificreports/ In order to study the effect of liquid particle on their characteristics, the percentage of liquid particles in Fe-O-Al-Ca melts is illustrated in Fig. 4(a) (C1/2/3 represents that initial adding amount of Ca is 0.25%/0.4%/ 0.78%; A1/2 represents that initial adding amount of Al is 0.05%/0.25%). The experimental condition and chemical compositions of samples were depicted in previous study 36 . Liquid particle percentage increased with increasing amount of calcium addition in melts. The steels with high calcium addition ([%Ca] = 0.78) after deoxidation 3900 s have an extremely higher percentage of liquid particles than those with low calcium addition ([%Ca] = 0.25, 0.4).
A few hundreds of particles were observed by SEM-EDS in each sample and the planar particle size distribution was transformed into the size, number density and volume fraction of particles in three-dimensional based on stereological analysis. The average size of particles increased, and their number density and volume fraction decreased significantly with holding time as illustrated in Fig. 4 The size of the largest particle in each experiment was larger in the steel with higher Ca addition during the first 360 s of deoxidation, and decreased with holding time due to the rapid floatation of large particle 56 which was explained in DISCUSSION part. The change of number density and volume fraction for calcium aluminates in Fig. 4(c) suggests that the ascending velocity of particles in the steel containing more liquid particles (A1C3 and A2C3) was larger than that containing more solid particles (A1C1) at early stage due to the relatively larger size and fractal dimension of liquid particles in the steels with high calcium (In spite of larger density for solid calcium aluminate particles, the liquid particles in the steel containing high calcium were larger in size and fractal dimension relatively at the initiation of deoxidation, which accelerated the floatation of these liquid particles). It is reported that the ascending velocity of the condensed particles with fractional characteristic was smaller than that of isometric three dimension spherical particles 43 , and it decreased with the decreasing value of D f (fractal dimension) 57 . Based on the expression of D f by Lech Gmachowski 58 , the liquid aggregates with spherical shape have larger D f than the irregular solid aggregates.
With the rapid rise of large particles, the average size of particles increased slightly and their number density decreased slowly after deoxidation for 1800 s. Furthermore, the change of characteristics for particles in the steels containing high calcium (with low number density of particle) was smaller than that in the case with low calcium (with high number density of liquid particle) at later stage of deoxidation. It is thought to be caused by the difference of collision rate affected by the number density which can be verified in DISCUSSION part. Therefore, as illustrated in Fig. 4(d), the average diameter tended to decrease with an increased percentage of liquid particles due to the rapid rise of particles in high Ca containing steel at early stage of deoxidation and less collision at later stage. The number density of particles changed irregularly which is affected by their aggregation and floatation behaviors. www.nature.com/scientificreports www.nature.com/scientificreports/ Spreading of particle size. The geometric standard deviation of particle size distribution ln σ value which represents the spreading of particle size distribution in each experiment was measured in the deoxidation experiments. It is illustrated in Fig. 5(a) that the ln σ values had an increasing trend with the increase of liquid particle percentage after deoxidation for 3900 s and they decreased with time elapsed. As the spreading of a size distribution becomes narrower with a decrease in ln σ, it means that discrepancy in particle size is greater in the case with more liquid particles. Ohta et al. 35 reported that ln σ values were dependent on the nucleation rate in the early stage. It can be seen that in Fig. 5(b) that the ln σ values increased in the order of Exp. A1C1 < Exp.A1C3 < Exp. A2C3 at the first 600 s of deoxidation process, in which the theoretical nucleation rates ln I were 484, 313, and 309, respectively (as reported in our previous study 36 ). This result is in agreement with the conclusion that in the case of low nucleation rate, the particle size distribution becomes broader 35 . The ln σ values of calcium aluminate particle at 3900 s changed non-monotonically with the increase of liquid particle percentage due to the hereditary of particle size distribution from early stage of deoxidation and the change of particle number density.
Inter-surface distance between particles. The cumulative frequency curves of D mi (inter-surface distance of adjacent particles) in Fig. 6(a) change little, and the particles with inter-surface distance of 30-100 μm were in larger proportion. The curves in Fig. 6(a) move toward right with increasing addition of calcium at 3900 s www.nature.com/scientificreports www.nature.com/scientificreports/ which means that the particles were in larger inter-surface distance with more calcium addition. Figure 6(b) shows that the proportion of particles with close inter-surface distance (<10 μm) accounted for 40%, 25% after deoxidation for 360 s in experiments A1C1 and A1C3, and it reduced to 20% and 8% after deoxidation for 600 s. www.nature.com/scientificreports www.nature.com/scientificreports/ The inter-surface distance between farthest particles increased with time elapsed. The average inter-surface distances of particles in Fig. 6(c,d) show that the D AV values (average inter-surface distance of particles in certain region of sample) decreased with the increasing liquid particle percentage when it was larger than 5%, and they increased with time elapsed. It is noteworthy that the change rule of particle inter-surface distance is contracted with that of particle number density, indicating that the larger the number density of particle is, the closer the particles are.
Distribution of particles. The distribution of area density for particles in Fe-O-Al-Ca melts as a function of holding time and amount of Al and Ca addition is displayed in Fig. 7. The segregation of particles were more serious in Fe-O-Al-Ca melts containing high Ca at early stage of deoxidation due to high number density which enhanced the collision and coagulation of particles, resulting in larger particles. It can explain the phenomenon that the size of largest particle increased in the order of A2C3 > A1C3 > A1C1 as shown in Fig. 4(b). With time elapsed, the area density for particles in steel decreases due to the floatation of particles. As the floatation of particle in the liquid steel following stokes behavior 56 , the particles with larger size have higher ascending velocity, www.nature.com/scientificreports www.nature.com/scientificreports/ and thus, in the high Ca containing steel, more particles with large size were removed after deoxidation for 1800s which resulted in lower area density for particles at later stage of deoxidation. It can also be verified in Fig. 7(g-j) and it indicates that the particles distribute more homogenously and their area density decreased with the increasing amount of Ca addition, resulting in relatively fine particle in melts with high Ca addition, corresponding to Fig. 4(d). Where dn i /dt is collision rate of particles (mm −3 ·s −1 ), n i is the number of size i particles per unit volume (mm −3 ) and β i,j is the collision frequency between size i and size j particles (m 3 ·s −1 ). δ i,k is the Kronecker delta 60 , δ i,k = 1 for i = k, and δ i,k = 0 for i ≠ k. When i = 1, the Equation simplifies to
Discussion
In this experiment, only Brownian collisions and Stokes collisions happened among the particles in Fe-O-Al-Ca melts, and it's not necessary to consider the turbulent collisions without external stirring condition. Therefore, the collision frequency β i,j between size i and size j particles can be estimated as: www.nature.com/scientificreports www.nature.com/scientificreports/ The experimental change rate of particle number density (−ΔN/Δt) increases monotonically with collision rate of particles in Fig. 8(a). The observed values of −ΔN/Δt are about 1/9 of calculated collision rate which indicates that not all the particles will coagulation after collision. Compared with the total collision rate in the steel containing low calcium, it is higher in the case of high calcium during the first 600 s, while becomes lower at the later stage of deoxidation process. Moreover, the collision rate of particles decreases with time elapsed which is attributed to a decrease of number density. Figure 8(b) illustrates that the size of largest particle in each sample increases with an increased collision frequency β i,j which decreases with time elapsed. It is verified that the collision behavior of particles affects their size significantly. Figure 8(c,d) represents the collision rate of particles with different size at 3900 s. Zone I and zone II in Fig. 8(c,d) represent the particles with certain size increase and decrease, respectively. The particles with small size (<3 μm) decrease and those with large size (>3 μm) increase; the particle change rate of number density for particles, i.e. absolute value ΔN/Δt, increases with the increasing size of particles, and then decreases with further increase of particle size in both zone I and zone II. Comparing the curves in Fig. 8(c,d), it is found that the peak values of curves and the size of particles corresponding to those increase with the decrease of the valley values, which means that, the more the small particles reduce, the more the large particles form by collision and the bigger the produced particles with largest number density are. Furthermore, it is found that the size of particles corresponding to the peak value of curves, D p , decreases with the increase of Ca addition, which is the same with the changing trend of arithmetic mean diameter of particles observed in experiments. Hence, D p is plotted with arithmetic mean diameter of particles in Fig. 8(e). It seems that the D p values go up with an increase of the arithmetic mean diameter of particles linearly, especially at later stage of deoxidation process, while they change little with the increase of during the first 600 s of deoxidation process. It indicates that the collision between particles affects the coarsening of the particles at later stage of deoxidation but not early stage under the condition of no stirring, which is in agreement with the conclusion in previous study 36 . Influencing factors on collision of particles. The effect of liquid particle percentage, average inter-surface distance between particles and ln σ values on the collision rate for particles are summarized in Fig. 8. The changing trend of collision rate with liquid particle percentage is the same with number density in Fig. 4(d) and contrary to inter-suface distance in Fig. 6(c), indicating that the collision rate is mainly affected by particle number density and the distance of particles; as particle number density increases, the inter-surface distance between particles decreases and then the collision rate increases. Figure 9(b) shows an obvious decrease in collision rate with increasing D AV values from 10 to 45 μm; however, with further increasing D AV values, the collision rate exhibits a slight decrease. The change of collision rate with ln σ values in Fig. 8c shows that the collision rate www.nature.com/scientificreports www.nature.com/scientificreports/ increases with the increase of ln σ values, which means that the collision rate of particles with broad spreading of size distribution is higher than that with narrow spreading of size distribution.
Coarsening mechanism of particles. The coarsening mechanism of particles in Fe-O-Al-Ca melts can be summarized in Fig. 10. The collision and coagulation of particles start after their nucleation and continue during whole deoxidation process. In the case of low nucleation rate and small inter-surface distance, the collision and coagulation tend to occur more easily, and hence, the maximum size of particle is larger in early stage of deoxidation as shown in Fig. 10(a) which is in agreement with the experimental result that the size of largest particle is larger in the steel containing higher Ca. Nevertheless, it is verified in our previous study 36 that the average size of particles is mainly dependent on the Ostwald growth. As the rapid rise of liquid particles with large size and fractal dimension in Fig. 10, the inter-surface distance between particles in high Ca-containing melts becomes large and it is larger than that in low Ca-containing melts as in Fig. 10(b-d). With the consumption of Ca, Al and O, the coarsening of particles is mainly affected by their collision, but not Ostwald growth at later stage of deoxidation process. Therefore, the size of particles decreased with an increase of Ca addition after deoxidation for 3900 s.
Conclusion
The behavior and characteristics of particles in the Fe-O-Al-Ca melts under the condition of no external stirring at 1600 °C was investigated using HT-CSLM, SEM-EDS detection. Most aggregation and coagulation observed between calcium aluminate particles were caused by collision. The characteristics of particles in three-dimensional, i.e. size, number density, volume fraction, spreading of particle size, inter-surface distance and distribution based on stereological analysis indicate that their coarsening is not only dependent on Ostwald growth as studied in previous study, but also collision and coagulation, and floatation. The collision of particles affects the maximum size of particle during whole deoxidation process and dominates the coarsening of particles at later stage of deoxidation. The calculated result based on population balance model indicates that the collision rate of particles increases with an increase of their number density, i.e. decrease of inter-surface distance, and it is high in the case for particles with narrow spreading of size distribution which is affected by nucleation rate. The particles with relatively larger size and fractal dimension have higher ascending velocity, resulting more fine particles with large inter-surface and low collision rate left in the melts. This mechanism can be used to explain that the collision, coarsening behavior and characteristic change of particles in melts with different amounts of Ca addition.
Data Availability
The data that support the findings of this study are available from Linzhu Wang upon reasonable request. | 5,035.8 | 2019-03-06T00:00:00.000 | [
"Materials Science"
] |
Multi-domain Features of the Non-phase-locked Component of Interest Extracted from ERP Data by Tensor Decomposition
The waveform in the time domain, spectrum in the frequency domain, and topography in the space domain of component(s) of interest are the fundamental indices in neuroscience research. Despite the application of time–frequency analysis (TFA) to extract the temporal and spectral characteristics of non-phase-locked component (NPLC) of interest simultaneously, the statistical results are not always expectedly satisfying, in that the spatial information is not considered. Complex Morlet wavelet transform is widely applied to TFA of event-related-potential (ERP) data, and mother wavelet (which should be firstly defined by center frequency and bandwidth (CFBW) before using the method to TFA of ERP data) influences the time–frequency results. In this study, an optimal set of CFBW was firstly selected from the number sets of CFBW, to further analyze for TFA of the ERP data in a cognitive experiment paradigm of emotion (Anger and Neutral) and task (Go and Nogo). Then tensor decomposition algorithm was introduced to investigate the NPLC of interest from the fourth-order tensor. Compared with the TFA results which only revealed a significant difference between Go and Nogo task condition, the tensor-based analysis showed significant interaction effect between emotion and task. Moreover, significant differences were found in both emotion and task conditions through tensor decomposition. In addition, the statistical results of TFA would be affected by the selected region of interest (ROI), whereas those of the proposed method were not subject to ROI. Hence, this study demonstrated that tensor decomposition method was effective in extracting NPLC, by considering spatial information simultaneously as the potential to explore the brain mechanisms related to experimental design.
Introduction
Electroencephalogram (EEG) has been extensively used in neuroscience since Berger Hans first recorded it from the human cerebral cortex in 1929 (Berger 1929). In early studies, most researchers mainly focused on the amplitude of an individual waveform in the time domain. With the introduction of computers, besides waveform, spectral and spatial characteristics of the component(s) of interest (COI) for group-averaged EEG/event-related-potential (ERP) data are analyzed (Luck 2014). They found that ERP components could be evoked from the related experiments, and have specific temporal, spectral and spatial characteristics. For instance, when words and other meaningful (or potentially meaningful) excitations include visual and auditory words, sign language signs, pictures, faces, environmental sounds, and smells are used for experimental stimuli, N400, a negative waveform which reaches a peak around 400 ms after stimulus onset and can extend the time window from 250 to 500 ms, can be discovered Federmeier 2000, 2011;Kutas and Hillyard 1980). Meanwhile, it is typically maximal over centro-parietal electrodesites. Therefore, all the temporal, spectral and spatial properties of ERP component(s) are useful for the investigation of brain mechanisms in cognitive processes and these properties may be coupled. Several techniques have been developed for ERP data processing and analyze to dig out the potential information in the cognitive processes, such as time domain analysis and time-frequency analysis (TFA).
Most previous studies focus on time domain analysis. The conventional method averages several single-trial data of the same stimulus in the time domain to obtain ERP components. The advantage of this method is that the energy of ERP is enhanced, with the amplitude of spontaneous EEG and noise extremely reduced (Cohen 2014). Some advanced signal processing and analyze methods also have been developed to extract COI from group-averaged ERP data such as Independent Component Analysis (ICA) (Hyvärinen 2013;Jung et al. 2000) and Principal Component Analysis (PCA) (Dien 2010a(Dien , b, 2012Dien et al. 2005;Möcks and Verleger 1985;Kawaguchi et al. 2013). However, the main drawback of the time domain analysis is that it cannot reveal COI changes in the frequency domain over time so that the pivotal rhythm (or oscillation) information is neglected.
To extract the temporal and spectral characteristics of ERP component(s) simultaneously, some researchers use short-time Fourier transform (STFT) or wavelet transform algorithm (WTA) to convert time domain signals into time-frequency domain signals. There are two strategies for TFA of ERP data. One is evoked method in which multi-trial data are averaged before the computation of the time-frequency transforms of averaged ERP data. The event-related oscillations (EROs) obtained by this type of TFA are extremely phase-locked to stimulus onset because of the simultaneous co-occurrence of enhanced EROs. The time locked and phase locked component (TLPLC) can be obtained and it is called evoked brain activity. The other one is based on averaging the time-frequency transforms of every single-trial. Both TLPLC and non-phase-locked component (NPLC) are summed up so that it refers to all-brain activities. And this strategy is considered as the induced method (Herrmann et al. 2004(Herrmann et al. , 2014Tallon-Baudry and Bertrand 1999). The induced method has two superiorities over time domain analysis or evoked method. The first one is that it can simultaneously exploit the temporal and spectral properties of an ERP component and reveal additional NPLC activity. For the other one, the results are non-negative, which means that it can avoid the amplitude of COI being cancelled out in the averaged ERP data, if they are randomly distributed for each single-trial in the time domain (Cong et al. 2015b). The TLPLC can be obtained by time domain analysis and evoked method, whereas NPLC is generated by averaging the time-frequency transforms of every trial and this type ERO is evoked by some high-order processes (David et al. 2006;Singer and Gray 1995). Meanwhile, as described in the study (David et al. 2006), TLPLC reflects some stimulus locked event related response, while NPLC might be evoked by nonlinear and autonomous mechanisms. In short, the neuronal processes and mechanisms of TLPLC and NPLC are different (David et al. 2006).
Since Tallon Baudry et al. proposed the NPLC-oriented TF method in 1996 (Tallon-Baudry et al. 1996), it has been widely used in the fields of cognitive neuroscience and medicine, such as Parkinson's disease (Wiesman et al. 2016), depression (Shaw et al. 2013), children sleep (Piantoni et al. 2013), and language cognition (Araki et al. 2016;Kielar et al. 2015;Wang et al. 2012). Hence, NPLC includes significant information of all-brain activities. However, the spatial information is still not utilized in TFA and sometimes statistically significant results cannot be obtained by TFA, which poses some challenges for the exploration of brain mechanisms. In such a context, we propose a NPLCoriented tensor decomposition analysis of ERP data. Tensor decomposition exploits the interaction among modes. Firstly defined in the mathematics field (Hitchcock 1927), it has been extensively applied in the fields of psychometrics and chemometrics for multi-mode data analysis (Kroonenberg 2008;Smilde et al. 2004).
Aiming to overcome the shortcomings of time domain analysis and TFA, some researchers have attempted to use Canonical Polyadic decomposition (CPD) (Hitchcock 1927) to extract multi-domain features of COI simultaneously from high-order tensor composed of time-frequency results. Here, the high-order tensor is a fourth-order tensor. The order of the fourth-order tensor represents the number of its "ways", "dimensions", "domains", or "modes", which includes four modes: frequency, time, channels/space, and subjects-stimuli/conditions (Zhou et al. 2016). The component is selected if its temporal, spectral, and spatial components are consistent with characteristics of COI in the time, frequency, and space domains, and then its multi-domain feature mode (the last mode) is applied to statistical analysis (Cong et al. 2012c(Cong et al. , 2013(Cong et al. , 2014. Despite the use of tensor decomposition algorithm to extract TLPLC of interest (Cong et al. 2012c), NPLC from all brain activities has not been investigated with tensor-based multi-mode analysis (more than three modes).
There are two problems to be solved before extracting NPLC. For one thing, when referring to TFA of ERP data, it is typically calculated by WTA (Herrmann et al. 2014;Tallon-Baudry and Bertrand 1999;Tallon-Baudry et al. 1996, 1997, 1998. Some researchers also use STFT for TFA of ERP data, but the central idea is similar to WTA (Hu et al. 2014). When WTA is used for TFA of ERP data, a mother wavelet should be firstly defined by a set of center frequency and bandwidth (CFBW). Since the differences of CFBW may result in divergent time-frequency results, different CFBW should be attempted for an optimal time-frequency result (Zhang et al. 2017). For another, NPLC is mixed together with other components (Jung et al. 2000). The key problem is how to separate NPLC from mixed signals. This study is dedicated to the investigation of these issues and the following steps are used for implementing the idea of NPLC-oriented tensor decomposition analysis. After ERP data preprocessing, CFBW are optimized by selecting from 80 sets of CFBW to define a mother wavelet for complex morlet continuous wavelet transform (CMCWT), which is used to solve the first problem as mentioned above (as shown in Fig. 2). The induced method was conducted to convert the time domain signals of every participant into the time-frequency domain signals, so that the fourth-order tensor was formed. Subsequently, the temporal components, spectral components, spatial components, and features of subjects-stimuli/ conditions mode of NPLC are extracted simultaneously by CPD from the fourth-order tensor (to solve the second problem as mentioned above). Finally, a comparison was made of the diversity between NPLC extracted by CPD and TFA in the temporal component, spectral component, spatial component, and repeated measure analysis of variance (rm-ANOVA) results (the flow of data processing and analyze as shown in Fig. 1).
Participants and EEG Data Acquisition
Fifteen college students were recruited to participate as paid volunteers from Shanghai University of Sports in China. Seven were females and eight were males (Mean age: 20.8; Std: 1.4). All the participants were right-handed, presented with normal or corrected to normal visual acuity and they did not know or see the experimental paradigm before the experiment. Previous studies reported that anger appeared to be an important factor in human behavior (Denny and Siemer 2012) and the emotion Go/Nogo task was used in a great number of studies to explore the underlying mechanisms (Goldstein et al. 2007;Shafritz et al. 2006;Verona et al. 2012;Yu et al. 2014). Following this line, in this study, all participants were required to participate in emotion (Anger and Neutral) Go/Nogo task. The details of the experiment materials and the paradigm can be found in our previous research (Xia et al. 2018). EEG recordings at 64 locations were collected according to the standard 10-20 system. The EEG data were referenced online against the FCz electrode and grounded at the AFz electrode. Meanwhile, a vertical electrooculogram was obtained below the left eye, and the horizontal electrooculogram was obtained at the outer canthus of the right eye. Impedances were less than 5 kΩ . The BrainAmp amplifier and BrainVision Recorder 2.0 system (Brain Products GmbH, Germany) were used to record electrical activity for each participant with a 500 Hz sampling rate and the data were filtered between 0.01 and 100 Hz by a BrainAmp amplifier.
Data Preprocessing
The preprocessing of EEG was conducted in two sorts of software. Firstly, by the Analyzer 2.0 system (Brain Products), the FCz electrode was restored when the data were re-referenced offline to an average of both posterior ear papillae (TP9 and TP10) for each participant (Debener et al. 2012). Subsequently, using the EEGLAB toolbox (Delorme and Makeig 2004) running in the MATLAB environment (MathWorks, Natick, MA), the data preprocessing was performed offline. The EEG signals were filtered offline with a 45-55 Hz notch infinite impulse response (IIR) (Delorme and Makeig 2004;Guan et al. 2004;Kropotov 2010;Lopez-Calderon and Luck 2014;Nishida et al. 1993;Widmann et al. 2015) filter (to remove line noise), a high-pass IIR filter of 0.2 Hz and a low-pass IIR filter with 30 Hz, respectively. Furthermore, the filtered continuous recordings were epoched from 200 ms before the stimulus onset to 1000 ms after the stimulus onset. Epochs/trials whose maximum magnitude exceeded 100 μV were excluded ( 5.7% of epochs/ trials were rejected) and then remaining epochs were baseline corrected. Considering the COI is below 30 Hz and in order to reduce the impact of low-frequency components, a band-pass filter with 3-30 Hz based on Fast Fourier Transformation (FFT) was applied to filter the single-trial data (Cong et al. 2015b). Taking into account the diversity of bad channels of each participant, 22 bad channels were removed for all participants. They were identified based on the data distribution and variance of channels, by using the EEGLAB's function-pop_rejchan (Delorme and Makeig 2004) and the FASTER toolbox (Nolan et al. 2010).
Complex Morlet Continuous Wavelet Transform
STFT and WTA are two common algorithms in the TFA of ERP data. STFT has been extensively utilized for TFA of ERP data since it is proposed by Potter in 1947 (Araki et al. 2016;Cohen 1989;Ehm et al. 2011;Fumuro et al. 2015;Kauppi et al. 2013). This method is to calculate the Fourier transform of the windowed signals, which are approximately stationary over the window. However, the length of the window is the same for all frequencies. If the length of the window is too long, it will lead to low time-resolution at higher frequencies and low frequency-resolution at lower frequencies. In contrast, when the window length is relatively shorter, it will present the opposite results. Compared with STFT, the wavelet transform uses short windows at high frequencies and long windows at low frequencies (Rioul and Vetterli 1991;Peng and Chu 2004). That is to say, the wavelet transform is more adapted to TFA of non-stationary signals, for example, EEG/ERP data (Peng and Chu 2004). Therefore, WTA is used to achieve a trade-off between timeresolution and frequency-resolution in this study.
When the length of discrete sequence signal y(t) is T(t = 0, 1, 2, …, T − 1) , then the wavelet transform can be expressed as (Zhang et al. 2017): In the above formula, ( t−t 0 a ) is the mother wavelet. a and t 0 are called scaling and shifting parameters, respectively. In this study, the complex Morlet Wavelet is defined as the mother wavelet (Tallon-Baudry and Bertrand 1999;Tallon-Baudry et al. 1996, 1997, 1998Bertrand and Tallon where f b and f c stand for bandwidth and center frequency, respectively. And a gaussian shape respectively in the time domain and frequency domain around its f c can be obtained (Zhang et al. 2017).
A wavelet family is characterized by a constant ratio (Tallon-Baudry et al. 1996): In this formula, f bf = 1 2 f b , K should be more than 5 (Zhang et al. 2017).
Given an ERP data x c,n (t) , c and n are the number of the electrodes/sensors and trials, respectively. The definition of induced method can be given (Herrmann et al. 2005) by the following equation: In Eq. 4, | X c,n (t, f ) | 2 represent the power values of ERP data in cth electrode and nth trial.
Selection of an Optimal Set of CFBW for CMCWT
As shown in our previous study (Zhang et al. 2017), different parametric settings may result in divergent time-frequency results. CMCWT [the MATLAB function (Daubechies 1992;Mallat 1999)] was used under the MATLAB environment for the TFA of ERP data with an optimal set of CFBW selected from a number of sets of CFBW (as shown in Fig. 2). The specific steps are as below.
Each set of CFBW corresponded to a time-frequency representation (TFR) and topography (obtained by averaging the same region of interest, the time window from 300-600 ms and the frequency range from 3 to 7 Hz for all sets of CFBW) obtained by TFA. Meanwhile, third-order tensor including frequency, time, and channels can be composed by the time-frequency results of all sets of CFBW, respectively.
A typical topographical distribution of time-frequency results was referred to as the template. For instance, when f b = 1 , the value of f c can be respectively set as 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10. The topographical distribution of f c 4 = 4 was finally chosen as the template T template (f b10 , f c 4 ) , based on the comparison of its topography and TFR with those of other sets of CFBW. That is to say, the time-resolution and frequency-resolution of its TFR are better than other sets of CFBW, and the template could represent most of the topographic maps of all sets of CFBW. After defining f c n , the correlation coefficients (CCs) between the template ( T template ) and spatial components s r (f b10 , f c n ) obtained by CPD were calculated by the following equations (R components were extracted in each mode based on the method as described in our previous studies (Cong et al. 2012a(Cong et al. , 2015a, the detail of extracted number of every set of CFBW is shown in Table 1). where r = 1, 2, …, R;n = 1, 2, …, 10 ; Y is the CC of each component for every set of CFBW. -------36 28 2 ----35 45 25 42 35 32 3 --51 43 32 34 28 25 45 45 4 --28 50 45 42 30 36 46 42 5 -25 44 46 25 35 25 25 40 25 6 -37 52 35 20 45 40 30 30 32 7 -34 37 30 42 40 28 42 42 45 8 -26 26 36 31 36 36 30 45 28 9 25 40 44 25 40 25 35 40 33 30 10 45 36 38 24 42 35 35 48 30 45 1 3 where ∈ (−1, 1) represents the CC between s r and T template ; I is the number of channels; T i template and s i r are the value of each channel for T template and s r , respectively; T template and s r are the mean value of T template and s r , respectively.
Subsequently, the maximal CC for each set of parameters was chosen as: Then, the corresponding r th components with the maximum CC were obtained. The same procedures were applied to the other sets of CFBW.
Finally, an optimal set of CFBW was selected. Take the mutil-domain components of the first four maximum CCs of four sets of CFBW as example, as shown in Fig. 3. Comparing the corresponding waveforms of the maximum CCs of the first four sets of CFBW, the set of f b = 0.7, f c = 4 was firstly discarded, because the COI was evoked after the stimulus onset. Then, we considered that the period of waveform of COI was relatively narrow in the time domain (usually not more than one second) and there were few irrelevant spikes for waveform and spectrum of multi-domain features extracted by CPD. f b = 1 and f c = 1 were used to define the mother wavelet.
Tensor Decomposition Algorithm
In an ERP experiment, there should be at least three modes including time, channels/space, and subjects-stimuli/conditions. When the time-domain data are converted into the time-frequency domain, a fourth-order tensor including time, frequency, channels/space, and subjects-stimuli/ conditions can be formed. Moreover, EEG data are used to identify common activities over subjects. It is necessary and interesting to study the interaction among modes, such as time, frequency, and channels/space modes. Here, CPD (Hitchcock 1927;Cong et al. 2015a) is applied to extract COI from the high-order tensor.
Given an Nth-order tensor X ∈ R I 1 ×I 2 ×⋯×I N , the CPD can be defined as: In Eq. 8, X approximates the high-order tensor X ; E ∈ R I 1 ×I 2 ×⋯×I N is a Nth-order error tensor, whose sizes of all dimensions are the same as X ; ‖u (n) represents a component matrix for mode #n , and n = 1, 2, …, N.
In this study, the fourth-order tensor consisted of the time-frequency results. It can be extracted by CPD (Cong et al. 2014(Cong et al. , 2012d): In the above formula, I is a diagonal tensor, the value of every element of its super-diagonal is equal to one; the component matrix F contains the multi-domain features of R , respectively brain activity (R components are extracted in each mode), and each column corresponds to one feature. The component matrix corresponds to the rth components in the time domain ( u (t) r ), frequency domain ( u (f ) r ) and space domain ( u (s) r ), respectively. Those components reveal the properties of the rth multi-domain feature in the domains as well (Cong et al. 2015a(Cong et al. , 2012a. In the CPD, the rth temporal component, rth spectral component, and rth spatial component are interrelated, but neither of them is associated with other multi-domain components (Cong et al. 2015a). Meanwhile, combining with the generative mechanisms of NPLC of interest, CPD is selected to extract NPLC of interest from the fouth-order tensor. According to Eqs. 1 and 4, the time-frequency transforms are obtained by calculating the product of the constant and the sum of the square of the absolute value of the convolution between signals and the mother wavelet. Therefore, the elements of the highorder tensor are nonnegative in the study. In our previous study (Cong et al. 2014), the fourth-order tensor (the size of last mode is conditions by groups by subjects) is formed to find the discrepancy of cognitive processes between the two groups under every condition. Likewise, our interest is to identify the differences between emotion factor under Go/ Nogo tasks by calculating statistical results of the features of last mode extracted from the fourth-order tensor (frequency by time by channels/space by subjects-stimuli/conditions: 30 × 600 × 42 × (15 × 4)).
Time-Frequency Analysis
Combining the previous studies (Benvenuti et al. 2017;Harper et al. 2014;Karakaş et al. 2000;Kirmizi-Alsan et al. 2006;Pandey et al. 2016) with the time-frequency representations (TFRs) and topographies of all conditions of the present data, we selected the Fz, FCz electrodes for analysis of the theta oscillation (range 3-7Hz) between 300 and 600ms. Then multivariate rm-ANOVAs were computed on theta oscillation using emotion (Anger and Neutral), task (Go and Nogo) as within-subject factors. Figure 4a-c displayed the grand averaged TFR of every condition at Fz and FCz, topography of the theta oscillation, and the corresponding mean power of every condition, respectively. In order to show the corresponding power of theta oscillation of every participant under each condition, the scatter plots with boxplots were displayed in Fig. 4d as well.
In order to demonstrate that the statistical results are affected by the selected ROI for TFA compared with those of the proposed method, the ANOVA reults of another RIO (time window: from 200 to 700 ms; frequency range: 3-7 Hz) were also shown. The main effect o f e m o t i o n (F (1,14) = 1.955, p = 0.1840, 2 p = 0.123) and interaction effect between the two factors Fig. 4 a The grand averaged time-frequency representations (TFRs). b Topographical distributions of the theta oscillations at Fz and FCz with the time window of 300-600 ms. c The mean power of every condition. d The scatter plots with boxplots of the mean power of every condition. Anger-Go, go task of the anger-associated words; Anger-Nogo, Nogo task of the anger-associated words; Neutral-Go, go task of the neutral words; Neutral-Nogo, Nogo task of the neutral words; '**' represents p < 0.01 (F (1,14) = 0.023, p = 0.8810, 2 p = 0.002) were insignificant, respectively. In addition, there was a significant main effect for tasks (F (1,14) = 8.643, p = 0.0110, 2 p = 0.382) . The methods, which can be used to precisely determine the ROI of TFR of every condition according to the corresponding boundary, were not discussed in this study.
Multi-domain Features of NPLC
Aiming at extracting the NPLC of interest, the results from each step of the tensor decomposition analysis of ERP are as following.
Using the CFBW confirmed in "Selection of an Optimal Set of Parameters for CMCWT " section ( f b = 1, f c = 1 ), the induced method was performed on all participants' data for TFA. The sampling point is nonlinear distribution in the frequency domain, with 30 points set in the whole frequency band of interest (3-30 Hz).
The fourth-order tensor was formed by the time-frequency results.
According to the fit value as shown in Fig. 5d, 36 components were extracted in each mode. The detail of how to define the number of extracted components for CPD can be found in our previous studies (Cong et al. 2012c(Cong et al. , 2013(Cong et al. , 2014(Cong et al. , 2012b. We considered the temporal, spectral and spatial properties of NPLC of interest as shown in "Time-Frequency Analysis" section. Its latency fell in the range from 300 to 600 ms in the time domain, the peak of corresponding spectrum is below 8Hz, and its peak amplitudes are distributed in the frontal-central cortex in the space domain. The 11th component was chosen (in Fig. 5a). In addition, the TFR in Fig. 5a was based on the outer product of the temporal and spectral components.
When the multi-domain features were determined, twoway (emotion and task) repeated measurement statistical test were performed to investigate the between-task differences under emotion condition (Anger and Neutral) with 0.05 as the level of significance, and Greenhouse Geisser correction was performed when necessary. The results showed that the interaction effect between emotion and task reaches a significant level (F (1,14) = 10.607, p = 0.006, 2 p = 0.431 ). There was a significant main effect of both emotion (F (1,14) = 6.162, p = 0.026, 2 p = 0.306) and t ask (F (1,14) = 17.688, p = 0.001, 2 p = 0.558) . Through post hoc analysis, the results demonstrated that the power of anger condition was larger than that of neutral stimuli in the Nogo task (p = 0.005) , not in the Go task (p = 0.367) . By contrast, there was a significant main effect of task conditions under both anger (p < 0.001) and neutral condition (p = 0.005) . Thus, this study found that the power of NPLC oscillation obviously increases in anger words when compared to neutral words in the Nogo task as shown in Fig. 5b. In addition, the scatter plots with boxplots were also shown in Fig. 5c such that the feature of every participant in every condition can be observed.
Conclusion and Discussion
Using CPD to separate the multi-domain features of NPLC of interest, this study investigated the differences between tensor decomposition analysis and TFA of ERP data. The tensor-based results were more discriminative than those derived from TFA. The method based on tensor decomposition showed not only the significant main effect of task condition, but also significant interaction effect between emotion and task. The main effect of emotion was found Fig. 5 a Multi-domain features of NPLC of interest as well as the corresponding temporal, spectral, and spatial components were extracted from all brain activity. b The mean magnitude of every condition. c The scatter plots with boxplots of the mean magnitude of all conditions. d The magnitude of FIT, DIFFIT is performed on this curve. '**' represents p < 0.01 to reach a significant level. Moreover, the proposed method ensured that statistical analysis results donot change with ROI. This manifested that the derived features fulfill expectations in this study, and it should be fundamental for the extension of our proposed method for the analysis of other EEG/ERP data.
In this study, the time-frequency results are used by averaging the time-frequency transforms of each single-trial data to separate the multi-domain features of NPLC of interest, and it is different from those methods which obtain COI by averaging multi-trial waveforms in time domain and calculating the time-frequency transforms of averaged ERP data. Moreover, there are several methods to form a highorder tensor to extract the information of short ERPs data simultaneously in the time, frequency, space, and participants modes. For example, in order to study the properties of NPLC at the single-trial level, the fourth-order tensor can be comprised of frequency by time by channels by subjects-trials. Particularly, the third-order tensor is constructed including temporal, spectral, and spatial information, because the time-locked characteristics of NPLC in different single-trial are not deterministic. Few strictly time-locked characteristics of NPLC are preserved.
Furthermore, CPD is a group analysis method that performs on the high-order tensor of brain activity collected from different participants and stimuli/conditions (Zhou et al. 2016). It assumes that all subjects share the same information of components in the time domain, frequency domain, and space domain, while the variance in signatures of all participants is revealed by those common components (Cong et al. 2015a). As we all know, the EEG/ERP data of one subject in each condition/stimulus can form one block tensor. In other words, for one subject's data, the block tensor can be a third-order tensor (time by channels by stimuli/ conditions) or fourth-order tensor (frequency by time by channels by stimuli/conditions), so multi-participant data can form multi-block data. Therefore, coupled/constrained matrix and tensor factorizations can be applied to extract common and individual components and/or build links between them (Zhou et al. 2016).
There are several drawbacks for using CPD and TFA to extract the NPLC of interest from ERP data. One limitation is the small number of subjects were recruited to participate in the experiment in this study. The ERP data were only collected from 15 participants. Another one is the method to extract NPLC by tensor decomposition has not been employed in other ERP data. Additionally, in Fig. 4a, the grand averaged TFRs clearly display that the theta oscillation of interest of every condition has a specific and visible boundary and the ROIs of the four conditions are different. Hence, the same ROI for all conditions used for statistical analysis in the research is unreasonable and arbitrary. The techniques, such as edge detection method based on Canny, Marr-Hildreth, Deriche, Sobel, and Laplacian algorithms, can be used to precisely mark the edge of ERO of interest of every condition respectively in the TFR (Milanović et al. 2019). In this study, the tensor decomposition was used to extract the multi-domain features of NPLC simultaneously for an expected statistical results, evidencing that this method is promising with substantial potentials in neuroscience applications.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creat iveco mmons .org/licen ses/by/4.0/.
International conference on neural information processing. | 7,246.4 | 2019-12-26T00:00:00.000 | [
"Computer Science"
] |
The Secret Ingredient for Exceptional Contact‐Controlled Transistors
Contact‐controlled transistors are rapidly gaining popularity. However, simply using a rectifying source contact often leads to unsatisfactory operation, merely as a thin‐film transistor with low drain current and reduced effective mobility. This may cause otherwise promising experiments to be abandoned. Here, data from literature is analyzed in conjunction with devices that have been recently fabricated in polysilicon, organic and oxide semiconductors, highlighting the main factor in achieving good saturation, namely keeping saturation coefficient γ well below 0.3. Secondary causes of suboptimal electrical characteristics are also discussed. Correct design of these alternative device structures will expedite their adoption for high gain, low‐frequency applications in emerging sensor circuits.
Introduction
The reduction of contact effects in thin-film transistors (TFTs) is a principal concern for achieving high operating frequencies, especially in scaled devices with emerging materials. [1][2][3][4][5][6][7] However, in many focused applications, high transit frequency is not critical. [8][9][10][11] Making the engineering decision to rely on the source contact area as the main control mechanism of current control may bring operational benefits that more than outweigh the drawbacks. [9,[12][13][14][15] Devices such as the source-gated transistor (SGT) are rapidly gaining interest, [9,[16][17][18][19] as they trade off switching frequency for improved gain, saturation characteristics and stability. [20][21][22] While the general recipe for making such devices is straightforward, a crucial element is frequently overlooked. Specifically, the electrostatic properties of the layers making up the device at the edge of the source forms a capacitive voltage divider with the gate insulator. [14,22,34] This pins the voltage under the source edge at a value where K is a constant which depends on the state of free charge in the semiconductor and is the saturation coefficient, with C i = ε i /t i and C s = ε s /t s as the insulator and semiconductor capacitances per unit area, respectively. [16,23,34] When V SAT2 = V GS − V th , the channel pinches off at the drain end in the usual manner, where V GS and V th are the gate-source voltage and threshold voltage, respectively (see Figure 1c).
Contrary to conventional TFT design rules, a relatively low gate-insulator capacitance (i.e., low permittivity ε i or large insulator thickness t i ) is preferred to attain γ << 1, so that draincurrent saturation can occur at very low drain-source voltages.
Rearranging Equation 2 yields which allows us to plot the γ values extracted from the fabrication methods used in literature (Figure 2a). The devices shown in Figure 1 all have exemplary characteristics and low saturation coefficients (γ < 0.2). Indeed, the measured ∂V SAT1 /∂V GS Figure 1. Source-gated transistor (SGT) structure and output characteristics. a) Schematic cross-section of a staggered top-gate SGT, emphasizing the capacitive divider at the edge of the source in the inset. Output characteristics from published literature, in various material systems, reproduced with permission: b) low temperature polysilicon (LTPS) SGT (reprinted from Ref. [9] under the Creative Commons CC-BY-NC-SA license); c) InGaZno (IGZO) with graphene contact SGT (reprinted from Ref. [28] with the permission of AIP publishing); d) silicon nanowire (Si NW) SGT (reprinted from Ref. [29] under the Creative Commons CC BY license); e) ZnO nanosheet (NS) SGT (reprinted from Ref. [30] under the Creative Commons CC BY license); f) IGZO SGT with Schottky contact (reprinted from Ref. [16] under the Creative Commons CC BY license).
www.advelectronicmat.de
for these devices reliably matches the calculated γ value for each case (Figure 2b). In practice, the calculated γ does not always result in commensurate variations of V SAT1 with V GS . Poorly chosen device electrostatics, fixed and free charges, layer nonuniformity, barrier nonuniformity, and low barrier (preferred for high current) lead to higher ∂V SAT1 /∂V GS than expected. This is shown schematically in Figure 2c
Challenges for High-Quality Saturation
Devices deliberately designed with a source barrier frequently produce characteristics substantially inferior to those in Figure 1. Most often, this is a result of a poorly chosen γ ( Figure 2).
By choosing a contact metal other than Au, we induced a barrier at the source. Yet aside from the appearance of a nonlinear region near the origin, the output characteristics of these Cr/Au-contact TFTs saturated in a FET-like manner, with a measured ∂V SAT1 /∂V GS ≈ 1. Calculating the saturation coefficient for these devices yields γ = 0.84, a comparatively large value but expectedly so, given the high dielectric capacitance (700 nF cm -2 ) produced by the thin gate insulator (t i about 8 nm). As a consequence, the device has FET-like saturation. Simply creating a rectifying contact is not sufficient to achieve SGT characteristics and operation.
We therefore fabricated additional DNTT TFTs with a thicker, lower-permittivity gate insulator (Figure 3b), having a gate-dielectric capacitance of approximately 34 nF cm -2 and thus a smaller predicted γ ≈ 0.21. Transistors with Cr/Au contacts showed the largest drain current and mobility (2 cm 2 V -1 s -1 , comparable with 2.1-3.0 cm 2 V -1 s -1 obtained from the previous devices with high specific gate capacitance), and a ∂V SAT1 /∂V GS ≈ 1. The high mobility and typical TFT saturation behavior indicate that these devices are effectively operating in Ohmic mode, and not as SGTs, contrary to expectations. The Cr work function, 4.5 eV, should create a sizeable energy barrier, which would pinch off the source. The explanation is likely a shift in the shadow mask position between the Cr and Au evaporations, leading to the presence of Au at the edge of the source electrode closest to the drain, instead of Cr. This eventually would result in the absence of the desired large energy barrier, and with it, the inability of the contact to pinch off the semiconductor and achieve low-voltage saturation. This Au contact area would also be responsible for high charge injection density at the source. The high extracted values for effective mobility corroborate this hypothesis.
Transistors with Cu/Au, Ti/Au, and Al/Au contacts showed relatively lower drain current and reduced mobility of 1.3, 0.27 Figure 2. Saturation coefficient γ plays an essential role in the electrical characteristics of SGTs. a) Calculated γ as a function of semiconductor and gate insulator permittivities for devices in a variety of material systems found in literature, as well as fabricated organic (DNTT) and LTPS transistors with P or BF 2 barrier modification implants. The plot identifies permittivity ratios (ε i /ε s ) of frequently used semiconductor/insulator combinations and facilitates choosing the minimum insulator thickness (t i ) required to achieve a certain γ value, given a material system and semiconductor layer thickness (t s ). b) Deviations from the calculated γ values have been observed, and these discrepancies are usually larger in devices with larger γ. c) Saturation may occur at higher voltages than predicted by the value of γ if the charge density in the semiconductor is too high, or the contact barrier is too low to allow full depletion of the semiconductor at the edge of the source closest to the drain. www.advelectronicmat.de and 0.015 cm 2 V -1 s -1 , respectively. This trend is consistent with the reducing metal work function (4.5-5.5, 4.33, and 4.1-4.3 eV, respectively). Although the metal work function is not a precise indicator of contact barrier height, lower work functions result in lower current density. Additionally, in all cases, the measured ∂V SAT1 /∂V GS ≈ 0.5 indicates substantially earlier saturation than would be expected of a conventional TFT. These observations are fully consistent with SGT operation. The disagreement with the calculated γ is not unique to these devices. As we will show next, correctly choosing the design parameters to obtain a low γ is the first of a series of strategies that contribute to drastically improving SGT output characteristics.
First, the source pinch-off will occur once the semiconductor is fully depleted, and, at lower drain-source voltages, the drain current cannot exceed the envelope defined by the current capability of the semiconductor channel in series with the barrier Figure 3. Color plots represent original data, while grayscale graphs are reproduced from literature, with permission. a) Organic DNTT transistors with a gate-dielectric capacitance of ≈700 nF cm -2 have a large saturation coefficient, γ = 0.84, which leads to FET-like saturation whether either Ohmic (Au) or rectifying (Cr/Au) contacts are used. b) Organic DNTT devices with a gate-dielectric capacitance of ≈34 nF cm -2 and Cr/Au, Cu/Au, Ti/Au, Al/Au show a reduction in drain current with decreasing source-metal work function. Transistors with Cr/Au sources exhibit FET-like behavior, while the expected SGT-like saturation is seen in the low work function realizations. c) LTPS SGTs, comprising a P barrier lowering implant at the source, demonstrate higher than expected saturation voltage, due to the inability of the source barrier to fully deplete the semiconductor at the edge of the source. d) IGZO SGTs with thick (t s = 50 nm) exhibiting poor saturation performance, due to the partly depleted semiconductor at the edge of the source (reprinted from Ref. [16] under the Creative Commons CC BY license). e) ZnO SGT in which high excess charge leads to high output conductance in saturation and inability to turn off completely. f) MoS 2 SGT with poor output conductance between V SAT1 and V SAT2 , due to suboptimal lateral field screening (reprinted from Ref. [36] with permission from Elsevier). g) IGZO SGT with inkjet-printed contacts, demonstrating the same behavior as (f) (reprinted from Ref. [37] under the Creative Commons CC BY license).
www.advelectronicmat.de (Figure 2c). For a long channel, the source is initially more conductive than the source-drain gap and will only reach the saturation current at a drain-source voltage higher than the expected V SAT1 . This explains the discrepancy between the measured and calculated values in the DNTT devices in Figure 3b with a comparatively large 100 µm channel length.
Second, and for similar reasons, the source-barrier height needs to be sufficiently high. As shown in Figure 3c, a transistor otherwise identical to the LTPS SGT [9] in Figure 1b but with a substantially lower source barrier, produced ∂V SAT1 /∂V GS ≈ 0.3, much higher than the calculated γ of 0.04. Here, the lower barrier is more conductive and current saturates at a higher reverse bias, once again in a similar manner to the qualitative description of Figure 2c. A similar reasoning may be followed to account for the discrepancy observed in Figure 2b for the IGZO SGTs with MoCr contacts. [24] These constraints, brought about by the interplay between the relative conductivities of the source region and FET channel, are generally responsible for a deviation from the calculated γ. We visually illustrate this in Figure 2b. It is also apparent that devices with lower γ values usually suffer less deviation from expected behavior; in the output characteristics, the maximum allowable current envelope of the FET channel is steeper at low drain-source voltages. Data from numerous material systems leads to the following rule of thumb: successful designs should aim for γ values significantly below 0.3, and preferably under 0.1.
Due to a related effect, poorly saturating curves are also frequently found when the semiconductor is comparatively thick, even if the relative semiconductor and insulator capacitances are well chosen. This is attributable to the inability of the semiconductor layer to reach full depletion under the given biasing condition, and manifests itself as a large output conductance (sloping saturated curves) in the region above V SAT1 [16] (see Figure 3d). A full treatment is found in Ref. [16] (Supporting Information).
Achieving full source pinch-off may also be inhibited if the semiconductor layer has substantial excess charge, or if the insulator interface creates a surface doping effect. When this occurs, the transistor also tends not to switch off well, due to the enduring presence of a conduction path between source and drain. A set of solution-processed ZnO SGTs we have recently fabricated presented this behavior (Figure 3e). Here, the saturation voltage change with V GS appears to be in the order of 0.25 V/V and the drain current reaches a relatively modest 20 nA µm -1 , indicating the dominance of contact effects. Nevertheless, the saturation performance is poor, with a linear increase of drain current with drain voltage, explained by the inability of the reverse-biased source barrier to fully deplete the semiconductor layer.
Finally, poor saturation can also occur due to ineffective screening of the source from the lateral field induced by the drain (Figure 3f,g). [32] This effect is generally seen in shortchannel devices, in which the lateral (drain) electric field competes in magnitude with the gate field normal to the source. In such cases, V SAT1 is clearly defined in the output characteristics, but the transistor has poor gain above V SAT1 , as full saturation only occurs above V SAT2 . For all practical purposes, the device operates as a conventional TFT with reduced current and mobility. General mitigation strategies include increasing the source injection area [34] and providing lateral field relief via electrode patterning [32] or local doping. [20,38] Specifically, in contrast to the devices in Guo et al., [36] Liu et al. [26] obtain record intrinsic gain figures by: improving the gate control via a higher dielectric capacitance; using Pt to create a relatively high barrier at the source contact; operating the device at low gate overdrive voltage where the barrier pull-down is minimized; and incorporating an essential field relief structure. [32]
Outlook for Successful SGT Realization
As evidenced by measurements of devices we have recently fabricated, simply designing TFTs with rectifying source contacts does not inevitably lead to favorable source-gated transistor operation. This is confirmed by examples from literature, and it is very likely that, faced with suboptimal characteristics, many authors may choose to forgo publication, and may altogether halt this line of research. Our intention here is to show that awareness of the various limiting factors makes possible the realization of high-performance contact-controlled devices with ease.
To summarize, in aiming for superior SGT characteristics, it is of critical importance to choose materials which: are compatible within the process and application; have suitable permittivities; and are deposited with layer thicknesses calculated to yield γ of at most 0.3, preferably smaller than 0.1. A simple recipe for design in a given material system would involve first deciding on the minimum semiconductor thickness acceptable for the process and on the desired γ. The minimum insulator thickness is then obtained by rearranging Equation 3 For example, to reach γ < 0.1, organic transistors such as the ones presented in Figure 3 with a 20-nm-thick semiconductor layer, would require the gate insulator to be thicker than 463 nm when utilizing Al 2 O 3 , but only 201 nm if SiO 2 was used instead. An IGZO/Al 2 O 3 transistor with identical 20 nm active layer would only require 101 nm insulator thickness.
From Figure 2a, it is clear that the most convenient way of designing SGTs is by using high permittivity, thin semiconductor layers. As such, the relatively high ε s = 16 of IGZO makes it an ideal candidate for facile realization of SGT-type devices. Conversely, the nanometer-sale active layers achievable in some organic semiconductors or MoS 2 allow them to produce excellent devices despite the relatively low material permittivity.
The ideal SGT combines low permittivity insulators with high permittivity semiconductors. IGZO, Silicon, Germanium, InN or InAs active layers and air/vacuum gaps represent the ultimate match.
In the dialog between designers and process engineers, it may be convenient to discuss designs in terms of layer capacitances. If so, the conditions can be expressed equivalently as
www.advelectronicmat.de
In conceiving contact-controlled devices, one should be mindful of the trade-off between low γ values and gate-source voltages required, given the reduced insulator capacitance as γ increases. Even with the correct sizing and material, care should also be taken, to: ensure optimal interplay between barrier height and channel length; reduce bulk and interface charge, which may prevent source pinch-off; and minimize the detrimental consequences of lateral electric fields.
Contact-controlled device research is witnessing an unmistakable upsurge. By following these simple but critical design rules, their valuable properties can be utilized to their potential. High-gain, low-frequency analog applications can be designed for a vast range of emerging applications: innovative displays, printed sensor front-ends, disposable low-power wearables, and flexible IoT devices.
Experimental Section
Organic Transistor Fabrication: Organic transistors were fabricated in the inverted staggered (bottom-gate, top-contact) device architecture using the small-molecule semiconductor DNTT (Sigma Aldrich). [35] The organic transistors were fabricated either on silicon substrates or on flexible polyethylene naphthalate (PEN) with a thickness of 125 µm (Teonex Q65 PEN; kindly provided by William A. MacDonald, DuPont Teijin Films, Wilton, U.K.). For the organic transistors fabricated on silicon substrates, the heavily doped silicon serves as both the substrate and a common gate electrode. In these TFTs, the gate dielectric is a stack of 100-nm-thick silicon dioxide grown by thermal oxidation, 8-nm-thick aluminum oxide deposited by atomic-layer deposition (ALD) and a self-assembled monolayer (SAM) of n-tetradecylphosphonic acid (PCI Synthesis, Newburyport, MA, USA) [39] with a unit-area capacitance of 34 nF cm -2 . For the organic transistors fabricated on flexible PEN, patterned gate electrodes are prepared by depositing 30-nm-thick aluminum through a shadow mask (CADiLAC Laser, Hilpoltstein, Germany). [40] The gate dielectric is a stack of 7-nm-thick aluminum oxide grown by plasma oxidation and an n-tetradecylphosphonic acid SAM with a unit-area capacitance of 700 nF cm -2 . For all organic transistors, nominally 25-nm-thick DNTT was deposited by thermal sublimation in vacuum with a deposition rate of 0.3 Å s -1 and with the substrate held at a temperature of 60 °C. For the source/drain contacts, either Au with a thickness of 30 nm or a stack of either Cr, Cu, Ti, or Al (with a thickness of 30 nm) followed by Au (also having a thickness of 30 nm) was deposited by thermal evaporation in vacuum with a rate of 0.3 Å s -1 . The organic transistors have a channel length of 100, 150, or 200 µm and a channel width of 200 µm.
ZnO Solution Preparation: ZnO solgel solution with a final concentration of 0.1 m in 2-methoxyethanol (99.8%, anhydrous, Sigma Aldrich UK) was prepared using zinc acetate dihydrate (≥98%, Sigma Aldrich UK) as the ZnO precursor and ethanolamine (≥99%, Sigma Aldrich UK) as the stabilizer (molar ratio precursor: stabilizer 1:1). The solution was immediately transferred to a hot plate at 60 °C and stirred continuously for 2 h in air ambient, followed by a minimum of 4 d at room temperature (RT) to complete a stable sol formation. The solution was used for spin coating in its sol form (clear solution) before white particulates appeared.
ZnO Transistor Fabrication: The ZnO solution was used as the semiconductor in solution processed top contact bottom gate transistor devices with a Si/SiO 2 /ZnO/MoO 3 /Cr/Au structure. To fabricate the devices, first, p-doped Si wafers with 300 nm layer of thermally grown SiO 2 insulator were cleaned acetone, IPA and DI water subsequently (15 min each) in an ultrasonic bath, followed by O 2 plasma treatment (100 W, 5 min). Next, the ZnO solgel solution was spin coated on the Si/SiO 2 substrates in a 2-step process at 1000 rpm, 10 s then 5000 rpm, 20 s. The samples were immediately baked on a hotplate at 150 °C for 10 min. Two layers of ZnO were coated repeating this process, followed by a final anneal at 450 °C for 2-3 h. After returning to RT, this layer was pattered by photolithography and etched with a 5% v/v solution of acetic acid (glacial, ≥99%, Sigma Aldrich UK) to create the active layer islands. The S-D contacts were then aligned (Mask Aligner Suss MA 1006) on top of the ZnO islands and patterned by photolithography, prior to thermally evaporating (Moorfiled Nanotechnology Thermal Evaporator) a 3 nm thin barrier layer of MoO 3 nanoparticles (Sigma Aldrich UK). Next, the metal contacts 10 nm Cr and 100 nm Au were deposited through electron beam deposition (Univex 5009 Electron Beam Evaporator), followed by lift-off of the photoresist in acetone, to realize the completed transistor devices.
Polycrystalline Silicon Transistor Fabrication: Self-aligned bottom gate top contact LTPS SGTs were fabricated on glass substrates in multiple batches at Philips MiPlaza in 2006-2009 (the full process has been reported in Ref. [20]). Starting with definition of a Cr gate, consecutive 100 nm gate dielectrics of SiN x and SiO 2 were deposited by PECVD, followed by 40 nm a-Si:H. After definition and doping of the high n-type drain region, polysilicon islands were formed with excimer laser and dry etching. A 120 nm SiO 2 field plate insulator was then deposited and source contact windows were opened, through which 5 keV ion implantation of either 5 × 10 12 cm -2 P (device thus far unpublished and denoted LTPS(P)) or 1 × 10 13 cm -2 BF 2 device first reported in Ref. [9] and identified as LTPS(BF 2 ) was performed to modify the source energy barrier height. Cr was deposited as the contact metal and Ti/Al used for the field plate structure. | 4,933.6 | 2021-12-15T00:00:00.000 | [
"Engineering"
] |
Lung Nodule Detection via Deep Reinforcement Learning
Lung cancer is the most common cause of cancer-related death globally. As a preventive measure, the United States Preventive Services Task Force (USPSTF) recommends annual screening of high risk individuals with low-dose computed tomography (CT). The resulting volume of CT scans from millions of people will pose a significant challenge for radiologists to interpret. To fill this gap, computer-aided detection (CAD) algorithms may prove to be the most promising solution. A crucial first step in the analysis of lung cancer screening results using CAD is the detection of pulmonary nodules, which may represent early-stage lung cancer. The objective of this work is to develop and validate a reinforcement learning model based on deep artificial neural networks for early detection of lung nodules in thoracic CT images. Inspired by the AlphaGo system, our deep learning algorithm takes a raw CT image as input and views it as a collection of states, and output a classification of whether a nodule is present or not. The dataset used to train our model is the LIDC/IDRI database hosted by the lung nodule analysis (LUNA) challenge. In total, there are 888 CT scans with annotations based on agreement from at least three out of four radiologists. As a result, there are 590 individuals having one or more nodules, and 298 having none. Our training results yielded an overall accuracy of 99.1% [sensitivity 99.2%, specificity 99.1%, positive predictive value (PPV) 99.1%, negative predictive value (NPV) 99.2%]. In our test, the results yielded an overall accuracy of 64.4% (sensitivity 58.9%, specificity 55.3%, PPV 54.2%, and NPV 60.0%). These early results show promise in solving the major issue of false positives in CT screening of lung nodules, and may help to save unnecessary follow-up tests and expenditures.
Lung cancer is the most common cause of cancer-related death globally. As a preventive measure, the United States Preventive Services Task Force (USPSTF) recommends annual screening of high risk individuals with low-dose computed tomography (CT). The resulting volume of CT scans from millions of people will pose a significant challenge for radiologists to interpret. To fill this gap, computer-aided detection (CAD) algorithms may prove to be the most promising solution. A crucial first step in the analysis of lung cancer screening results using CAD is the detection of pulmonary nodules, which may represent early-stage lung cancer. The objective of this work is to develop and validate a reinforcement learning model based on deep artificial neural networks for early detection of lung nodules in thoracic CT images. Inspired by the AlphaGo system, our deep learning algorithm takes a raw CT image as input and views it as a collection of states, and output a classification of whether a nodule is present or not. The dataset used to train our model is the LIDC/IDRI database hosted by the lung nodule analysis (LUNA) challenge. In total, there are 888 CT scans with annotations based on agreement from at least three out of four radiologists. As a result, there are 590 individuals having one or more nodules, and 298 having none. Our training results yielded an overall accuracy of 99.1% [sensitivity 99.2%, specificity 99.1%, positive predictive value (PPV) 99.1%, negative predictive value (NPV) 99.2%]. In our test, the results yielded an overall accuracy of 64.4% (sensitivity 58.9%, specificity 55.3%, PPV 54.2%, and NPV 60.0%). These early results show promise in solving the major issue of false positives in CT screening of lung nodules, and may help to save unnecessary follow-up tests and expenditures.
Keywords: lung cancer, computed tomography, lung nodules, computer-aided detection, reinforcement learning inTrODUcTiOn Computed tomography (CT) is an imaging procedure that utilizes X-rays to create detailed images of internal body structures. Presently, CT imaging is the most preferred method to screen the earlystage lung cancers in at-risk groups (1). Globally, lung cancer is the leading cause of cancer-related death (2). In the United States, lung cancer strikes 225,000 people every year and accounts for $12 billion in healthcare costs (3). Early detection is critical to give patients the best chance of survival and recovery. Screening high risk individuals with low-dose CT scans has been shown to reduce mortality (4). However, there is significant inter-observer variability in interpreting screenings as well as a large number of false positives which increase the cost and reduce the effectiveness of screening programs. Given the high incidence of lung cancer, optimizing screening by reducing false positives and false negatives has significant public health impact by limiting unnecessary biopsies, radiation exposure, and other secondary costs of screening (5).
Several studies have shown that imaging can predict lung nodule presence to a high degree (6). Clinically, detecting lung nodules is a vital first step in the analysis of lung cancer screening results-the nodules may or may not represent early-stage lung cancer. Numerous computer-aided detection (CAD) methods have been proposed for this task. The majority, if not all, utilize classical machine learning approaches such as supervised/unsupervised methods (7). The goal of this work is to adopt for the first time a reinforcement learning (RL) algorithm for lung nodule detection. Developed by Google DeepMind, RL is a cutting-edge machine learning approach which has improved upon numerous CAD systems and helped to beat the best human players in the game of Go, one of the most complex games humans ever invented (8). Here, we apply RL to the lung nodule analysis (LUNA) dataset and analyze the performance of the RL model in detecting lung nodules from thoracic CT images.
MaTerials anD MeThODs lung nodule Data
For the training of our algorithm, we utilize the LUNA dataset, which curates CT images from publicly available LIDC/IDRI database. In total, there are 888 CT scans included. The database also contains annotations collected in two phases with four experienced radiologists. Each radiologist marked lesions they identified as non-nodule (<3 mm) and nodule (≥3 mm) and the annotation process has been described previously (9). The reference standard consists of all nodule ≥3 mm accepted by at least three out of four radiologists. Annotations that are not included in the reference standard (non-nodules, nodules <3 mm, and nodules annotated by only one or two radiologists) are referred to as irrelevant findings (9). A key benefit of this dataset is the inclusion of voxel coordinates in the annotation of nodules, which proves immensely useful when using a RL approach, described in the next section. Figure 1 illustrates examples of nodule and non-nodules from a single CT scan.
Data normalization
To balance the intensity values and reduce the effects of artifacts and different contrast values between CT images, we normalize our dataset. The Z score for each image is calculated by subtracting the mean pixel intensity of all our CT images, μ, from each image, X, and dividing it by σ, the SD of all images' pixel intensities. This step is helpful when inputting information into a neural network because it fine-tunes the input information fed into a convolution algorithm (10).
reinforcement learning
Reinforcement learning is the science of mapping situations to actions (11). It is a type of machine learning that bridges the well-established classical approaches of supervised and unsupervised learning, where target values are known and unknown, respectively. RL differs in that it seeks to model data without any labels, but rather with incremental feedback. Its recent popularity stems from its ability to develop novel solution schemas, even outperforming humans in certain domains, because it learns to solve a task by itself (12). Essentially, it is a way of programming agents by either a reward or a punishment without the need to specify how a task is to be achieved. A simple RL model is shown in Figure 2 illustrating how an agent's actions in a given environment affect its resulting reward and state. In its infancy, RL was inspired by behavioral psychology, where agents (i.e., rodent) learned tasks by being given a reward for a correct action taken in a given state. This mechanism ultimately creates a feedback loop.
Whether the agent, in our case a neural network model, navigates a maze, plays a game of ping pong, or detects lung nodules, the approach is the same. A basic reinforcement algorithm is modeled after a Markov decision process. For a set number of states, there are a given number of possible actions, and a range of possible rewards (13). To help optimize an agent's actions a Q-learning algorithm is used (14).
, * , , How a model knows the potential rewards from taking a certain action comes from experience play. That is, it stores numerous combinations of state to state transitions (s→s +1 ), with the corresponding action, a, taken by the model and the resultant reward, r: denoted as (s, a, r, s +1 ). For instance, in a game environment, the best action to take would be the action that leads to the greatest future rewards (i.e., winning the game), even though the most immediate action may not be rewarding in the short term. As shown in Eq. 2, the expected future rewards are approximated by multiplying the discount rate, λ, by the value of the action that would return the largest future reward based on all possible actions, maxQ(st+1, at). For a given action, what is learned is the reward for that action, rt+1, plus the largest future reward expected less current action value, Q(st, at). This is learned at a rate, r, the extent to which the algorithm overrides old information, and it is valued between 0 and 1. To learn which series of actions result in the greatest number of future rewards, RL algorithms depend on both greedy and exploratory search. The two methods allow a model to explore all possible ways to accomplish a task, and select the most efficient rewarding (12).
Using the RL approach to tackle the lung nodule task requires one main adaptation, which is how we define a state. In a typical RL task, a state would refer to a snapshot of everything in an environment at a certain time. However, with lung CT images, which are a collection of axial lung scans, we define a state as every 10 stacked axial images. Hence, our environment is very deterministic. That is, any action taken in a lung CT image state would lead to the succeeding 10 scans, from top to bottom. Whereas in a conventional task, such as playing a game, depending on the action there is more than one succeeding state possible. This key difference adapts our reward function to act solely as a reward function and evaluate a state on whether it immediately has a reward or not, instead of incorporating a value function which factors the total reward our agent can expect from a given state in the distant future. This makes logical sense given that there is only one possible distant future in our radiographic image environment, whereas in a game environment there is more than one possible distant future. As such, rewards are 1 and 0, depending on whether a classification is correct or incorrect, respectively, for the immediate state at hand only. Thus, the memory replay used to train our model, excludes the succeeding state, and only captures current state, action, and reward. convolutional neural networks (cnns) Learning to control agents directly from high dimensional sensory inputs (i.e., vision and speech) is a significant challenge in RL (11). A key component of our RL model is a CNN. It helps our model make sense of the very high dimensional CT images that we insert into our model. A standard slice has a width and length of 512 × 512. With our input of 10 slices for every state, this amounts to approximately 2,621,440 pixels. A CNN is able to contend with this because it creates a hierarchical representation of high dimensional data such as an image (10). Unlike a regular neural network, the layers of a CNN have neurons arranged in three dimensions (width, height, and depth) and respond to a receptive field, a small region of the input image, as opposed to a fully connected layer which responds to all the neurons. For a given neuron, it learns to detect features from a local region, which facilitates the capturing of local structures while preserving the topology of the image. The final output layer reduces the image into a vector of class scores. A CNN deep learning system is composed of five layers: an input layer, a convolutional layer, an activation layer, a pooling layer, and a fully connected layer. With most CNN architectures having more than one of each layer, they are thus referred to as "deep" learning (10). The function of each layer is described further below.
Input Layer
This layer holds the raw pixels values of the input image (colored blue in Figure 3).
Convolutional Layer
This layer visualized by the red boxes in Figure 3 is composed of several feature maps along the depth dimension, each corresponding to a different convolution filter. All neurons with the same spatial dimension are connected to the same receptive field of the input image. This facilitates capturing a wide variety of imaging features. The depth of the layer, meaning the number of convolution filters, represents the number of features that can be extracted from each input receptive field. Each neuron in a feature map shares exactly the same weights, which define the convolution filter. This allows reducing the number of weights, and thus increasing the generalizability of the architecture (10).
Activation Layer
Often seen as one with the convolutional layer, as in Figure 3, the activation layer applies a threshold function to the output of each neuron in the previous layer. In our network, we use a rectified linear unit (RELU) activation, where RELU(x) = max(0, x), meaning it fires the real value of the output and thresholds at zero. It simply replaces the negative values with "0. "
Pooling Layer
Typically placed after an activation layer, this layer down-samples along spatial dimensions. Shown by the purple box in Figure 3, it selects the invariant imaging features by reducing the spatial dimension of the convolution layer. The most commonly used is max pooling, which selects the maximum value of four of its inputs as the output, thus preserving the most prominent filter responses.
Fully Connected Layer
Shown as green in Figure 3, this layer connects all neurons in the previous layer with a weight for each connection. As the output layer, each output nodes represents the "score" for each class.
To facilitate the learning of complex relationships, multiple convolutional-pooling layers are combined to form a deep architecture of nonlinear transformations, helping to create a hierarchical representation of an image. This allows learning complex features with predictive power for image classification tasks (10). As illustrated in Figure 3, we use 3D CNN given that nodules are spherical in shape, and can best be captured with 3D convolutions.
Data augmentation
Overfitting is a result of network parameters greatly outnumbering the number of features in the input images. Given the network size and the number of features available from the CT images, our model tended to overfit, hence the need to increase the number of CT images. To counter this overfitting, we used standard deep neural network methods, such as artificially augmenting the dataset using label-preserving transformations (15). The data augmentation consists of applying various image translations, such as rotations, horizontal and vertical flipping, and inversions. We apply a random combination of these transformations on each image, thus creating nominally "new" images. This multiplies the dataset by many folds and helps in reducing overfitting (10).
iMPleMenTaTiOn anD eXPeriMenTs implementation Our python code uses the Keras package (16) where TP, FP, TN, and FN stand for true positive, false positive, true negative, and false negative, respectively. Figures 4 and 5, for both loss and accuracy we observed a steady improvement. In Figure 4, showing the loss value over time, or epochs, there is a steady decline to approximately zero. A similar pattern holds with accuracy, in Figure 5, but with the steady increase to a value of one, meaning perfect score. Both graphs were generated from training on 70% of the dataset (1,607 states) and cross-validating on 20% of that (321 states). As observed in both graphs, the model is "learning", however there still remains considerable volatility as shown by the validation curves.
As shown in
The conclusive results from the training and testing for our model is detailed in Table 2 The test sample size was 30% of the dataset (668 states).
The testing results listed in Table 2 are based on a cutoff value of 0.5. Given our model is a binary classifier, this means that for any state that it predicts, the likelihood of nodule is at least 0.5.
DiscUssiOn
In this study, we present a robust non-invasive method to predict the presence of lung nodules, a common precursor to lung cancer, from lung CT scans using a RL method. A major advantage of this approach is that it allows to develop novel and unpredictable solutions to complex problems. From the results of our training using the LUNA dataset, we were able to achieve superb sensitivity, specificity, accuracy, PPV, and NPV (all greater than 99%). While the metrics for the testing dataset were lower, they were consistent. In both data size and number of trials, we achieved similar results. This consistency suggests that our research approach of using RL with non-pre-processed data is reproducible. Moreover, given the nature of RL, the model will only continue to improve with time and more data. The way in which RL algorithms continue to improve depends not only on the quality of the dataset, but also more importantly its size. In the training of the AlphaGo, it was trained on master-level human players, instead of picking up the best strategies to win from scratch (8). In addition, the RL algorithm learned through more than 30 million human-on-human games. Factoring in hardware, AlphaGo required $25 million in computer hardware (17), it was trained on master-level human players (8).
Although the tasks of playing a game of Go is very different from detecting lung nodules, an inference we can draw is that reinforcement learning algorithms, such as AlphaGo, require substantial data to train. Given the original dataset's small size, there is an inherent difficulty in capturing the huge variability and structural differences in the lung volumes of human beings. With only 888 CT scans and approximately 1,148 nodule states in our dataset, with 70% of that being used for training, the lesson we have learned is that our model needs a significant amount of more data. This is evidenced by the tremendous amount of data and hardware needed to train AlphaGo to reach super human performance.
It is worth noting that AlphaGo's performance is based on how well it performed against human players. Similarly, our model performance is based on how well it performed against at least three radiologists in detecting lung nodules. As described by Armato et al. (9) how a given lesion was classified as a nodule was determined by a consensus of at least three of the four radiologists. A significant variability is observed when comparing the number of lesions classified as a nodule by one radiologist versus at least three radiologists. For the lesions identified in all the scans, 928 lesions were classified as nodules ≥3 mm by all four radiologists and 2,669 lesions were classified as a nodule ≥3 mm by at least one radiologist. This means for nodules ≥3 mm, the false discovery rate for a given individual radiologist is 65.2% (9). In contrast, despite the overfitting, our model classification yields a false discovery rate of 44.7% on the validation dataset, which is an improvement compared to an individual radiologist.
Given the very high training results, the question of overfitting arises. With a small dataset, the underlying probability distribution of lung nodules is not sufficient to create a fully generalizable model, especially given it is based on RL. As with most parametric tests, a fundamental assumption of samples is that they adequately capture the variance of the population they represent. With small datasets, depending on the variable, a random subset of the data may not adequately capture the variance of the overall dataset. With the LUNA dataset, this is particularly an issue given the fact that it is very high dimensional and our model requires significantly more data to capture the true variance of its countless variables. Most CT image datasets comprise of thousands of images, as compared to the millions of games in AlphaGo, and thus the comparison is not quite the same. We employed dropout and data augmentation to increase the generalizability of our model in response to the overfitting. Together these two approaches have minimally dampened the effect. An alternate approach we also experimented with was to reduce the network size, however, this approach resulted in significant volatility in the training and validation results. Regardless of the overfitting, the performance on the validation data set indicates that our model achieves enough generalization to compete with a human radiologist and could serve as a second reader.
A strength of our research approach is the lack of preprocessing. It is known that medical imaging, including CT images, can be very heterogeneous. From the number of image slices, scanning machine used, and scanning parameters used, the image data for each patient is very disparate. A significant negative byproduct of this heterogeneity is the astronomical number of insignificant features generated that are unrelated to one's outcome of interest, such as the presence of lung nodules. For a machine learning algorithm to contend with this either the data size has to exponentially increase or many of the insignificant features have to be pre-processed out by filtering for only the relevant features. The former option of increasing the dataset is impractical, as the LUNA dataset is already one of the largest and most comprehensive image datasets. Hence, most, if not all, approaches in the current literature on CAD systems for lung nodule detection take the second option of pre-processing. From using various filters, masks, and general pre-processing tools, these methods heavily curate and alter the raw medical image data. As a result, this can create an infinite number of variations of the original dataset, and such a subjective practice makes it very difficult to reproduce any of the experimental results. We choose to use data without pre-processing to ensure that our results are reproducible.
Our work highlights the promise of using RL for lung nodule detection. There are several practical applications of this model, one of which is to serve as a second opinion or learning system for radiologists and trainees in identifying lung nodules. A strong appeal of using a RL approach is that the model is always in a learning state. With every new patient, the model expands its learning by factoring in the new information and building upon its probabilistic memory of historical information from previous patients. This phenomenon is what allowed the artificial intelligence model AlphaGo to keep improving after each match, eventually beating each player after several matches, including the reigning world champion. Likewise, we expect that our model will continue to improve as it observes more and more cases.
aUThOr cOnTriBUTiOns IA and GH: carried out primary experiments of project. GG, MK, and XM: provided guidance on methodology and overall project. YL, WM, and BN: provided lab and technical support. JD: generated research ideas, provided guidance on methodology and overall project, and reviewed manuscript. | 5,490.8 | 2018-04-16T00:00:00.000 | [
"Computer Science",
"Medicine"
] |
The beta-adrenergic receptor kinase (GRK2) is regulated by phospholipids.
The β-adrenergic receptor kinase (βARK) is a member of growing family of G protein coupled receptor kinases (GRKs). βARK and other members of the GRK family play a role in the mechanism of agonist-specific desensitization by virtue of their ability to phosphorylate G protein-coupled receptors in an agonist-dependent manner. βARK activation is known to occur following the interaction of the kinase with the agonist-occupied form of the receptor substrate and heterotrimeric G protein βγ subunits. Recently, lipid regulation of GRK2, GRK3, and GRK5 have also been described. Using a mixed micelle assay, GRK2 (βARK1) was found to require phospholipid in order to phosphorylate the β2-adrenergic receptor. As determined with a nonreceptor peptide substrate of βARK, catalytic activity of the kinase increased in the presence of phospholipid without a change in the Km for the peptide. Data obtained with the heterobifunctional cross-linking agent N-3[125I]iodo-4-azidophenylpropionamido-S-(2-thiopyridyl)cysteine ([125I]ACTP) suggests that the activation by phospholipid was associated with a conformational change in the kinase. [125I]ACTP incorporation increased 2-fold in the presence of crude phosphatidylcholine, and this increase in [125I]ACTP labeling is completely blocked by the addition of MgATP. Furthermore, proteolytic mapping was consistent with the modification of a distinct site when GRK2 was labeled in the presence of phospholipid. While an acidic phospholipid specificity was demonstrated using the mixed micelle phosphorylation assay, a notable exception was observed with PIP2. In the presence of PIP2, kinase activity as well as [125I]ACTP labeling was inhibited. These data demonstrate the direct regulation of GRK2 activity by phospholipids and supports the hypothesis that this effect is the result of a conformational change within the kinase.
The -adrenergic receptor kinase (ARK) is a member of growing family of G protein coupled receptor kinases (GRKs). ARK and other members of the GRK family play a role in the mechanism of agonist-specific desensitization by virtue of their ability to phosphorylate G protein-coupled receptors in an agonist-dependent manner. ARK activation is known to occur following the interaction of the kinase with the agonist-occupied form of the receptor substrate and heterotrimeric G protein ␥ subunits. Recently, lipid regulation of GRK2, GRK3, and GRK5 have also been described. Using a mixed micelle assay, GRK2 (ARK1) was found to require phospholipid in order to phosphorylate the  2adrenergic receptor. As determined with a nonreceptor peptide substrate of ARK, catalytic activity of the kinase increased in the presence of phospholipid without a change in the K m for the peptide.
Data obtained with the heterobifunctional cross-linking agent N-3-[ 125 I]iodo-4-azidophenylpropionamido-S-(2-thiopyridyl)cysteine ([ 125 I]ACTP) suggests that the activation by phospholipid was associated with a conformational change in the kinase. [ 125 I]ACTP incorporation increased 2-fold in the presence of crude phosphatidylcholine, and this increase in [ 125 I]ACTP labeling is com-
pletely blocked by the addition of MgATP. Furthermore, proteolytic mapping was consistent with the modification of a distinct site when GRK2 was labeled in the presence of phospholipid. While an acidic phospholipid specificity was demonstrated using the mixed micelle phosphorylation assay, a notable exception was observed with PIP 2 . In the presence of PIP 2 , kinase activity as well as [ 125 I]ACTP labeling was inhibited. These data demonstrate the direct regulation of GRK2 activity by phospholipids and supports the hypothesis that this effect is the result of a conformational change within the kinase.
The molecular mechanisms involved in signal transduction of G protein-coupled receptors are best understood in the visual system where rhodopsin serves as the "receptor" for light (1) and the -adrenergic pathway in which the -adrenergic receptor (AR) 1 binds catecholamines (2,3). A feature common to both model systems as well as many other G protein receptors is the diminished responsiveness with time to a signal of equal intensity. This phenomenon is known as desensitization (4) and exhibits both an agonist-specific and nonspecific pattern. Rapid, agonist-specific desensitization of rhodopsin and the  2 -adrenergic receptor ( 2 AR) occurs in response to the phosphorylation of the receptor by the enzymes rhodopsin kinase and the -adrenergic receptor kinase (ARK) (5). Rhodopsin kinase and ARK are members of a family known as G proteincoupled receptor kinases (GRKs). A common feature to the GRK family of kinases is multi-site phosphorylation of receptor substrates in response to agonist occupancy (6). The relationship between agonist occupancy and receptor phosphorylation by GRKs is key to the specificity of the desensitization process, while other kinases such as protein kinase A and C play a role in nonspecific or heterologous desensitization. Two possible mechanisms could explain the enhanced phosphorylation of the activated form of the receptor by kinases of the GRK family. First, receptor occupancy may induce a conformational change exposing potential phosphorylation sites previously sequestered from the kinase. Alternatively, interaction of the kinase with the agonist-bound form of the receptor could result in enhanced catalytic activity of the kinase. The bulk of the experimental evidence supports the latter hypothesis (7)(8)(9). In addition to the enhanced catalytic activity of GRKs in the presence of agonist-occupied receptor, GRK2 and GRK3 activity is also increased by heterotrimeric G protein ␥ subunits (10 -13). The potential for finely controlled desensitization by the interplay of receptors and ␥ subunits is an exciting possibility given the evidence for dual regulation of GRK2 and GRK3 by these proteins (14).
While G protein-coupled receptors serve as substrates for the kinase after reconstitution into phospholipid vesicles, only recently has specific lipid requirements for GRKs been described. GRK5 was reported to require phospholipid for maximal catalytic activity (15). In this case, phospholipid-stimulated autophosphorylation of GRK5 was necessary for phosphorylation of the  2 AR and rhodopsin. In addition, GRK2 and GRK3 were regulated by phospholipids via the interaction with the carboxyl-terminal portion of the kinase known as the pleckstrin homology domain (16,17). In the initial report, the incorporation of negatively charged lipids into phospholipid vesicles resulted in a physical interaction of GRK2 or GRK3 with the vesicle. With the exception of PIP 2 , this resulted in enhance phosphorylation of the human m2 muscarinic acetylcholine receptor. The addition of PIP 2 resulted in inhibition of phosphorylation of the receptor in a competitive manner with respect to other phospholipids. Purified heterotrimeric G protein ␥ subunits were able to reverse this inhibition. Furthermore, the lack of additivity suggested a common site of interaction on the kinase for the lipids and G protein ␥ subunits. This hypothesis was further supported by the finding that two previously characterized G protein ␥ subunit binding proteins, phosducin and glutathione S-transferase-ARK (466 -689) fusion protein, prevented the effects of the phospholipids. In a subsequent report, similar effects in terms of PIP 2 -enhanced binding of GRK2 to phospholipid vesicles was described. In contrast to the previous manuscript, data are presented that demonstrate increased GRK2 activity when coincubated with both PIP 2 and G protein ␥ subunits. Additionally, the remaining lipids previously reported to increase kinase activity in the absence of ␥ subunits in this case required the addition of G protein ␥ subunits to enhance GRK2 activity. The interpretation by these authors was that effective membrane localization of ARK, which enhanced both the rate and extent of phosphorylation of receptor substrates, required the simultaneous presence of two pleckstrin homology domain ligands.
In this manuscript, we provide evidence of an acidic phospholipid requirement of GRK2 based on the phosphorylation of dodecyl maltoside-solubilized receptors and the direct activation of the kinase toward peptide substrates by the addition of various phospholipids. Additionally, lipids that failed to enhance kinase activity did not increase labeling of GRK2 with the the heterobifunctional reagent [ 125 I]ACTP. Finally, data obtained in the absence of G protein ␥ subunits agree with the original report in which PIP 2 promotes kinase binding to phospholipid vesicles but inhibits enzymatic activity (16). Thus, we provide evidence for catalytic activation as well as a conformational change in GRK2 following the interaction of the kinase and phospholipid. These data raise the possibility of a third level of regulation of GRK2 activity within the cell and suggest that the mechanism of phospholipid is more complex than simply targeting of the kinase to the plasma membrane.
EXPERIMENTAL PROCEDURES
Materials-Isoproterenol, alprenolol, and all phospholipids were purchased from Sigma. Western blot detection reagents including donkey anti-rabbit horseradish peroxidase-conjugated secondary antibody was obtained from Amersham Corp. The peptide substrate (RRREEEEESAAA) was synthesized using t-butoxycarbonyl chemistry with an Applied Biosystems 430A peptide synthesizer. Prior to use, the synthetic peptide was purified by reverse phase high performance liquid chromatography using a C-18 column and a 0 -50% acetonitrile gradient in 0.1% trifluoroacetic acid/water. [␥-32 P]ATP, Na 125 I, and [ 125 I]iodocyanopindolol were obtained from DuPont NEN. All other reagents were of the the highest commercial grade available.
Preparation of ARK-ARK (GRK2) was overexpressed and purified from Sf9 cells using the baculovirus expression system as previously detailed (18). Briefly, cells were harvested 48 h after infection by low speed centrifugation. Following homogenization in 20 mM Hepes, pH 7.2, 250 mM NaCl, 5 mM EDTA, 3 mM phenylmethylsulfonyl fluoride, and 3 mM benzamidine, a high speed supernatant was prepared. The soluble fraction was diluted and applied to a SP-Sepharose column, which was washed and eluted in a 50 -300 mM NaCl linear gradient. Peak activity fractions were pooled, diluted, and loaded on a heparin-Sepharose column. The ARK was eluted from the heparin column using a 100 -600 mM NaCl gradient. The peak activity was pooled and made 0.02% final in Triton X-100 and stored at 4°C. Protein concentration was determined by the method of Bradford (19) using purified bovine serum albumin as a standard. Purity of the ARK preparation was determined by SDS-polyacrylamide gel electrophoresis and was routinely Ͼ95%.
Preparation of  2 AR-The hamster  2 AR was expressed in Sf9 cells using the baculovirus system and purified on an alprenolol-Sepharose column using a modification of previously described techniques (20,21). n-Dodecyl -D-maltoside (10 mM) in 100 mM NaCl, 10 mM Tris-HCl, pH 7.4, 5 mM EDTA, 10 g/ml each of leupeptin, benzamidine, pepstatin A, and soybean trypsin inhibitor, along with 1 mM phenylmethylsulfonyl fluoride was used to effect solubilization of the receptor from the Sf9 cell pellet. Following a high and low salt wash of the alprenolol Sepharose column, the  2 AR was eluted into 50 ml of 1 mM dodecyl maltoside, 100 mM NaCl, 10 mM Tris-HCl, pH 7.4, 5 mM EDTA containing 100 M (Ϫ)-alprenolol. The receptor was concentrated to ϳ2 ml by ultrafiltration on a YM-30 or YM-100 membrane (Amicon) and stored at Ϫ80°C.
For reconstitution studies, the purified receptor was reinserted into phosphatidylcholine vesicles, pelleted by centrifugation, and resuspended in 20 mM Tris-HCl, pH 7.4, 2 mM EDTA as described previously (22). The concentration of receptor was determined using the -adrenergic receptor antagonist [ 125 I]iodocyanopindolol.
For studies in mixed detergent-lipid micelles, the purified receptor was diluted in 1 mM dodecyl maltoside, 100 mM NaCl, 10 mM Tris-HCl, pH 7.4, 5 mM EDTA and concentrated on a YM-100 membrane using a centricon device (Amicon). Alternatively, receptor purified in digitonin underwent detergent exchange on a G-50 column equilibrated in 0.5 mM dodecyl maltoside, 100 mM NaCl, 10 mM Tris-HCl, pH 7.4, 5 mM EDTA and concentrated on a YM-100 membrane using a centricon device. Crude phosphatidylcholine vesicles were produced using a tip sonicator with three 1-min bursts on ice. The desired amount of  2 AR was added to various amounts of phospholipid in 20 mM Tris-HCl, pH 7.4, 2 mM EDTA. Other phospholipids, stored as stock solutions in chloroform, were first dried under a stream of N 2 . The desired amount of phospholipid was mixed with dodecyl maltoside-solubilized  2 AR, from which alprenolol had been removed as described above, prior to use in the phosphorylation assay.
Phosphorylation of the  2 AR-Reconstituted  2 AR (0.03-0.5 pmol) was incubated with GRK2 in a total volume of 25-50 l containing 20 mM Tris-HCl, pH 7.4, 2 mM EDTA, 7.5 mM MgCl 2 , 0.1 mM [␥-32 P]ATP (200 -1,000 cpm/pmol) at 30°C. When indicated, (Ϫ)-isoproterenol was included at a final concentration of 10 -50 M. The reaction was stopped by the addition of 50 l of SDS-PAGE stop solution and the receptor resolved on 9 or 12% polyacrylamide gels (23). Phosphorylated  2 AR was visualized by autoradiography, and the corresponding bands were excised and counted to determine the extent of phosphate incorporation. Unlike some previous studies with reconstituted receptor, no correction factor for the stoichiometry of receptor phosphorylation was used in these studies.
Phosphorylation of dodecyl maltoside-solubilized  2 AR was performed in the presence or absence of crude phosphatidylcholine or various purified phospholipids in a buffer of 20 mM Tris-HCl, pH 7.4, 2 mM EDTA, 7.5 mM MgCl 2 , 0.1 mM [␥-32 P]ATP (200 -1,000 cpm/pmol). The final volume was 50 l, and the phosphorylation reaction was carried out at 30°C for various times as indicated. The reaction was stopped, and the phosphate incorporation was determined as detailed above.
Phosphorylation of Synthetic Peptides-A stock solution of purified synthetic peptides was prepared, and the pH was adjusted to 7.4 by the addition of Tris base. The peptides were incubated with GRK2 (ϳ80 ng/assay tube) in a buffer containing 20 mM Tris-HCl, pH 7.4, 2 mM EDTA, 0.1 mM [␥-32 P]ATP (200 -1,000 cpm/pmol), 7.5 mM MgCl 2 in a final volume of 25 l at 30°C. The reaction was stopped by transferring the entire reaction mixture to a 2 ϫ 2-cm square of P-81 paper followed by six washes in 75 mM phosphoric acid (10 ml/square). GRK2 activity was defined as the difference in phosphate incorporation in the presence and absence of peptide. Similar results are obtained if the kinase activity was determined in the presence or absence of GRK2. As phosphorylation reactions exhibited a higher blank when performed in the presence of added phospholipid, separate blanks were determined for assays in the absence or presence of additional phospholipid. A nonlinear regression program (Enzfitter, Elsevier-Biosoft, Cambridge, UK) was used to estimate the kinetic parameters.
[ 125 I]ACTP Labeling-[ 125 I]ACTP was synthesized as described previously by Dhanasekaran et al. (24). GRK2 (8 g or 0.1 nmol) is reacted with an excess (1-200-fold) of [ 125 I]ACTP in dimethylformamide (final concentration of DMF is Ͻ10%). After a 120-min incubation in the dark at 4°C, the reaction was stopped by the addition of SDS (2% final) and 40 mM N-ethylmaleimide. The labeled kinase band was resolved by SDS-PAGE under nonreducing conditions. The GRK2 band was localized by autoradiography, excised, and counted. The stoichiometry was calculated after determining the specific activity of the [ 125 I]ACTP preparation (typically 1 Ci/mmol).
Immunodetection of GRK2-Polyacrylamide gels were transferred overnight to nitrocellulose membranes. Immunodetection of GRK2 was performed using a rabbit antisera raised to purified ARK1 at a 1:4,000 dilution. Detection of the GRK2 using horseradish peroxidase conjugated donkey anti-rabbit antibody was as described by the manufacturer (Amersham Corp.).
RESULTS
A unique feature of the GRK family is the ability to phosphorylate the agonist-occupied form of a variety of G proteincoupled receptors. In order for a receptor to serve as a substrate for ARK, the protein is typically purified and reinserted into phospholipid vesicles. If phosphorylation of the receptor in detergent is attempted, no significant incorporation of phosphate is observed. Based on binding data, the receptor exhibits appropriate binding properties and is not degraded. While the possibility exists that detergents inhibit the kinase or reconstitution into bulk lipid provides a conformational structure required for interaction with GRKs, only recently have specific lipid requirement for GRKs been described (15)(16)(17). Fig. 1 demonstrates the phosphorylation of the  2 AR in dodecyl maltoside by GRK2. The phosphorylation of the receptor requires the addition of crude phosphatidylcholine and is stimulated by the addition of -adrenergic agonist. The stoichiometry of phosphorylation is maximal at 4 -5 mol of phosphate/mol of receptor. In data not shown, the effect of isoproterenol is blocked by the addition of the -adrenergic receptor antagonist alprenolol. Also, there is a linear dependence between the amount of  2 AR added to the reaction mixture and the phosphate incorporation observed after resolving the receptor by SDS-PAGE with the maximal stoichiometry remaining ϳ4 mol phosphate/mol receptor. Finally, the phosphorylated receptor is not pelleted by a 300,000 ϫ g centrifugation step in contrast to reconstituted receptor.
In order to define the phospholipid specificity of the GRK2 phosphorylation reaction, solubilized  2 AR is added to a variety of neutral, acidic, and basic phospholipids. As shown in Fig. 2, only lipids with a net negative charge including cardiolipin, phosphatidylglycerol, phosphatidic acid, phosphatidylserine, and phosphatidylinositol support the phosphorylation of the  2 AR by GRK2. The addition of crude, but not purified, phosphatidylcholine results in receptor phosphorylation. This suggests that a phospholipid other than phosphatidylcholine is responsible for the activation of GRK2 observed above. Fig. 3 compares the effects of phosphatidylinositol to those of PIP 2 . While phosphatidylinositol enhanced receptor phosphorylation by GRK2, there is no significant  2 AR phosphorylation in mixed micelles containing PIP 2 .
To further investigate the effect of phospholipid on GRK2 activity, a nonreceptor peptide substrate (RRREEEEESAAA) previously shown to serve as a ARK1 substrate is used (25). The time course of phosphorylation of the peptide by GRK2 is linear for 2 h in the absence or presence of phospholipid (Fig. 4). However, there is a substantial increase in phosphate incorporation observed with the addition of crude phosphatidylcholine to the reaction mixture. The effect of phosphatidylcholine is not due to protection of the kinase from degradation or other nonspecific effects as Western blotting reveals equal amounts of the 80,000 M r kinase band without evidence of proteolytic cleavage (data not shown).
The kinetic parameters of phosphorylation are determined in the presence of varying amounts of the peptide substrate. As shown in Table I, the effect of phospholipid is to increase the V max of the phosphorylation reaction approximately 3-fold (8.7-25.4 nmol/min⅐mg of ARK) without a change in the K m for the peptide substrate. In data shown in Table II, phosphatidylinositol increased phosphorylation of the peptide substrate 6-fold while PIP 2 decreased GRK2 activity to 30% of the control level.
As GRK5, a member of the GRK family related to ARK, has been shown to undergo phospholipid-stimulated autophosphorylation and association with phospholipid vesicles (15), we examine GRK2 to determine if a similar mechanism may be responsible for the phospholipid activation of the kinase. GRK2 does not autophosphorylate to any significant degree in the presence or absence of crude phosphatidylcholine (Fig. 5). At 1 h, the maximal amount of autophosphorylation is observed with a stoichiometry of 0.1 mol phosphate/mol kinase. A Western blot of GRK2 incubated with vesicles prepared from purified lipids demonstrates a significant amount of immunoreactivity associated with the pellet (Fig. 6). 10 -20% of the immunoreactive GRK2 did pellet with phosphatidylinositol, and ϳ5% pelleted with PIP 2 . These data stand in contrast to that seen with GRK5 (15) and suggest different mechanisms of lipid activation of the two kinases. Moreover, the demonstration of GRK2 association with vesicles containing either phosphatidylinositol or PIP 2 is in agreement with the previously published findings (16,17).
The heterobifunctional cross-linking reagent, N-3-[ 125 I]iodo-4-azidophenylpropionamido-S-(2-thiopyridyl)cysteine has been used to map the molecular structure of transducin's ␣ subunit (24). Under mild, nondenaturing conditions, [ 125 I]ACTP derivatizes reduced sulfhydryls to form a mixed disulfide easily cleaved by the addition of excess reducing agents. When GRK2 is incubated for 2 h in the dark with a 100-fold molar excess of [ 125 I]ACTP relative to kinase, there is incorporation of ϳ1 mol [ 125 I]ACTP/mol kinase. The addition of phospholipid vesicles to the reaction results in a 2-fold enhancement of incorporation to a stoichiometry of 2 mol of [ 125 I]ACTP/mol of ARK1 (Fig. 7). The additional [ 125 I]ACTP incorporation observed in the presence of phospholipid vesicles is blocked by the addition of MgATP at concentrations identical to those used in the phosphorylation assay. The effect of MgATP was specific for [ 125 I]ACTP in response to phospholipid as there is no effect observed with [ 125 I]ACTP incorporation in the absence of phos- Receptor phosphorylation is carried out as described in the text, and the reaction is quenched by the addition of SDS sample buffer. The receptor is resolved by 9% SDS-polyacrylamide gel electrophoresis followed by autoradiography. The phosphorylation reaction is performed in the presence (lanes 1 and 2) or absence (lane 3) of crude phosphatidylcholine (50 g). Isoproterenol (50 M) is added (lanes 2 and 3) to demonstrate the agonist dependent nature of receptor phosphorylation by GRK2. The stoichiometry of phosphorylation is determined by excising the receptor band, quantitating the 32 P and expressing the data as mol of phosphate/mol of  2 AR.
pholipid. In all cases, the [ 125 I]ACTP incorporation is sensitive to reducing agents, indicating the presence of a mixed disulfide and not covalent attachment via the azide moiety. When a variety of lipids are examined, only the acidic phospholipids previously shown to enhance GRK2 activity led to an increase in [ 125 I]ACTP labeling of the kinase (Fig. 8). Of note is the observation that PIP 2 not only failed to increase the labeling of GRK2, but decreased [ 125 I]ACTP incorporation to a level below that seen in the basal state. A preliminary mapping experiment demonstrates that the 125 I associated with GRK2 resulted in a unique proteolytic map when cleaved with V-8 protease. The appearance of proteolytic bands of 14 and 6 kDa are observed when the kinase is labeled in the presence of the activating lipid phosphatidic acid (Fig. 9). These cleavage products are greatly diminished by co-incubation of lipid and MgATP or with the omission of the phospholipid to the labeling reaction (data not shown). DISCUSSION Regulation of G protein-coupled receptor function involves the process of desensitization in which a cell exposed to an agonist becomes less sensitive to subsequent stimulation. In the  2 AR-adenylyl cyclase system, nonselective, and agonistspecific forms of desensitization occur and appear to be related to phosphorylation of the receptor (26). Kinases of the GRK family are thought to play a role in rapid, agonist-specific desensitization, as these enzymes phosphorylate the receptor in an agonist-dependent fashion. Several lines of evidence support this proposed role of GRKs in the desensitization process. First, cells that express  2 ARs that have had the putative GRK2 phosphorylation sites deleted exhibit delayed desensiti-zation (27). Second, a permeabilized cell system has been used to demonstrate that heparin, a potent inhibitor of GRK2, blocked both agonist-induced receptor phosphorylation and desensitization (28). Third, type-specific antibodies directed toward GRK3 attenuated odorant-induced desensitization in olfactory cells (29,30). Fourth, Ishii et al. (31) have shown that GRK3 blocks thrombin signaling when the receptor and kinase are coexpressed in Xenopus oocytes. Finally, overexpression of a GRK2 dominant negative mutant in airway epithelial cells attenuates desensitization of the  2 AR (32). At this time, these data are consistent with a role of GRK-mediated receptor phosphorylation in the process of agonist-specific desensitization. The agonist-dependent phosphorylation of receptors by GRK2 and other members of the GRK family is a key feature of this class of enzymes. A conformational change in the receptor could expose potential phosphate acceptor sites to the kinase resulting in agonist-dependent phosphorylation of the receptor. However, this does not appear to be the mechanism involved in receptor-GRK interactions (7). Alternatively, the kinase ap- GRK2 is incubated with a 100-fold molar excess of [ 125 I]ACTP and the addition of 50 g of various lipids as described above. The reaction is terminated by the addition of N-ethylmaleimide and SDS, and the kinase band is resolved by electrophoresis. An autoradiograph of labeled GRK2 is shown. pears to interact with the agonist-occupied form of the receptor, which primarily results in an increase in the V max of the enzyme (8,9). Presumably, a conformational change occurs in GRK2 and other GRKs, which results in enhanced catalytic efficiency. Recently, it has been shown that the peptide mastoparan increases the activity of rhodopsin kinase (8) and a GRK isolated from porcine brain with properties similar to ARK1 (13). Since mastoparan activates G proteins by mimicking a structure similar to agonist-occupied receptors (33), a similar mechanism would seem likely in the stimulation of kinase activity.
In addition to the activation of the kinase following the interaction with agonist-occupied receptors, GRKs also interact with membranes via different mechanisms. Photostimulation of rhodopsin results in the association of rhodopsin kinase with the retinal membrane. The translocation to the membrane requires the farnesylation of rhodopsin kinase, a post-translational modification unique to this member of the GRK family (6). GRK2 (ARK1), GRK3 (ARK2), and a related kinase from porcine brain all exhibit enhanced phosphorylation of agonistoccupied receptors in the presence of ␥-subunits from heterotrimeric G proteins (10 -12). The effect of exogenous ␥-subunits is to increase the rate and maximal stoichiometry of phosphorylation (11,12). This effect is synergistic with the activation of the kinase by agonist-occupied receptor or mastoparan (13). Similarly to rhodopsin kinase, ARK1 activity has been shown to translocate from the cytosol to the plasma membrane following stimulation of the target cell with a wide variety of agonists including isoproterenol, PGE 1 , somatostatin, and platelet activating factor (34 -36). Unlike rhodopsin kinase, GRK2 does not undergo isoprenylation (37,38). However, the interaction of the kinase with the ␥-subunits appears to target ARK to the membrane (11). Thus, two different molecular mechanisms exist for localizing GRKs to a membrane surface. Most recently, a third member of the GRK family (GRK5) was cloned and found to lack the sequence required for either isoprenylation or interaction with ␥-subunits (15). Consistent with the later is the finding that  2 AR or rhodopsin phosphorylation by GRK5 is not enhanced by the addition of ␥-subunits. However, phospholipid-stimulated autophosphorylation of GRK5 at Ser-484 and Thr-485 increased receptor phosphorylation ϳ15-fold and represents yet a third mechanism for membrane association of GRKs.
Despite the experimental data, which demonstrate the importance of phospholipid associations between GRKs, a clear effect of phospholipids is only now beginning to emerge. The observation that G protein-coupled receptors serve as substrates of GRKs following reconstitution into phospholipid vesicles or if expressed in high numbers in the plasma membrane of cells such as Sf9 insect cells (39) suggests the importance of lipids in the phosphorylation of receptor substrates. The traditional detergent for the solubilization of  2 ARs has been digitonin (20). While biologic activity is preserved, the digitonin is difficult to remove due to its low critical micellar concentration (CMC). Furthermore, digitonin tends to concentrate with most ultrafiltration techniques. We have used dodecyl maltoside to effect solubilization and purification of the  2 AR. Dodecyl maltoside has a defined critical micellar concentration and forms micelles of ϳ50,000 Da. We have taken advantage of these properties to concentrate the purified receptor using a YM-100 membrane. Under these conditions, detergent passes through while detergent-receptor micelles are retained by the membrane. In this manner, we could manipulate the receptor preparation without excessive concentration of the detergent. In addition, enzymatic activity of rhodopsin kinase (40) and ARK1 2 is minimally affected by dodecyl maltoside while other detergents completely inhibit kinase activity despite concentrations below the critical micellar concentration of the detergent.
In the current study, we clearly demonstrate that detergentsolubilized  2 AR serves as a substrate for GRK2, provided phospholipid is added to the phosphorylation reaction. Under the conditions used in this study, the receptor resides in a mixed detergent-lipid micelle. The concentration of detergent used would not permit the formation of pure lipid vesicles typical of previous reconstitution experiments. Furthermore, the receptor under these conditions does not pellet following a 300,000 ϫ g centrifugation step adding support to the notion that the receptor is present in mixed micelles. This data would suggest that GRK2 has a phospholipid requirement for phosphorylation of receptor substrates. Using a variety of neutral, acidic, and basic phospholipids, we clearly demonstrate that negatively charged phospholipids, including cardiolipin, phosphatidylglycerol, phosphatidic acid, phosphatidylserine, and phosphatidylinositol, were necessary for phosphorylation of the  2 AR by GRK2. Previously, purified receptor was first inserted into crude phosphatidylcholine vesicles in order to observe GRK2-dependent phosphorylation. Therefore, we initially performed studies of phospholipid requirements of GRK2 using the same preparation of crude phosphatidylcholine. The fact that crude, but not purified, phosphatidylcholine preparations resulted in kinase activity is consistent with the notion that a phospholipid(s) other than phosphatidylcholine is required by GRK2. Thus, the long recognized requirement for reconstitution of the  2 AR into phosphatidylcholine vesicles most likely serves to provide a source of negatively charged phospholipid to the phosphorylation reaction.
As mentioned above, the lipid profile demonstrates that phospholipids with a net negative charge at physiologic pH enhance the phosphorylation of the  2 AR when studied in mixed detergent lipid micelles. A notable exception is the effect of PIP 2 , as receptor phosphorylation is not observed when this 2 J. J. Onorato, unpublished observation. phospholipid is included in the phosphorylation assay. Similar data has recently been reported when phosphorylation of the m2 muscarinic acetylcholine receptor was studied in reconstituted lipid vesicles (16). In contrast, others reported that PIP 2 enhanced GRK2 phosphorylation of the  2 AR only in the presence of added ␥ subunits of heterotrimeric G proteins (17). While the stoichiometry of phosphorylation is rather low compared with that previously reported using crude phosphatidylcholine, qualitatively similar results were noted for a variety of lipids tested. In contrast to our findings and that of DebBurman et al. (16), in which Ͼ4 mol of phosphate/mol of receptor was achieved in the absence of ␥ subunits, a recent manuscript (17) indicated the stoichiometry was Ͻ 0.5 mol of phosphate/mol of  2 AR without the addition of G protein ␥ subunits.
The initial step in the mechanism of lipid regulation of GRK2 activity must involve the interaction between lipid and the kinase or the receptor. Evidence of a specific lipid-kinase interaction is provided by the finding that GRK2 becomes associated with vesicles provided they contain negatively charged lipids such as phosphatidylserine, phosphatidylinositol, or PIP 2 . However, the phosphorylation data presented in this manuscript as well as that previously reported (16) suggests that the effects of lipids are more complex than simply targeting the kinase to the membrane surface. This is evident by the the effect of PIP 2 to cause membrane association in addition to inhibition of receptor phosphorylation.
In order to test the hypothesis that GRK2 activity is increased by phospholipids, we used a previously characterized peptide substrate of ARK (25). The advantage of the peptide substrate was 2-fold. First, the peptide substrate was designed to bind to ion exchange paper in 75 mM phosphoric acid permitting a large number of phosphorylation reactions necessary to obtain kinetic data. Second, the peptide substrate permits the identification of direct effects of phospholipid upon the kinase in the absence of any possible phospholipid-receptor interactions. The catalytic activity increases 3-fold with respect to the peptide substrate without a change in the K m in the presence of crude phosphatidylcholine. We determined the kinetic parameters using crude phosphatidylcholine as this was the source of lipid that has been used for years in the reconstitution assay. Knowing that the crude preparations were 20% phosphatidylcholine and the results that demonstrate that crude but not purified phosphatidylcholine resulted in receptor phosphorylation in the mixed micelle system, we suspect that other phospholipids were responsible for ARK activity in the reconstitution assay. As further support, we demonstrate that the inclusion of phosphatidylinositol to the peptide assay results in a dramatic enhancement of phosphate incorporation. Moreover, PIP 2 inhibited the ability of GRK2 to phosphorylate the synthetic peptide substrate, consistent with our data and that of others (16), in which receptor phosphorylation was studied. In general, peptides are poor substrates for GRKs when compared with reconstituted receptors based on their low affinity for the kinase; however, they provide valuable data as to the mechanism of GRK activation. In this situation, the simplest explanation of the data is that GRK2 is directly activated following interaction with phospholipid. This observation would explain in part the apparent requirement for G proteincoupled receptors to be reconstituted into phospholipid vesicles in order to serve as GRK2 substrates.
Finally, we have used [ 125 I]ACTP as a probe to assess conformational changes that may occur in the kinase. We have observed that GRK2 will incorporate ϳ1 mol of ACTP/mol of kinase under basal conditions. In the presence of crude phosphatidylcholine, the stoichiometry of [ 125 I]ACTP labeling dou-bled. We suggest that this represents a sulfhydryl group exposed following the interaction of GRK2 and phospholipid. This hypothesis is further supported by three observations. First, the increase in [ 125 I]ACTP incorporation secondary to phospholipid exposure is completely blocked in the presence of MgATP. Second, the V-8 proteolytic map of [ 125 I]ACTP-labeled GRK2 identifies two unique bands in the presence of phospholipid that are diminished under basal conditions or in the presence of MgATP. Finally, lipids, which have been shown to activate GRK2, also increased the [ 125 I]ACTP incorporation. More importantly, PIP 2 , which binds GRK2 but results in an inhibition of catalytic activity completely abolished any [ 125 I]ACTP labeling, including what appears to be the site labeled in the absence of lipids. At this time, our working hypothesis is that the sulfhydryl group(s) exposed following phospholipid interaction with GRK2 is near the ATP binding site and/or catalytic groove of the kinase and protected from ACTP labeling by the binding of MgATP. Furthermore, our ability to label this additional site serves as a probe of a putative conformational change, which occurs as the kinase is activated by regulatory lipids.
Data presented in this manuscript provide the first direct evidence to support the direct regulation of GRK2 by lipid. The effect on catalytic activity, in addition to the membrane localization, which occurs via the interaction between various lipids and the pleckstrin homology domain of GRK2 and GRK3, have clear implications as to the regulation of the kinase. Given the apparent association of several members of the GRK family with phospholipid membranes, it is tempting to speculate that many of the GRKs require phospholipid for maximal catalytic activity. While several different types of interaction between various GRKs and phospholipid membranes have been described, it will prove valuable to test whether a common molecular mechanism exists among this family of kinases. We are currently mapping the sites of ACTP incorporation in GRK2 to permit such a study. Additional studies are ongoing to define specific phospholipid interactions with ARK and extend the current studies to other members of the GRK family. | 8,087.8 | 1995-09-08T00:00:00.000 | [
"Biology",
"Chemistry"
] |
Phylogeography and Population History of Eleutharrhena macrocarpa (Tiliacoreae, Menispermaceae) in Southeast Asia’s Most Northerly Rainforests
: The diversification of Tiliacoreae and the speciation of Eleutharrhena are closely linked to Southeast Asia’s most northerly rainforests which originate from the Himalayan uplift. Migration routes across biogeographical zones within the Asian clade, including those of Eleutharrhena , Pycnarrhena , and Macrococculus , and their population structures are still unexplored. We combine endocarp morphology, phylogenetic analyses, divergence time estimation, ancestral area reconstruction, as well as SCoT method to reconstruct the past diversification of Eleutharrhena macrocarpa and to understand their current distribution, rarity, and evolutionary distinctiveness. The disjunct, monospecific, and geographically restricted genera Eleutharrhena and Macrococculus both have a dry aril, a unique feature in Menispermaceae endocarps that further confirms their close relationship. Pycnarrhena and Eleutharrhena appeared during the end of the Oligocene c. 23.10 million years ago (Mya) in Indochina. Eleutharrhena speciation may be linked to climate change during this time, when humid forests became restricted to the northern range due to the Himalayan uplift. Differentiation across the Thai–Burmese range could have contributed to the isolation of the Dehong populations during the Miocene c. 15.88 Mya, when exchange between India and continental Asia ceased. Dispersal to the Lanping–Simao block and further differentiation in southeastern and southern Yunnan occurred during the Miocene, c. 6.82 Mya. The specific habitat requirements that led to the biogeographic patterns observed in E. macrocarpa contributed to a low genetic diversity overall. Population 1 from Dehong, 16 from Pu’er, and 20 from Honghe on the East of the Hua line have a higher genetic diversity and differentiation; therefore, we suggest that their conservation be prioritized.
Introduction
Eleutharrhena macrocarpa (Diels) Forman is a woody liana belonging to the Tiliacoreae tribe in the Menispermaceae or moonseed family. It is primarily distributed across China (southern Yunnan), Laos, Myanmar, and India [1] (Figure 1). Like many species of the
Introduction
Eleutharrhena macrocarpa (Diels) Forman is a woody liana belonging to the Tiliacoreae tribe in the Menispermaceae or moonseed family. It is primarily distributed across China (southern Yunnan), Laos, Myanmar, and India [1] (Figure 1). Like many species of the moon-seed family, members of the Tiliacoreae are typically dioecious lianas inhabiting tropical rainforests and monsoon forests in South America, Africa, Asia, and Oceania [2] . Tiliacoreae Miers sensu Ortiz [2] includes 16 genera and 111 species, among which 6 genera are monospecific, including Eleutharrhena [2][3][4]. Although the sister relationships within the Tiliacoreae are weakly supported, the monophyly of the tribe is well established (24 species, 10 genera, and 3 plastid regions were used in analyses by Ortiz, Wang, Jacques and Chen [2], whereas 26 species, 11 genera, and 7 plastid plus 2 nuclear regions were used in the analyses by Lian, et al. [5]). The tribe is morphologically characterized by male flowers with more than four whorls of sepals, longitudinally grooved seed endocarps that are ribbed or rugose abaxially, hippocrepiform seeds without endosperm, and subcylindrical embryos [2]. Seeds play an important part in the classification of Menispermaceae and are highly diagnostic [6]. Fruits of Menispermaceae have woody endocarps and the seeds are often curved around an invagination of the lateral sides, termed the condyle [7]. The origin of the condyle is described by Ortiz [7] as resulting from the development of the ovary wall. Two main types of condyle were identified. Only the Menispermum condyle type occurs in Tiliacoreae, which have deeply curved seeds resulting from a bilaterally compressed, laminiform condyle. Eleutharrhena is a sister to Pycnarrhena [5] and is morphologically similar to Macrococculus [8]. Their ranges overlap, but E. macrocarpa morphologically differs by its free stamens, stipitate drupelets, and grouped stomata on the abaxial leaf surface. It can become a large woody vine reaching the canopy and produces very large drupelets, although these are not as large as those of Macrococculus, which are said to be dispersed by flightless cassowary birds (Casuarius) [9]. Eleutharrhena macrocarpa was assessed to be a critically endangered species [10] and a plant species with extremely small populations (PSESP) in China [11]. Hou et Eleutharrhena is a sister to Pycnarrhena [5] and is morphologically similar to Macrococculus [8]. Their ranges overlap, but E. macrocarpa morphologically differs by its free stamens, stipitate drupelets, and grouped stomata on the abaxial leaf surface. It can become a large woody vine reaching the canopy and produces very large drupelets, although these are not as large as those of Macrococculus, which are said to be dispersed by flightless cassowary birds (Casuarius) [9]. Eleutharrhena macrocarpa was assessed to be a critically endangered species [10] and a plant species with extremely small populations (PSESP) in China [11]. Hou et al. [12] reported approximately 40 plants surviving in Yunnan, China, and Lang et al. [1] estimated that 60 individuals occurred in Yunnan and we sampled 48 individuals in our study ( Figure 2). The origin of Menispermaceae has been evaluated to be c. 109 Mya in the I layan region and it was suggested that the radiation of the family is linked to the neous appearance of modern neotropical and paleotropical forests around the w lowing the mass extinction at the Cretaceous-Paleogene boundary [13]. The trop forests in southern China may have appeared during the late Tertiary accompani uplift of the Himalayan mountains and the development of a monsoon climate structure of these forests is similar to that of SE Asian lowland rainforests, but th a deciduous tree layer, fewer megaphanerophytes and epiphytes, and more abu anas [14].
Several distinct floristic regions and biogeographical areas can be recognized ical southern China: Taiwan, Hainan, part of Guangxi, southeastern Yunnan, an ern Yunnan [14]. The movement of the Indochina geoblock and the Lanping-Sim together with the Himalayan uplift has profoundly influenced the flora of south nan [15]. The floras of southern and southeastern Yunnan have a high proportion ical Asian elements, but the southern part is more closely related to the Indo-M flora, suggesting that a boundary exists between these two parts [15]. Eleutharr typical Indo-Malaysian element and its distribution may closely correspond to th ographical events that led to the formation of southern Yunnan flora. Several li been proposed to explain floral boundaries in Yunnan; the Hua line [15] between and southeastern Yunnan floras; the Tanaka line [16] which was used to describ ference between the Sino Himalayan forests to the west and the Sino Japanese f the east; and the Salween-Mekong divide, separating the east Himalayan from th duan mountains during glacial periods [17]. However, the influence of these lines and the period of occurrence may not all apply to the tropical flora of south nan or may only be valid for a part of it. The origin of Menispermaceae has been evaluated to be c. 109 Mya in the Indo-Malayan region and it was suggested that the radiation of the family is linked to the simultaneous appearance of modern neotropical and paleotropical forests around the world following the mass extinction at the Cretaceous-Paleogene boundary [13]. The tropical rainforests in southern China may have appeared during the late Tertiary accompanied by the uplift of the Himalayan mountains and the development of a monsoon climate [14]. The structure of these forests is similar to that of SE Asian lowland rainforests, but they have a deciduous tree layer, fewer megaphanerophytes and epiphytes, and more abundant lianas [14].
Several distinct floristic regions and biogeographical areas can be recognized in tropical southern China: Taiwan, Hainan, part of Guangxi, southeastern Yunnan, and southern Yunnan [14]. The movement of the Indochina geoblock and the Lanping-Simao block together with the Himalayan uplift has profoundly influenced the flora of southern Yunnan [15]. The floras of southern and southeastern Yunnan have a high proportion of tropical Asian elements, but the southern part is more closely related to the Indo-Malaysian flora, suggesting that a boundary exists between these two parts [15]. Eleutharrhena is a typical Indo-Malaysian element and its distribution may closely correspond to the biogeographical events that led to the formation of southern Yunnan flora. Several lines have been proposed to explain floral boundaries in Yunnan; the Hua line [15] between southern and southeastern Yunnan floras; the Tanaka line [16] which was used to describe the difference between the Sino Himalayan forests to the west and the Sino Japanese forests to the east; and the Salween-Mekong divide, separating the east Himalayan from the Hengduan mountains during glacial periods [17]. However, the influence of these different lines and the period of occurrence may not all apply to the tropical flora of southern Yunnan or may only be valid for a part of it.
Although Menispermaceae are closely linked to the expansion of tropical rainforests and their speciation could be linked to interglacial periods as well as distinct biogeographic events, few studies have reported genetic structures at the species level. Examples include the distribution of Sinomenium acutum (Thumb.) Rehd. et Wils. in subtropical China using chloroplast marker haplotypes [18], the genetic structure of Stephania yunnanensis H.S.Lo using DALP [19], and the link between Chasmanthera dependens Hochst. climatic refugia and genetic diversity in West Africa using AFLP and plastid markers [20].
In this study, we combine phylogenetics, time-measured phylogenies, ancestral area reconstruction, and population genetics based on the Start Codon Targeted (SCoT) polymorphism methods to answer questions about the phylogeography and the population history of Eleutharrhena macrocarpa. Specifically, we investigate how the genetic diversity is linked to the diversification of the southern Yunnan flora, the Himalayan mountain uplift, interglacial periods, and the occurrence of distinct population partitions. Furthermore, we focus on the seed morphology and its evolution within Tiliacoreae. Finally, we provide data on the populations' genetic structure that can be used to prioritize populations for conservation.
Morphological Observations and Analysis
One fruiting specimen of Eleutharrhena macrocarpa was selected and fruits were boiled for three hours and then left to soak overnight. Endocarps were then dissected and observed under a stereomicroscope (Nikon corporation). For comparison with other seeds having a strongly curved embryo, a conspicuous condyle and intrusive tissue such as the raphe, we also dissected the fresh fruits of Haematocarpus validus (Miers) Bakh.f. ex Forman (Pachygoneae). Haematocarpus is often misidentified as E. macrocarpa in the field but markedly differs in its seed morphology [5]. The specimens were collected in Menglun forest reserve in Xishuangbanna, Yunnan. For description and terminology, we followed Wefferling, Hoot and Neves [6]. Endocarp characteristics were mapped on the Bayesian phylogenetic tree (see 2.3) using Mesquite v. 3.70 (https://www.mesquiteproject.org/ accessed on 5 May 2022) to observe the changes in character states. A maximum likelihood (ML) analysis was performed to determine the ancestral character states. Trees were visualized using the ball and stick method, with pie charts at each node indicating the proportional likelihood of ancestral characteristics. A heuristic search with Maxtrees set to 100 and a majority rule consensus tree based on 17 morphological characters [3] (Table A1) was also performed for all the genera in Tiliacoreae using Mesquite v. 3.70.
Sampling, DNA Extraction, PCR, and Electrophoresis
Thirty-three genetic sequences and one outgroup [21] from Tiliacoreae were downloaded from the National Center for Biotechnology Information (NCBI) for the phylogenetic study. Seven plastid regions (matK, ndhF, rbcL, atpB, trnL-F, trnH-psbA, and rps16) and two nuclear regions (26S rDNA and ITS) were selected according to several recently published Menispermaceae phylogenetic studies [2,5,22] (see Table A2). One additional DNA sample collected in Xishuangbanna was extracted and its whole chloroplast was sequenced using NGS, aligned, assembled, and annotated by us (GenBank: MZ502223, detailed methods not shown here). The regions of interest were then extracted using Geneious v. 8.1.3 and added to our matrix. The sequences were aligned using the Muscle Algorithm [23] and the resulting sequences were edited with Geneious v. 8.1.3.
Silica-dried leaves of 48 individuals were collected in China for the SCoT analysis. Each population (from Dehong to Honghe) was more than 4 km apart. DNA was extracted using the Geneon Plant Genomic DNA Kit (Geneon, Inc. Changchun, China) and five geographically distant samples were selected to test 36 universal SCoT primer pairs [24]. PCR amplifications were conducted in a 50 µL volume, including 45 µL Tsingke master mix, 4 µL primers (10 µM), and 1 µL DNA extract. A C1000 Touch thermocycler (Bio-Rad Laboratories, Inc. China) was used to perform amplifications with the following program: 95°C for 5 min; 35 cycles at 98°C for 30 s, 52°C for 30 s, 72°C for 1 min, and 72°C for 5 min, and finally held at 4°C. An ABI-3730xl Sequencer (Applied Biosystems, Ltd. UK) was used to perform the capillary electrophoresis using 5 -labeled fluorescent primers. Electrophoresis results were transferred to a 0-1 matrix for further analysis.
Phylogenetic Analysis, Dating, and Ancestral Area Reconstruction
Phylogenetic trees were inferred with maximum likelihood (ML) and Bayesian inference (BI). ML analyses were performed using RAxML-HPC v. 8 on XSEDE [25] using the CIPRES Science Gateway v. 3.3 portal [26] based on GTR-CAT model with 1000 replications. The Bayesian analysis was implemented with MrBayes v. 3.2.3 [27] in PhyloSuite v. 1.2.2 platform [28] specifying the DNA substitution model selected by partitionFinder2 [29]. We performed four runs using 10 million generations with four chains, sampling trees every 1000 generations. The first 20% of trees were discarded and a 50% majority rule consensus tree was reconstructed from the remaining post-burn-in trees. The average standard deviation of the split frequency was used to verify that all runs reached stationarity and converged on the same distribution. To estimate the divergence time in Menispermaceae and Tiliacoreae, we ran a Bayesian relaxed molecular clock analysis with the plastid and nuclear combined dataset in BEAST v. 2.6.6 [30]. Our partitions were set in BEAUTi part of the BEAST package, and one fossil calibration point was inserted following Lian et al. [21] (Triclisia inflata 17.7 Mya can be assigned to extant Triclisia [31]). The parameters were the relaxed clock with a lognormal distribution and the Yule model. The maximum age of the Tiliacoreae root was set to 49.3 Mya [13] with a normal distribution and standard deviation of three. One run with 100 million generations and sampling every 5000 generations was conducted in BEAST. The first 10% of trees were discarded as burn-in. We used Tracer v. 1.7 [32] to assess the convergence and effective sample size. Tree Annotator v. 1.8.0 (part of the BEAST package) was used to summarize the set of post-burn-in trees and their parameters. FigTree v. 1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/ accessed on 1 February 2021) was used to visualize the tree and divergence times. Tiliacoreae clade (2), including Eleutharrhena and its sister genera, was extracted from the maximum clade credibility tree inferred with BEAST, and input in BioGeoBEARS [33] implemented in RStudio v. 1.4.1717 [34]. Four major geographical ranges were designed: Indochina, southern Yunnan, Sundaland including Wallacea and Australasia including New Guinea. Six models were tested, namely DEC, DEC + J, DIVALIKE, DIVALIKE + J, BAYAREALIKE, and BAYAREALIKE + J [33], and the most fitting model was selected by calculating the best value for the Likelihood Ratio Test (LRT). The resulting probabilities of the ancestral states were drawn as pie charts at the node on the provided tree.
Population Genetic Analysis
POPGENE v. 3.2 [35] was used to analyze the observed number of alleles (Na), the effective number of alleles (Ne), Shannon's information index (I), the percentage of polymorphic loci (PPL), and Nei's genetic diversity index (H). Genetic distance was visualized and interpreted using a PCoA analysis in GenAIEx v. 6.1 [36]. STRUCTURE v. 2.3.4 [37] was used to infer population structure following the Falush, et al. [38] input method, with a burn-in period of 100,000 iterations, 1,000,000 Markov Chain Monte Carlo (MCMC) repetitions, 1-5 K ranges, and 10 independent runs. Evanno's method [39] was used to determine the best number of subpopulations in the Structure Harvester v. 0.6.94 [40]. CLUMPP v. 1.1.2 [41] was used to align clusters and Distruct v. 1.1 [42] was used to visualize the results. Populations including more than one individual were used to infer genetic differentiation using phiPT (Φpt), a measure that allows intra-individual variation to be suppressed and is therefore ideal for comparing codominant binary data [43,44] and gene flow (Nm = [(1/Φpt) −1]). An analysis of molecular variance (AMOVA) was performed among populations and within populations. Mantel's test was performed to determine the relationship between genetic and geographic distances in GenAIEx v. 6.51.
Endocarps
The druplets of both Eleutharrhena macrocarpa and Haematocarpus validus are very similar in terms of size, number, and the stipitate base. The exocarps of H. validus (Figure 3) differ by their deep red and fleshy cells, as well as the position of the style scar, which is basal and not subapical as in E. macrocarpa (Figure 3). Endocarps have a deep dorsal longitudinal groove as well as several lateral grooves in H. validus, whereas only a ventral ridge and faint transversal ridge can be seen on the lateral side of E. macrocarpa endocarps. Two large extruded funicle apertures can be observed at the base of the ventral part in E. macrocarpa. The endocarp of H. validus is clearly deeply curved with a pronounced bilaterally compressed condyle, which is absent in E. macrocarpa, and the raphe is clearly visible and intrusive inside the condyle in H. validus. Both genera have seeds without endosperm and fleshy cotyledons, although E. macrocarpa cotyledons are unequal. E. macrocarpa's endocarps are more similar to Pycnarrhena's but markedly differ by the presence of two basal and ventral funicle apertures and the presence of a well-developed bilobed aril surrounding the cotyledons (Figure 3). We could not obtain the fruit of Macrococculus, but Forman [8] observed that the seed is covered with a reticulate membrane as well as two basal and ventral funicle apertures and is thus very similar to Eleutharrhena.
Endocarps
The druplets of both Eleutharrhena macrocarpa and Haematocarpus validus are very similar in terms of size, number, and the stipitate base. The exocarps of H. validus (Figure 3) differ by their deep red and fleshy cells, as well as the position of the style scar, which is basal and not subapical as in E. macrocarpa (Figure 3). Endocarps have a deep dorsal longitudinal groove as well as several lateral grooves in H. validus, whereas only a ventral ridge and faint transversal ridge can be seen on the lateral side of E. macrocarpa endocarps. Two large extruded funicle apertures can be observed at the base of the ventral part in E. macrocarpa. The endocarp of H. validus is clearly deeply curved with a pronounced bilaterally compressed condyle, which is absent in E. macrocarpa, and the raphe is clearly visible and intrusive inside the condyle in H. validus. Both genera have seeds without endosperm and fleshy cotyledons, although E. macrocarpa cotyledons are unequal. E. macrocarpa's endocarps are more similar to Pycnarrhena's but markedly differ by the presence of two basal and ventral funicle apertures and the presence of a well-developed bilobed aril surrounding the cotyledons (Figure 3). We could not obtain the fruit of Macrococculus, but Forman [8] observed that the seed is covered with a reticulate membrane as well as two basal and ventral funicle apertures and is thus very similar to Eleutharrhena. The node at the base of the Tiliacoreae indicates a 99% maximum likelihood (ML) that the common ancestor had endocarps with deeply curved embryo and a conspicuous condyle. The node at the base of clade (2) had a 95% ML for a deeply curved embryo with The node at the base of the Tiliacoreae indicates a 99% maximum likelihood (ML) that the common ancestor had endocarps with deeply curved embryo and a conspicuous condyle. The node at the base of clade (2) had a 95% ML for a deeply curved embryo with intrusive condyle. Additionally, the node at the base of Pycnarrhena and Eleutharrhena had a 45% ML for weakly curved embryo and weak condyle and a 43% ML for a straight embryo with an intrusive aril (Figure 4). Based on 17 morphological characters (Table A1) the heuristic search produced an incongruent tree with the molecular data ( Figure A1). However, Macrococculus and Eleutharrhena had a 98% support for their sister relationship. 7 of 20 intrusive condyle. Additionally, the node at the base of Pycnarrhena and Eleutharrhena had a 45% ML for weakly curved embryo and weak condyle and a 43% ML for a straight embryo with an intrusive aril (Figure 4). Based on 17 morphological characters (Table A1), the heuristic search produced an incongruent tree with the molecular data ( Figure A1). However, Macrococculus and Eleutharrhena had a 98% support for their sister relationship.
Dating Analyses
The maximum clade credibility tree of the plastid and nuclear combined alignment using the Yule model is displayed in Figure 5
Dating Analyses
The maximum clade credibility tree of the plastid and nuclear combined alignment using the Yule model is displayed in Figure 5
Ancestral Area Reconstruction of Eleutharrhena and Pycnarrhena
BAYAREALIKE + J model (Bayesian approach with a large number of areas under ML) had higher statistical support (see Table A3). The ancestral range of
Ancestral Area Reconstruction of Eleutharrhena and Pycnarrhena
BAYAREALIKE + J model (Bayesian approach with a large number of areas under ML) had higher statistical support (see Table A3). The ancestral range of
Population Genetic Diversity
SCoT primers 7, 8, and 23 were the most polymorphic, producing unambiguous clear bands and were subsequently selected for amplification and analysis. Forty-eight individuals generated 536 bands in total. Genetic diversity indices for populations including more than one individual are presented in Table 1. Nei's genetic diversity index (H) ranged from 0.228 to 0.3120, and Shannon's information index (I) ranged from 0.335 to
Population Genetic Diversity
SCoT primers 7, 8, and 23 were the most polymorphic, producing unambiguous clear bands and were subsequently selected for amplification and analysis. Forty-eight individuals generated 536 bands in total. Genetic diversity indices for populations including more than one individual are presented in Table 1
Cluster Analysis and Population Structure
The principal coordinate analysis (PCoA) including all samples shows that three subpopulations can be observed (Figure 6a). Subpopulation 1 has two individuals from Dehong. Individuals from Lincang are all in subpopulation 2. Subpopulations 2 and 3 are mixed between Xishuangbanna, Dehong, Honghe, Lincang, and Pu'er individuals.
PCoA results indicate that subpopulations 2 and 3 are distinct, although some individuals from Xishuangbanna belong to subpopulation 2 or 3. Most individuals from southeastern Yunnan, eastern Xishuangbanna and Pu'er are grouped in subpopulation 3 and most individuals from western southern Yunnan and western Xishuangbanna are in subpopulation 2. No natural geographical barriers exist between subpopulations 2 and 3 except for the Lancang river. Structure analyses and Evanno's methods also indicated the best K = 3, and Honghe individuals have a similar admixture to that of the Xishuangbanna, Pu'er, and Lincang populations, whereas two individuals from subpopulation 1 in Dehong have more private alleles, as also reflected in the plastid phylogenetic analysis (Figure 6b).
Analysis of Molecular Variance and Mantel's Test
The AMOVA results show that most of the genetic variation occurred within populations (83%) whereas only 17% of the variance occurred among populations (see Table A4). The Mantel test shows no correlation between the geographical and genetic distance (R 2 = 0.2299), (see Figure A3).
Analysis of Molecular Variance and Mantel's Test
The AMOVA results show that most of the genetic variation occurred within populations (83%) whereas only 17% of the variance occurred among populations (see Table A4). The Mantel test shows no correlation between the geographical and genetic distance (R 2 = 0.2299), (see Figure A3).
Endocarp Morphology
Tiliacoreae is characterized by male flowers with more than four whorls of sepals, longitudinally grooved seed endocarp that are ribbed or abaxially rugose, hippocrepiform seeds without endosperm and subcylindrical embryos [2]. A lack of morphological data, especially for the flowers of both sexes and the fruit has resulted in the patchy morphological data analyses of Menispermaceae tribes [6]. Using the morphological characteristics listed in the synopsis of the family [3], very few morphological characters could be used to produce a tree congruent with the phylogenetic analyses based on seven plastid and two nuclear regions ( Figure A1). However, hippocrepiform seeds without endosperm and a conspicuous condyle seem to be a shared primitive characteristic of the tribe, except within Eleutharrhena, Pycnarrhena, and Albertisia ( Figure 4). In Eleutharrhena (as well as Macrococculus, not sampled in our phylogenetic analysis), the condyle has completely disappeared, and instead, an aril-like tissue is present (Figure 3). We hypothesize that the bilobed aril is derived from the aborted ovule and could represent the dry cotyledons that now enclose the remaining fertile ovule [47]. This character may also be linked to the intrusion of organs within the condyle, as shown by the raphe in Haematocarpus validus ( Figure 3) and does not seem to have an ecological function. Further apomorphies include the perforate condyle in Syrrheonema, the straight endocarp and reduced condyle in Albertisia as well as the ruminate endosperm in Tiliacora. The later ruminate endosperm is said to be of secondary origin, as some Tiliacora species have no endosperm.
Tiliacoreae and Eleutharrhena Biogeography
Our phylogenetic results based on seven plastid and two nuclear regions are similar to previously obtained phylogenies [2,5] and confirm three clades within the tribe. The distribution of the tribe Tiliacoreae includes South America, Africa, Asia, and Australasia and resembles groups that have experienced western Gondwana vicariance, c. 105 Mya [48].
Pycnarrhena and Eleutharrhena appeared during the Oligocene c. 23.10 Mya in Indochina. During the late Eocene and Oligocene, the climate for much of Southeast Asia was seasonal and humid forests became isolated [49]. We can infer that this climatic change may have favored the speciation of Eleutharrhena in its northern range due to the Himalayan uplift and the apparition of a monsoon climate. Pycnarrhena dispersal to Australasia and Sundaland followed a distinctive pattern that can also be observed in Tiliacora-firstly that of a dispersal from Indochina to Australasia during the Miocene c. 15.51 Mya and then back to Sundaland c. 2.16 Mya. The main change at the beginning of the Miocene is the change in Sundaland from a seasonally wet to humid climate. This corresponds to the closure of the Indonesian throughflow, and the Australian plate collision with Southeast Asia [49]. Dispersal from Asia to Australia may have occurred when the sea level is known to have been lower, allowing for some migrations to happen across the Wallace line [50,51]. The Wallace line is a well-known biogeographical barrier between New Guinea/Australia and Sundaland [52], a deep trench that separated both floras and which could have separated earlier lineages. The disjunction of Eleutharrhena + Macrococculus (based on morphological data only) across the Wallace line may be explained by the narrow habitat requirements of these genera and is linked to the distribution of humid rainforests.
Eleutharrhena macrocarpa samples from Xishuangbanna diverged very early from the individuals from Dehong during the Miocene c. 15.88 Mya ( Figure 5). Differentiation across the Thai-Burmese border range (Tenasserim Hills, Dawna Range, and Karen Hills) was proposed to explain the diversity in several bird and mammal species [53]. Molecular data [54] suggest that exchanges between the Indian subcontinent and mainland Asia peaked from 44 Mya in the Eocene to the mid Miocene, and then decreased after 14 Mya due to the drier conditions developing in northern India. It is therefore possible that the Dehong population may represent a separate refugium across the Thai-Burmese range that was previously connected to northern India and later became separated.
Eleutharrhena Macrocarpa Population Structure
A relatively high number of bands was obtained from the SCoT analysis, although between 53712 and 78921 unigenes were detected in Menispermum canadense and M. dauricum [55]. Our total bands therefore represent between 0.7% and 1% of the total number of unigenes, which may be due to the use of fluorescent primers and capillary electrophoresis.
Menispermaceae species are known to be outbreeding species which should result in reduced population differentiation. Sinomenium acutum populations were shown to have a high gene flow and low population differentiation, although refugia populations were identified by their higher genetic diversity (H T = 0.828, H S = 0.710, N ST < G ST , AMOVA within populations 83.62%, among populations 16.38% N m = 2.552) [18]. Tinospora cordifolia had relatively low genetic diversity but this study was only based on five individuals (H = 0.2114) [56]. Menispermum canadense and M. dauricum were found to have a relatively high inbreeding coefficient and low genetic diversity (inbreeding coefficient of 0.198, H o = 0.377, and H e = 0.342 and no significant deviation from the Hardy-Weinberg equilibrium) [55]. The genetic diversity of Chasmanthera dependens showed opposite results from the cpDNA haplotypes and AFLP results (high population differentiation and weak geographical correlation for cpDNA, with F ST = 0.797 and low population differentiation and strong geographical correlation for AFLP, with F ST = 0.064) [20]. Stephania yunnanensis showed high population differentiation and relatively low gene flow (N a = 1.6192; N e = 1.4001; H = 0.2298; I = 0.43401) [19]. The view that most Menispermaceae species are outbreeding species with weak population differentiation contrasts with these previous and recent studies and could also show that the differentiation may be due to refugia, the founder effect, geographical isolation, and selection pressure [57].
Eleutharrhena macrocarpa shows low genetic diversity but high population differentiation and medium gene flow. The low level of genetic diversity and medium gene flow among populations but high population differentiation might be due to mixed forces involving (1) small population sizes and specific habitat requirements along with (2) sympatric barriers such as mountain ranges or rivers and (3) pollinator specificity [58]. Fragmentation and small population sizes could explain the relatively low differentiation of the populations from Honghe at the eastern limit of the range across the Hua line. The Hua line [15] was used to describe the different flora composition between southern and southeastern Yunnan. It has been suggested that this could correspond to the movement of the Indochina geoblock as well as the Lanping-Simao block together with the Himalayan uplift [15]. However, this line does not seem to have strongly affected the dispersal of E. macrocarpa to southeastern China.
Finally, Eleutharrhena has highly specific habitat requirements [59] which may be reflected by the higher population differentiation despite its low genetic diversity.
Evolutionary Distinctiveness
Our results suggest low genetic diversity and high population differentiation in E. macrocarpa and indicate that populations p1, p16, and p20 should be prioritized for conservation because of their higher genetic diversity and differentiation. The conservation value can be assessed using several methods and priorities such as threats, economic benefits, environment quality, and phylogenetic distinctiveness [60]. Our results show that Eleutharrhena has a high phylogenetic distinctiveness value because it is morphologically related to Macrococculus both having an unusual aril-like structure surrounding the seed. It has a high biogeographic value, as shown by the disjunction between these two genera as well as the isolation of the Dehong population. This is the first time that we have documented differentiation across the Thai-Burmese border range for any plant species in southern Yunnan. Eleutharrhena macrocarpa is a relict plant of the mighty rainforests that has colonized and persisted in the northern range of the Southeast Asian biodiversity hotspot.
Conclusions
Eleutharrhena and Macrococculus are morphologically similar and Eleutharrhena is a sister to Pycnarrhena. Eleutharrhena may have evolved during the Himalayan uplift during the Miocene c. 15.88 Mya and its speciation may be linked to changes in climate during this time, when humid forests became restricted to the northern range. Dispersal to the Lanping-Simao block and further differentiation between southeastern and southern Yunnan occurred during the Miocene c. 6.82 Mya. The conservation of species with rich evolutionary histories and that are linked to major biogeographic events such as the Himalayan uplift should be prioritized.
Further studies involving the sampling of Macrococculus, the remaining genera within tribe Tiliacoreae, and the use of codominant genetic markers would improve our understanding of such complex biogeographical patterns. | 7,434 | 2022-05-30T00:00:00.000 | [
"Environmental Science",
"Biology"
] |
The first near-linear bis(amide) f-block complex: a blueprint for a high temperature single molecule magnet
Since their initial discovery, single molecule magnets (SMMs) have been lauded as candidates for high density data storage devices. A major breakthrough in the field occurred in 2003 with the observation of SMM behavior in a monometallic {TbPc2} complex with an energy barrier, Ueff = 230 cm . The ensuing decade saw rapid growth in lanthanide SMMs with the Ueff barrier to magnetization reversal increased to 652 cm 1 for another derivative of {TbPc2}, 6 and 585 cm 1 for a polymetallic Dy@{Y4K2} complex. The highest blocking temperature TB (i.e. the temperature at which hysteresis is observed) was also increased to 14 K, via an N2 3 radical bridge in a {Tb2N2 3 } complex. Although three of these milestones employ the Tb ion, by far the most utilized lanthanide ion in SMMs is Dy by virtue of its unique electronic structure. Apart from a radical-bridged {Dy2N2 3 } complex, nearly all polymetallic Dy-based SMMs possess negligible interactions between magnetic spin centres, and instead rely on the single ion anisotropy of Dy (i.e. the local crystal field environment) to provide the barrier to the reversal of magnetization. Intraor intermolecular interactions are often detrimental to the performance of Dy SMMs so that doping a small amount of the paramagnetic ion into a diamagnetic host lattice (usually the Y analogue) often results in an increased Ueff. 7
Since their initial discovery, 1 single molecule magnets (SMMs) have been lauded as candidates for high density data storage devices. 2 A major breakthrough in the field 3 occurred in 2003 with the observation of SMM behavior in a monometallic {TbPc 2 } À complex with an energy barrier, U eff = 230 cm À1 . 4 The ensuing decade saw rapid growth in lanthanide SMMs 5 with the U eff barrier to magnetization reversal increased to 652 cm À1 for another derivative of {TbPc 2 }, 6 and 585 cm À1 for a polymetallic Dy@{Y 4 K 2 } complex. 7 The highest blocking temperature T B (i.e. the temperature at which hysteresis is observed) was also increased to 14 K, via an N 2 3À radical bridge in a {Tb 2 N 2 3À } complex. 8 Although three of these milestones employ the Tb III ion, by far the most utilized lanthanide ion in SMMs is Dy III by virtue of its unique electronic structure. 9 Apart from a radical-bridged {Dy 2 N 2 3À } complex, 10 nearly all polymetallic Dy III -based SMMs possess negligible interactions between magnetic spin centres, and instead rely on the single ion anisotropy of Dy III (i.e. the local crystal field environment) to provide the barrier to the reversal of magnetization. Intra-or intermolecular interactions are often detrimental to the performance of Dy III SMMs so that doping a small amount of the paramagnetic ion into a diamagnetic host lattice (usually the Y III analogue) often results in an increased U eff . 7 An electrostatic model for the design of ideal ligand environments to exploit the maximal anisotropy of Dy III has been postulated, 11,12 and shown to be in good agreement with multiconfigurational complete active space Self consistent field (CASSCF) ab initio calculations 12 that are often employed to examine 4f complexes, pioneered by Chibotaru. 7,13 Electrostatic approaches suggest that the optimal ligand environment to exploit the oblate spheroidal electron density of Dy III is axial, where rigorously axial systems have the benefit of maintaining a single, unique quantization axis for the total angular momentum m J states. 14 A set of unadulterated m J states implies that the probability of quantum tunnelling of the magnetization (QTM) is reduced, therefore increasing magnetic relaxation times. 2 The simplest axial ligand environment is a linear two-coordinate complex with donor atoms exclusively on a single Cartesian axis; the U eff barrier is so large for the {Dy 5 } and {Dy 4 K 2 } alkoxide complexes 7 because of the strongly axially repulsive crystal field potentials along the local z-direction of each Dy III . Other compounds such as [(C 8 H 8 ) 2 Ln] À (ref. 15) or Cloke's bis(arene) lanthanide complexes 16 are sometimes described as linear, but lack donor atoms directly on the axis. Linear 3d-metal compounds also show remarkable magnetic behaviour with very high U eff values. 17 A one coordinate lanthanide complex [DyO] + has been considered theoretically with a very large U eff predicted, 14 however such an entity is not chemically feasible.
Very low coordination numbers for 4f-ions are difficult to achieve as these are large, electropositive ions, which require a sterically demanding ligand. Such a pro-ligand HN(Si i Pr 3 ) 2 was designed, and synthesised from ClSi i Pr 3 Formally each nitrogen atom carries a single negative charge and the Sm II ion is divalent, with an [Xe]4f 6 configuration. The f 6 configuration leads to a formally diamagnetic 7 F 0 ground state, with close lying excited states that provide a non-zero magnetic moment at room temperature. Magnetic measurements on 1 give a room temperature magnetic moment of 3.62 m B that falls towards zero at low temperature ( Fig. S2 and S3, ESI †). This is clearly incompatible with interesting low temperature magnetic behaviour. However, the structure of 1 is close to the ideal linear arrangement to stabilize the large angular momentum states of Dy III and produce monstrous uniaxial magnetic anisotropy.
Such a Dy III compound is challenging to make; we believe a route via the heteroleptic [Dy{N(Si i Pr 3 ) 2 } 2 I] treated with the potassium salt of a large anion might work through precipitation of a potassium iodide. Other routes can be imagined, and here we present predictions of the magnetic properties of such a complex, intending to inspire synthetic work towards the linear Dy III complex, and, more ambitiously, the isoelectronic Tb II analogue.
The properties of [( i Pr 3 Si) 2 N-Dy-N(Si i Pr 3 ) 2 ] + 2 are predicted by CASSCF/RASSI/SINGLE_ANISO 22 ab initio calculations (see ESI † for details) employing the structure of 1, where Sm II has been replaced by Dy III . The validity of the method was tested by calculating the variable temperature magnetic behavior of 1, where the agreement is excellent (Fig. S2 and S3, ESI †). Dy III has a 6 H 15/2 ground multiplet, which is split by the crystal field into eight Kramer's doublets with total angular momentum projections m J = AE1/2, AE3/2,. . . AE15/2. The ab initio calculations show that the lowest six Kramers doublets are the almost pure m J states of m J = AE15/2, AE13/2, AE11/2, AE9/2, AE7/2 and AE5/2, sharing a common quantization axis ( Fig. 3 and Tables S1 and S2, ESI †). The two most energetic doublets are strongly mixed; a characteristic of low symmetry complexes due to the lack of a rigorous molecular C N axis. 14 Along the main magnetic axis these two states can be expressed as |c ab i = 64%|AE3/2i + 26%|81/2i and |c cd i = 68%|AE1/2i + 31%|83/2i and (Table S2, ESI †), giving the most energetic Kramers doublet a large g y value of B17.5 perpendicular to the main magnetic axis.
Magnetic relaxation in lanthanides follows three possible routes: (1) QTM within the ground doublet (e.g. |À15/2i -|+15/2i in Fig. 3), (2) thermally assisted QTM (TA-QTM) via excited states (e.g. |À15/2i -|À13/2i -|+13/2i -|+15/2i), or (3) an Orbach process composed of direct and/or Raman mechanisms (e.g. |À15/2i -|À13/2i -|+15/2i). The most probable pathway depends on the composition of the states involved and their interactions with phonons. For example, the slow magnetic relaxation for {Dy 4 K 2 } was shown to occur via the first or second excited states (TA-QTM), depending on the number and location of neighboring Dy III ions providing a source of transverse magnetic field. 7 The states with opposing magnetic projections are mixed proportionally to the product of the transverse field and the transverse g-factors and therefore TA-QTM will occur via the excited state which has transverse g-factors above a certain threshold or where its main magnetic axis is non-collinear with that of the ground state. All non-QTM transitions are induced by the vibrational modes of the lattice (phonons) which create local oscillating magnetic fields through modulation of dipolar fields as well as an oscillating crystal field potential. 23 To a first approximation, we can associate the probability of a phonon induced transition with the average magnetic 13,14,24 and crystal field perturbation matrix elements (see ESI † for details). Compared to all known Dy III complexes the calculated properties for 2 are unique with very small transverse g-factors and a common principal axis for the lowest six Kramers doublets. This suggests that both the probability of QTM within the ground doublet and TA-QTM is vanishingly small until the two most energetic doublets. Orbach relaxation is also strongly disfavoured in the low lying states as magnetic transition probabilities due to phonons are miniscule (Fig. 3). Efficient magnetic relaxation will only occur via the highest energy doublets (Fig. 3, Fig. S4 and Tables S4 and S5, ESI †). Therefore the ab initio calculation predicts U eff E 1800 cm À1 for 2 -far greater than any complex to date. Whilst such calculations may over-estimate the energies of the crystal field states, 25,26 we can predict a T B in excess of 77 K as such temperatures are often around 1/20th of the U eff value if QTM within the ground doublet is disfavored, e.g. the T B /U eff ratios for {Tb 2 N 2 3À }, {Mn 12 } and {Mn 6 } are approximately 1/16, 1/15 and 1/13 cm À1 K À1 , respectively. Calculations for the Tb II analogue 3, which is also a 4f 9 ion, predict analogous behavior to 2 (Table S6, ESI †). The high local symmetry at the Dy III site implies that the nuclear quadrupole and hyperfine interactions will be axially symmetric, preventing efficient QTM within the lower energy doublets.
To examine the stability of 2, we have performed ab initio calculations for modified geometries where the N-Dy-N angle and the Dy-N bond lengths have been altered by AE0.51 and AE0.01 Å, respectively (Fig. S5, ESI †). The results show that 2 is stabilized when the Dy-N bond length is shortened and the N-Dy-N angle is closer to 1801 compared to 1, yielding more favorable electronic properties. These calculations do not take into account the inclusion of a counter-ion in the structure, which may have consequences for crystal packing and the local structure of 2.
Compound 1 is the first near-linear bis(amide) 4f-block complex. It allows us to propose a blueprint for the first generation of 'high-temperature' SMMs, with blocking temperatures exceeding that of liquid N 2 (77 K). The synthesis of the proposed archetypes, viz. the Dy III and Tb II analogues of 1, is currently underway in our laboratory, however we believe this is a target many other groups should be pursuing. Calculations on other f n ions suggest that f 9 is ideal; even for the oblate f 8 Tb III analogue, 4, we find that the pseudo-doublets show strong mixing between the |Àm J i and |+m J i projections, (Tables S7 and S8, ESI †), which would lead to strong zero-field QTM.
While 2 would have a huge U eff , an even higher U eff barrier might be possible if dianionic monodentate ligands could be incorporated, e.g. [( i Pr 3 Si) 2 C-Dy-C(Si i Pr 3 ) 2 ] À , containing dianionic methanediides. Our preliminary results suggest this could raise U eff by a factor of 1.2 to 1.3. The incredible advances made in low coordination number metal-organic compounds in the last decade suggest that such hypothetical complexes are now chemically feasible. These metal-organic compounds are becoming of great importance in molecular magnetism. 8,10,27,28 This work was supported by the EPSRC (grant number EP/K039547/1) (UK). N.F.C. thanks The University of Manchester for a President's Doctoral Scholarship. R.E.P.W. thanks The Royal Society for a Wolfson research merit award. We would like to thank J. P. S. Walsh for assistance with graphics. | 2,865.8 | 2015-01-04T00:00:00.000 | [
"Physics"
] |
SYMPOSIUM ON DIGITAL TRADE AND INTERNATIONAL LAW DIGITAL TRADE, DEVELOPMENT, AND INEQUALITY
The links between and among digital trade, development, and inequality are multifaceted and ever evolving. They depend on what is understood as development and as inequality, concepts that transcend the North-South divide, and the fora in which these issues arise. Conceptually, development and inequality are intrinsically intertwined as the measures to address both are often complementary or even the same. 1 In this essay, we consider development and inequality as pertaining to the ability of developing countries 2 and least-developed countries (LDCs) 3 to shape and participate in the digital economy, and particularly, the regulatory framework for digital trade. We explore how the relationships between digital trade, development, and inequality are addressed in the main venues for digital trade rulemaking: the World Trade Organization (WTO) and Preferential Trade Agreements (PTAs). We then examine two contentious issues in digital trade: the customs duty moratorium and data governance.
The difference in nature and scope between the WTO E-commerce Work Programme and JI negotiations affects the participation of developing countries and LDCs.These countries, which constitute 75 percent of the WTO's membership, are better represented in the WTO E-commerce Work Programme, given its multilateral character, than in the plurilateral JI negotiations. 8However, the WTO E-commerce Work Programme is modest in scope and mandates only four of the nearly fifty WTO bodies to consider the link between e-commerce and the WTO agreements.Nevertheless, the WTO's Committee on Trade and Development has been specifically tasked to consider the development dimension.Unfortunately, these discussions have fallen prey to the negotiation impasse at the WTO and progress on substantive discussions stalled in 2016.
In contrast, the JI negotiations include only those WTO members that wish to participate.To date, of the eightyeight JI participants, thirty are developing countries and four are LDCs.These negotiations cover an expansive range of issues similar to those contained in the most advanced PTAs with digital trade rules.This raises concerns over how less-resourced countries can effectively participate in crafting rules that will regulate one of the most important areas of trade policy. 9The relatively low rate of participation of many developing countries and nearly all LDCs (particularly from Africa) in the JI negotiations could also be attributed to the critical stance they have taken against these negotiations in favor of the WTO E-commerce Work Programme discussions. 10Moreover, their skepticism in negotiating cutting-edge rules has been justified by their need to first understand the phenomenon and implications of digital trade before agreeing on its international regulatory framework.Notably, the very identity of some developing countries is contested at the WTO.There are members who do not think that certain members should be considered developing countries and those who believe that it is their right to self-identify as such. 11The issue of who qualifies as a developing country could have systemic implications, including who accesses the special and differential treatment provisions of the concluded JI.
Beyond the WTO, developing countries and LDCs are negotiating rules on e-commerce and digital trade in PTAs.According to the TAPED database, 12 as of December 2022, 107 PTAs with an e-commerce or digital trade chapter have been signed since 2001.Out of these, ninety-eight involve at least one developing country.Moreover, thirty-seven PTAs are South-South agreements.So far, two LDCs, Cambodia and the Lao People's Democratic Republic, have negotiated a PTA, the Regional Comprehensive Economic Partnership, which includes a digital trade chapter.The current negotiations of the Digital Trade Protocol of the African Continental Free Trade Area Agreement comprise the highest number of LDCs negotiating these rules: twenty-five.These developments highlight the importance of digital trade, the increasing diversity of countries negotiating its rules, and the preference of developing countries and LDCs to negotiate more tailored and less ambitious rules intra-regionally (rather than at the WTO).Developing countries' increasing participation in digital trade norm-setting in PTAs might result in more "development issues" permeating these rules.
The Issues Under Negotiation
Substantive development and inequality issues at the WTO focus on bridging the "digital divide." 13This term refers to, among others, gaps in connectivity, infrastructure, digitalization, and regulatory frameworks, as well as low digital literacy and gender inequalities.The ongoing JI negotiations also elucidate upon other development issues, especially the participation of micro-, small-, and medium-sized enterprises in digital trade.In most developing countries and LDCs, these entities account for the majority of businesses and employment.Hence, facilitating intra-, and cross-border trade for these enterprises could spur economic growth or even increase their involvement in global value chains. 14n PTAs, digital trade norm-setting is progressing rapidly. 15However, PTAs' substantive provisions on digital trade have only cursory references to development and inequality.In some cases, there are discrete references to economic and social development on specific issues such as open data. 16Inequality related issues, such as how to address the digital gender divide, have been largely absent until recently.Thus far, provisions on digital inclusion appear explicitly in five agreements: the Digital Economy Partnership Agreement (DEPA); 17 the Chile-Paraguay Free Trade Agreement (FTA); the India-United Arab Emirates Comprehensive Economic Partnership Agreement; the Singapore-United Kingdom (UK) DEA (SUKDEA); and the UK-New Zealand FTA.Interestingly, all but the UK-New Zealand FTA have at least one developing country party.These provisions, which initially involved providing economic opportunities to micro-, small-, and medium-sized enterprises, have expanded to encompass women, rural populations, low socioeconomic groups, disabled people, and Indigenous Peoples.SUKDEA is unique in that it specifically targets fair labor conditions, worker protection, and improving digital skills.Moreover, its parties also recognize the digital divide between countries and undertake to promote the participation of other countries in digital trade.Singapore, which identifies as a developing country at the WTO, has emerged as a legal innovator in digital inclusion.It is evident that it closely associates its economic development with effective participation in the digital economy.
Ultimately, given the gaps in the regulatory frameworks in and between different countries, PTAs' main contribution to addressing the digital divide is the negotiation of the necessary rules to effectively govern cross-border digital trade.Furthermore, PTA negotiations raise questions on future aspects of digital trade governance that might impact development and equality, both in the Global North and South.For instance, provisions on Artificial Intelligence 18 recognize the economic and social importance of developing ethical and governance frameworks for the trusted, safe, and responsible use of Artificial Intelligence technologies. 19inally, PTAs also highlight another key aspect shaping the domestic policy space: exceptions and carveouts in digital trade chapters.These often include exceptions for privacy, national security, domestic taxation, information held or processed by a government, government procurement, or exclude specific sectors (e.g., financial, audiovisual services). 20These flexibilities may be more meaningful to developing countries as they develop their own digital trade regulatory frameworks.
The remaining parts of this essay highlight how current digital trade issues intersect with development and inequality by exploring two contentious areas: the customs duty moratorium and data governance.
Customs Duty Moratorium
The WTO practice of not imposing customs duties on electronic transmissions (Customs Duty Moratorium) 21 has recently become controversial.It was most recently renewed at WTO's 12th Ministerial Conference (MC12) in Geneva in June 2022. 22Yet, it is uncertain if WTO members will maintain this practice.According to the MC12 Ministerial Decision, if MC13 is delayed beyond March 31, 2024, the Customs Duty Moratorium will expire on that date unless it is extended by consensus.The reason for this about-turn is that some WTO members, particularly developing countries, believe that the Customs Duty Moratorium disproportionately disadvantages developing countries and LDCs.Specifically, it affects their ability to collect customs duties, which are a significant source of revenue. 23In a paper circulated at the WTO in 2019, India and South Africa argued against the maintenance of the Customs Duty Moratorium for this reason. 24heir arguments are based on a study conducted by Rashmi Banga for the United Nations Conference on Trade and Development (UNCTAD) in 2019.The study estimated that the Customs Duty Moratorium resulted in a potential annual tariff revenue loss of U.S.$5.2 billion for developing-country WTO members and U.S.$344 million for LDCs.In comparison, the revenue loss experienced by high-income WTO members was estimated at U.S.$289 million per annum. 25In contrast, the Organisation for Economic Co-operation and Development (OECD) noted that the estimated revenue implications of the Customs Duty Moratorium range from U.S.$280 million to U.S.$8.2 billion. 26The study's authors found that the potential foregone revenue amounts to only an average 0.08-0.23 percent reduction in government revenue for developing countries. 27They nevertheless conceded that the potential foregone revenue tends to be higher for developing countries because tariffs are generally higher in those countries.However, developing countries stand to enjoy higher welfare gains from tariff liberalization. 28oreover, the Customs Duty Moratorium is contentious because there is no agreement amongst WTO members on the definition of an "electronic transmission."Consequently, WTO members treat cross-border electronic transmissions differently in their domestic customs and tax regimes.Some do not tax digital products at all.Others, like Indonesia, have classified them as "intangible goods" in their goods schedules, while countries like Australia, New Zealand, and South Africa, apply a general sales tax or value-added tax.
Notwithstanding the controversy, 104 PTAs include a provision upholding the Customs Duty Moratorium.Of these, thirty-six (or just over a third) are South-South agreements.This suggests that developing countries have https://doi.org/10.1017/aju.2023.17Published online by Cambridge University Press either not fully engaged with this issue or that some of them support the Customs Duty Moratorium.This also further highlights that developing countries' views on digital trade are not monolithic.
Data Governance
Any discussion on development and inequality in digital trade must address the challenges posed by the governance of one of its key elements: data.The multidimensional nature of data and the promise of innovation based on its use 29 have led to different views on the benefits of liberalizing data-related trade policies.
One issue of contention is whether data, including personal data, should be allowed to flow freely or if countries should restrict its flow.The restrictions include "data localization" measures in their most extreme version.The reasons for such measures can be related to privacy protection, but some also highlight economic development concerns, which governments believe could be alleviated by forcing corporations to locate personal data within their borders, thus supporting the creation of domestic data industries. 30Similar to "data localization" measures, "data sovereignty"-part of the broader notion of "digital sovereignty"-is a concept increasingly found in digital trade policy discourse as a mechanism for, among others, economic development.This notion asserts the "control of data flows via national jurisdiction." 31As data policies relate to inequality, the notion of "data colonialism"whereby developing countries and LDCs become mere sources of raw data, repeating old patterns of natural resources extractive industries 32 -highlights the power imbalances existing in the digital economy due to the control of data infrastructures by large corporations, mostly located in the Global North.
These issues transcend the traditional distinction between developing and developed countries, the former generally insisting on greater trade liberalization while the latter generally imposing the most restrictions.For instance, the European Union, a developed region, has made technological sovereignty a key element in its digital policy. 33rivacy protection also transcends the North-South divide and economic concerns.It is no longer only found in PTAs led by developed regions, but South-South PTAs increasingly require parties to establish minimum rules on privacy protection.As of March 2023, a total of thirty-three South-South PTAs include a relevant clause.It is presumed that they create consumer trust, which, in turn might accelerate the uptake of the digital economy and hence economic growth, a pre-requisite to reduce inequality and promote development.
The issue of data governance and its developmental aspects would be incomplete without considering that data is a key element of Artificial Intelligence, an essential part of the future of commerce.As Artificial Intelligence becomes a general-purpose technology, 34 countries with Artificial Intelligence capabilities, including access to large amounts of data, will be able to innovate.Once more, this challenges the notion of the North-South digital divide.China, which identifies as a developing country, is at the forefront of Artificial Intelligence innovation, 35 in part due to its ability to tap into large amounts of data.There are also emerging concerns about the inequalities that Artificial Intelligence technologies might create.The recent releases of Dall-E 2, GPT-4, and other technologies in the rapidly developing area of generative Artificial Intelligence, 36 incite debates about the extent to which they may lead to business model disruption and job replacement, how they will alleviate or accelerate inequalities in the developing and developed world, and what the consequences on trade policies will be.
Concluding Observations
The digital economy offers opportunities to enhance economic development and reduce inequality.Yet, negotiating digital trade rules will deliver full benefits only if participants are committed to including the most marginalized.The inclusion of provisions in WTO negotiations and, particularly, PTAs to promote the participation of micro-, small-, and medium-sized enterprises and marginalized communities in digital trade are positive steps.But there is also a need for international development cooperation, as well as regional and domestic inter-agency bestpractice and experience-sharing.PTAs contemplate such cooperation provisions.Although they admittedly lack legal enforceability, they are crucial to increasing understanding among domestic policymakers and regulators of the implications of liberalizing digital trade.The more frequent and inclusive (of participants and topics) these exchanges are, the more significant the long-term benefits of digital trade policies that effectively promote development and reduce inequality will be. 36Benjamin Larsen & Jayant Narayan, Generative AI: A Game-Changer that Society and Industry Need to Be Ready for, WORLD ECON.F. (2023). | 3,119.2 | 2023-05-08T00:00:00.000 | [
"Economics"
] |
Conditional observability
For a quantum Hamiltonian H =H(p) the observability of the energies E may be robust (whenever all E are real at all p) or, otherwise, conditional. Using a pseudo-Hermitian family of N-state chain models H we discuss some generic properties of conditionally observable spectra.
Introduction
At the low energies and for the sufficiently weak interactions, quantum mechanics is a reliable theory. A transition to the full-fledged apparatus of relativistic quantum field theory is, moreover, generally believed to offer its natural extension to the higher energies and/or to the stronger interaction forces. In between these two extremes, unfortunately, there exists a huge territory of open theoretical questions. The loss of the internal consistency of many approximative phenomenological models may be encountered. Their limits of validity at certain parameters are often marked by the loss of the reality of the measurable quantities. One of the best known illustrations of such a "quantum catastrophe" is the complexification of the ground-state energy in the solvable model of a Dirac fermion moving in a overcritical external Coulomb field [1]. The same effect is encountered for the Klein-Gordon boson moving in an external scalar field [2], etc.
A particularly popular simulation of the breakdown of quantum stability has recently been found via the study of complex, PT −symmetric local potentials admitting the completely real spectra [3]. Such an extension of the usual phenomenology offers multiple advantages in the context of physics (cf., e.g., its recent review [4]).
At the same time, its proper implementation requires a fairly complicated mathematical apparatus [5]. This is a serious technical obstacle for any easy explanation of the parameter-controlled transitions between the real and complex energies. In fact, within the class of the PT −symmetric local potentials the reality of the spectrum may prove extremely fragile [6]. For this reason one feels inclined to turn attention to the constructive study of the control of the observability of the energies (i.e., of the reality of all the eigenvalues of H) via some simplified models specified, say, by some finite-dimensional matrix Hamiltonians.
The first results in this direction were already obtained in our two recent remarks [7,8] where we discussed some phenomenological aspects of certain "first nontrivial", viz., two-and three-dimensional real-matrix models, respectively. In our present continuation of this effort we intend to extend our attention to the whole family of simplified matrix models of an arbitrary (say, even) dimension N = 2J.
The key encouragement of such a project has been found in our computer-algebrabased paper [9] where we revealed that a transparent mathematical structure can emerge not only in the above-mentioned low-dimensional models H (2) and H (3) but also in some of their specific tridiagonal generalizations H (N ) of any dimension N.
For our present purposes we pick up a subset of the latter models with the even dimensions N = 2J. The reason is pragmatic -one finds just purely formal differences between the separate even-and odd-dimensional series of the models H (N ) of ref. [9].
Our main message will be preceded by a brief summary of the state of the art in section 2. A more detailed explanation of the problem will then follow in section 3.
We emphasize there that several relevant properties of our family of the Hamiltonians may already be observed in its first, one-parametric real-matrix two-by-two member H (2) (λ). We point out the intimate connection between the parity-pseudo-Hermiticity of the model and the mechanism which makes the "observability" (i.e., the reality) of the energies lost during a smooth change of the λ−dependent matrix elements.
The next steps of our detailed analysis of chain-models H (N ) (λ) are performed in sections 4 and 5 giving a detailed description of the spectra at N = 4 and N = 6, respectively. In section 6 some of the quantitative conclusions of this analysis are found extensible to all the dimensions N = 2J. In particular, within the "catastrophic" loss-of-the-observability scenario we stress that the flexibility offered by the J independent matrix elements in H (2J) suffices for the simulation and enumeration of the eligible level-confluences igniting the imminent quantum collapse. Section 7 is a summary where we re-emphasize an exceptional suitability of our present models for deductions of some universal and generic qualitative conjectures.
2 The loss of observability in PT −symmetric examples A deep physical appeal of the PT −symmetric quantum mechanics (PTSQM, [10]) lies in the variability of its definition of the inner product in the "physical" Hilbert space of states [11,12,13]. The idea itself is in fact not too surprising since it has been discovered, forgotten and rediscovered in field theory [14], in perturbationtheory mathematics [15,16] as well as in nuclear physics etc [11,17]. After its recent extremely successful popularization by Bender and Boettcher [3] its use helped to clarify also the stability of a quantum particle in many complex quantum potentials [18] - [21].
A mathematical core of the PTSQM formalism lies in its work with pseudo-Hermitian Hamiltonians H such that H † = P H P −1 = H. The intertwiner P should be an elementary operator which is, very often, identified with the parity [3,16].
Then, the manifestly non-Hermitian Hamiltonian H can only become selfadjoint (or, in the language of ref. [11], quasi-Hermitian) with respect to a nontrivial, ad hoc metric operator Θ = I which depends on H = H(λ) and, hence, which can also vary with the parameter λ.
The selected Θ defines the inner product in the Hilbert space of states H (physical) so that it may become singular at some "exceptional" λ [22]. This is precisely what paved the way to the practical physical applications of the conditional reality of the eigenvalues in the single-particle relativistic context of Klein-Gordon equation [23] and of Proca equation [24] as well as in the papers on quantum anomalies [25], on supersymmetric models [26] or, beyond quantum-theory context, in cosmology [27] and in magnetohydrodynamics [28].
We believe that in virtually all of these (and many other) applications the conditional character of the reality of the eigenvalues plays the key role. For this reason let us now perform a certain systematic clarification of the related phenomenological as well as purely formal possibilities. 3 The loss/emergence of observability in a matrix example The feasibility of a controlled, conditional transition to the instability characterized by the complex eigenvalues may be understood as a direct consequence of the non-Hermitian nature of the models H = H(λ) for which the parameter λ either stays inside its "allowed" physical domain D or some of the energy eigenvalues E become complex. Of course, the variations of λ may be expected to depend on time. For this reason, let us change our notation and replace λ by the more explicit symbol t.
Whenever our "time" t reaches its critical value, the related complexification transition may be most easily visualized in the schematic two-by-two model of ref. [7], At all t < 1 this matrix is real and pseudo-Hermitian with respect to certain parity matrix P, The specific parametrization of its matrix elements has been chosen as giving closed formula for the two-point spectrum, E Obviously, these energies generated by the Hamiltonian (1) remain complex (i.e., not observable) along all the negative half-axis, t ∈ (−∞, 0) (cf. Figures 1 and 2). On the contrary, the matrix H (2) (t) becomes manifestly Hermitian (we could say, "conventionally physical") at t ∈ (1, ∞).
In the middle, "unconventionally physical" interval of parameters t ∈ (0, 1), our non-Hermitian Hamiltonian with real energies remains, in the terminology of the review paper [11], quasi-Hermitian.
The "unconventional" choice of t ∈ (0, 1) leaves our matrix H (2) tractable as a valid and acceptable selfadjoint representation of an observable quantity, provided only that our Hilbert space of states H (2) is properly re-defined (cf. the extensive accounts of this attitude, say, in the reviews [4,12] or in the proceedings [29]). In this sense the message delivered by our illustrative example (1) (1) and (2) we speak about the confluence of the real energy levels followed by their subsequent complexification. This convention will be preferred in what follows although, alternatively, we could also change it and call the right-ward t−development a "big-bang-like" decomplexification of the system which stayed unobservable at the earlier times.
Our specific t−parametrization of the matrix elements is privileged because it Our two latter comments are challenging: it is not obvious what can be expected to happen at some higher dimensions in a suitable matrix generalization of H (2) . A few answers will be offered in what follows.
The first nontrivial model with four levels
The two-parametric four-by-four chain-model is just one of the special cases of the four-parametric pseudo-Hermitian Hamiltonians studied in ref. [30]. The quadruplet of the eigenenergies of matrix (3) is obtainable in closed form, and Once we set we may fix the auxiliary constants A = B = 1 and stay safely inside the quasi-Hermiticity domain D at all the sufficiently small values of t > 0. In a way illustrated by Figure 3 the t−dependence of the energies remains smooth also when we cross the separation point t = t (GM ) = ( √ 5 − 1)/2 ≈ 0.618 between the Hermitian and non-Hermitian regimes with t > t (GM ) and t < t (GM ) , respectively.
Once we decide to weaken the attraction between the central energies and choose, say, A = 2 and B = 1, the energies split in the two well-separated pairs and they remain all real whenever t (QH) > 0.3104686356, i.e., to the right from the A−dependent quasi-Hermiticity boundary. The resulting t−dependence of the spectrum is displayed in Figure 4 where the complexifications of the two different off-diagonal ma- Between the vertical lines, an anomalous parity matrix must be chosen to define the pseudo-Hermiticity, In the complementary scenario we have to weaken the attraction between the peripheral energy levels using B > A. The choice of A = 1 accompanied by the weakly enhanced B = 3/2 leads to the result depicted in Figure 5. Due to the complexification of the most strongly attracted central pair of the levels, the quasi-Hermiticity is lost for t < t (QH) = 0.2761423749. This takes place safely below the upper boundary t (SP H) = 0.5485837704 of the pseudo-Hermitian regime specified by the standard parity operator (8). The latter feature is fragile. At the larger B = 5 we get t (SP H) = 0.3582575695 which is perceivably smaller than the complexification bound t (QH) = 3/5 (cf. Figure 6).
The next, more complicated chain model with six levels
In the six-by-six model of ref. [9], with comment that after a temporary reversal of our conventional arrow of time, Figure 7 mimics an even more interesting scenario of a "big-bang" development. From this point of view there exists no observable state of our schematic system at t < 0 and just a single, fully degenerate state with E = 0 emerges at t = 0. Subsequently, teh evolution becomes characterized by a steady repulsion of the levels.
After we return to our leftwards-running time convention, the peripheral-weakening choice of a dominant C will leave both the outermost levels far away, too weakly attracted and not sufficiently participating in the overall collapsing tendency. The first complexification will involve only the inner quadruplet of the energies. It may proceed either along the two-pair-complexification pattern indicated in Figure 4 The next eligible choice of dominant A will weaken the central attraction so that the overall loss of the quasi-Hermiticity will be caused by the simultaneous pairwise mergers inside the external energy triples. The explicit decision between the two existing possibilities of these E = 0 mergers will be controlled by the detailed balance between the size of B and C -for the weaker C the pattern will resemble Figure 4.
The last possibility corresponds to the full dominance of the constant B. This weakens the attraction between the central and peripheral energy pairs. A characteristic illustration is offered by Figure 9 where the choice of A = B/2 = C = 1 is shown 6 The general chain model with N = 2J levels One could move on and construct various sample spectra, numerically, at a number of the higher even dimensions N = 2J. The same family of the matrix models as mentioned in the previous section would still suit our purpose. The three main user-friendly features of all these models can be seen • in the fact that they represent a generalization of eq. (9), • in the nontrivial fact that the same parametrization can be used, (1 − ξ n ) , ξ n = t + t 2 + . . . + t J−1 + G n t J , n = 1, 2, . . . , J • in the fairly nontrivial recommendation that g (max) n = n (N − n) (cf. [9]). The numerical experiments of the two preceding sections indicate that the energy mergers in the spectra might admit a combinatorial classification. In such a perspective, Figure 1 is still trivial since for the mere two available energy levels at J = 1 there exists just their single possible merger. Still, this example enables us introduce a new convention that the two energies will be subscripted by their respective unperturbed integer values at g 1 = 0 (giving E ±1 (t) at J = 1, etc).
In this notation, the pair of the In such an approach we ignore the transitional multiple mergers as sampled in This means that the sequence of all the connections [±(2J −1), ±n] which involve the outermost energies will again possess L terms. We arrive at the second recurrence relation, The latter two recurrences are mutually coupled. Their numerical solution is straightforward, with a sample given in Table 1. Some of its properties are really remarkable.
For example, empirically one finds out that the first eight (!) elements of the sequence P (2J) coincide with certain binomial coefficients. For the next eight elements of this sequence, moreover, one still finds another unexpected regularity in the differences Table 1).
Conclusions
We may summarize that our detailed quantitative analysis of schematic matrix models revealed interesting generic qualitative features of the important phenomenological concept of quantum instabilities.
Firstly we saw that the very possibility of the conditional, parameter-controlled emergence of the quantum collapse is closely bound to the manifestly non-Hermitian character of the underlying Hamiltonians. Indeed, these operators only rarely enable us to suppress the well known robust mathematical stability of the spectra when they are chosen as manifestly Hermitian.
Of course, the removal of the Hermiticity (in the narrow sense of the invariance with respect to the matrix transposition and complex conjugation) does not lead to any conflict with the postulates of Quantum Mechanics. On the contrary, it enables us to make the models more flexible and more amenable to a direct control of the mechanism of the complexification of the eigenvalues.
In a way related to the simplicity of our examples another key merit of them can be seen in the nontriviality of the related metric Θ = I which can (and does) vary with the parameters. As long as the flexibility of the physics is directly encoded in Θ, one can conclude that the access to the onset and/or breakdown of the observability should be mediated by the selection of the "decisive" parameters.
We did not mention many other merits of our models (like, e.g., the particular advantages of their PT −symmetry, etc) because the arguments in this direction may be found elsewhere [4]. For compensation, let us finally note that the present outline of some properties of the quasi-Hermiticity domains could be understood, in some sense, as the first steps towards the formulation of a certain quantum analogue of the Thom's theory of catastrophes [31]. | 3,714.2 | 2007-04-28T00:00:00.000 | [
"Physics"
] |
Moduli Space of Bi-Invariant Metrics
In this work, we focus on describing the space of bi-invariant metrics in a Lie group up to isometry. I.e, that is, metrics invariant under both left and right translations. We show that $\mathfrak{BI}$, the moduli space of bi-invariant metrics, is an orbifold. Moreover we give an explicit description of this orbifold, and of $\mathfrak{EBI}$, the space of bi-invariant metrics equivalent under isometries and scalar multiplies.
Introduction
In the study of Riemannian manifolds, it is of interest to show how complicated is the collection of Riemannian geometries satisfying a certain property.For example, given a manifold M we could explore the class of all Riemannian geometries defined on M , such that the curvature induced by these geometries has a fixed lower bound; we could also consider the class of all geometries that turn M into an Einstein manifold.Moreover, one can ask if given two geometries with the desired property, we can deform one into the other via geometries satisfying the given property.This can be quantified in a precise way, by considering the moduli space of all Riemannian metrics satisfying the desired property.This set consist of all the Riemannian metrics up to isometry, equipped with the C ∞ -Whitney topology (see [13]).
For a Lie group G, a natural interesting family of Riemannian geometries consist of the ones that have G contained in their isometry group, i.e. left or right invariant Riemannian metrics.A smaller subfamily consist on the Riemannian geometries that are both left and right invariant, that is bi-invariant metrics.These Riemannian metrics satisfy non-negative lower curvature bounds (see [7]), and have important applications when studying spaces with a lower curvature bound (see [9]).
In this work we study the topology of the moduli space of bi-invariant metrics of a compact Lie group, obtaining the following theorem., are semi-simple subalgebras, such that any two factors of s are non isomorphic to each other, each b i decomposes as the sum of m i isomorphic factors, no simple factor of s is isomorphic to any simple factor of any b i , and a is an abelian subalgebra.Then the moduli space of bi-invairant metrics BI(g) is a contractible orbifold homeomorphic to Here BI(s) is the space of all possible bi-invariant metrics on s, and SP mi (R) is the m i th symmetric product of R.
As mentioned before bi-invariant metrics are useful in the construction of manifolds with lower curvature bounds.For example given a Lie group acting by isometries, Cheeger [3] gave a deformation procedure that will preserve non-negative curvature bounds, by fixing a bi-invariant metric.With this deformation one can construct new spaces with positive Ricci curvature [10].Moreover, for closed manifolds of cohomogeneity one, Grove and Ziller used a fixed bi-invariant metric to show the existence of invariant Riemannian metrics with non-negative curvature [4].
Thus it is relevant to study how bi-invariant metrics relate to each other.Due to the strong geometric properties of these metrics, it is natural that the homotopy information of the moduli space of bi-invariant metrics is simple.This is in contrast to the moduli-space of left invariant metrics as observed by Kodama, Takahara and Tamaru [6].
Nonetheless, we believe it is useful to have a full description of the topology of this moduli space.In particular observe that the conclusion in Theorem 1.0.1 states that the moduli space of bi-invariant metrics of a compact Lie group is an orbifold, and moreover, we can read the orbifold stratification from the Lie algebra decomposition.
It is important to note that the characterization of Theorem 1.0.1 only relies on the group admitting a bi-invariant metric and the decomposition of the semisimple factor in the Lie algebra, not on the compactness of the Lie group.The compactness stated in Theorem 1.0.1 is stated to guarantee the existence of a biinvariant metric.We recall that there are examples of non-compact and non-abelian simple Lie groups, such as SL(2, C), that do not admit bi-invariant metrics.
We start in Section 2 by collecting results that characterize Lie groups and Lie algebras that possess bi-invariant metrics.Specifically, we discover that the decomposition of these Lie algebras yields comprehensive insights into the behavior of the automorphism group on the space of bi-invariant metrics.This enables us to describe the space of bi-invariant metrics in Section 3. We do this using the factorization of the Lie algebra and a factorization of the action of the automorphism group of the Lie algebra.Likewise, we employ the same procedure to obtain the space of equivalent bi-invariant metrics, encompassing scalar multiples as well.In Section 4 we present explicitly the moduli spaces of bi-invariant metrics for Lie groups of dimension at most 6.We end by restating in Section 5 some results from [7].
to inner products on the Lie algebra g.Thus, the space M(g) (defined in [6] by Kodama, Takahara and Tamaru) consisting of inner products on the Lie algebra g corresponds to the space of left-invariant metrics on G.
2.1.Bi-invariant metrics.A Riemannian metric on G is called bi-invariant if it is invariant under both left and right translation.
We begin by recalling some facts about the adjoint representation.Each g ∈ G defines an inner automorphism ψ g : G → G, ψ g (x) = gxg −1 .Using this, we can define a group homomorphism Ad : G → Aut(g), called the adjoint representation of G, where Ad(g) : g → g is the differential of ψ g at the identity element of G.The differential of Ad : G → Aut(g) gives us a representation between the Lie algebras ad : g → Der(g), and thus we obtain the following commutative diagram.
ad exp e
Ad
Where "e" denotes the exponential map of Aut(g).Next, we state a couple results regarding bi-invariant metrics that can be found in [7].
Lemma 2.1.1.A left-invariant metric in a Lie group G is right-invariant if and only if Ad(g) : g → g is an isometry for any element g ∈ G.
For compact groups we have the following theorem that can be found in [2].Theorem 2.1.4.Any compact Lie group G admits a bi-invariant metric.
For a Lie algebra with a bi-invariant metric we have the following splitting theorem that can be found in [7].
Splitting Theorem 2.1.5.Let g be a Lie algebra with a bi-invariant metric.Then we have g = a 1 ⊕• • •⊕a k an orthogonal direct sum of simple ideals and commutative ideals without proper ideals, where the simply connected Lie group G associated with g can be expressed as the product A 1 × • • • × A k of normal subgroups.Furthermore, for each A i we have two options: (1) If a i is commutative, then it has dimension 1 and A i ∼ = R.
(2) If a i is non-commutative, then the center of a i must be trivial, A i has strictly positive Ricci curvature and A i is compact.
Remark 2.1.6.With the Splitting Theorem 2.1.5,we can say that any Lie algebra with a bi-invariant metric decomposes as g = s ⊕ Z(g), where s is a semi-simple Lie algebra.
For a connected Lie group that admits a bi-invariant metric, Milnor [7] gives the following description.Lemma 2.1.7.A connected Lie group admits a bi-invariant metric if and only if it is isomorphic as a group to the cartesian product K × R l , where K is compact.
2.2.
Curvature of Bi-invariant metrics.Given a left-invariant metric on G we have its associated Levi-Civita connection.We consider the curvature tensor R of the left invariant metric, and define the following curvature operator: for x, y ∈ g we define κ(x, y) = ⟨R xy (x), y⟩.For a bi-invariant metric, Milnor [7] shows that this curvature operator is nonnegative.
Moduli Space of Bi-Invariant Metrics
In this section, we examine the moduli space of bi-invariant metrics of a Lie group G.An advantage of this approach is the fact that to find left invariant metrics with non-negative sectional and scalar curvatures on a Lie group G, under certain conditions (see [7, p.297]), it will be sufficient to find a subgroup with a bi-invariant metric.In this way, knowing how many non isomorphic bi-invariant metrics the subgroup has, we have a rough measure of how many, non isomorphic left invariant metrics with non-negative sectional and scalar curvatures the group G possesses.
3.1.Isometry Classes of Bi-Invariant Metrics.We denote the collection of inner products on a Lie algebra g by M(g).Identifying an inner product on M(g) with a matrix, we endow M(g) with the relative topology as a subspace of a matrix space.Definition 3.1.1.For a Lie algebra g, we define the set of left invariant metrics that are bi-invariant in g, as follows We want to study the moduli space of bi-invariant metrics BI(g) consisting of the isometry classes of BI(g) and the space EBI(g) of conformally equivalent classes BI(g).To do this we introduce the following definition.
Lie algebras with inner products.We say that: (1) They are isometric if there exists a Lie algebra isomorphism ϕ : (2) They are conformally equivalent, if there exists a Lie algebra isomorphism ϕ : g 1 → g 2 and a real number λ > 0 such that It is clear that both are equivalence relations on BI(g).Now we are going to prove that an equivalence class of a bi-invariant metric contains only bi-invariant metrics.
Definition 3.1.4.For a Lie algebra g with a bi-invariant metric we define: (1) BI(g) as the space of isometry classes of BI(g) (2) EBI(g) as the space of conformally equivalent classes of BI(g) We conclude from the definition of both equivalence relations the following corollary.
Corollary 3.1.5.For a Lie algebra g with a bi-invariant metric, we have: (1) BI(g) is the orbit space BI(g)/Aut(g) under the left action (2) EBI(g) is the orbit space BI(g)/R × Aut(g) where The simplest case is when we have an abelian Lie algebra.Corollary 3.1.7.Let g an abelian Lie algebra, then we have that BI Proof.In an abelian Lie algebra any metric is bi-invariant, in addition we have GL(g) = Aut(g).This implies that two metrics in g are always related under an automorphism, thus Simplifying to the compact and semi-simple case.Using the Splitting Theorem 2.1.5and Lemma 3.2.1 we can describe the group Aut(g) for a Lie algebra with a bi-invariant metric.From now on, every time we write g = s ⊕ Z(g) we refer to a Lie algebra that admits a bi-invariant metric as indicated in Remark 2.1.6.
Before starting the study of Aut(g), we are going to prove the following lemma.
where each a i is simple, so a i is non abelian and contains no nonzero proper ideals.This implies that [a i , a i ] = a i .Therefore given x ∈ g we have For a Lie algebra with a bi-invariant metric g ∼ = s ⊕ Z(g) we have Aut(g) ∼ = Aut(s) ⊕ Aut(Z(g)).
Proof.For ϕ ∈ Aut(g), we will see that ϕ(s) ⊂ s and ϕ(Z(g)) ⊂ Z(g).Take x ∈ s, so by Lemma 3.2.1 we know that For the Lie algebra g ∼ = s ⊕ Z(g), the spaces BI(g) and EBI(g) are homeomorphic to BI(s) and EBI(s).
Proof.Let us consider a metric ⟨•, •⟩ g on g ∼ = s⊕Z(g).This metric can be expressed as a sum of metrics ⟨ Thus we can identify the moduli space Since Z(g) is abelian, by Corollary 3.1.7we have In conclusion we have that for g ∼ = s ⊕ Z(g) the moduli spaces BI(g) and EBI(g) are homeomorphic to BI(s) and EBI(s) respectively.□ With this lemma, we can conclude that the study of moduli spaces BI(g) and EBI(g) reduces to the case of compact and semi-simple Lie groups.We know from Theorem 2.1.4that any compact Lie group admits a bi-invariant metric.The following lemma will be useful for examining what happens when the group is compact and simple.Lemma 3.2.4.If we have a compact and simple Lie group, then the bi-invariant metric is unique up to a positive scalar multiple.
The reader interested in a proof, may consult [2].Remark 3.2.5.In [2, Theorem 2.35], it is verified that the Killing form, B(X, Y ) = tz(ad(X)ad(Y )) is Ad invariant.Furthermore, if the group is compact, connected, and semi-simple, then B is negative definite and −B plays the role of a bi-invariant metric.In the case of Lemma 3.2.4,the unique metric up to scalar multiple is represented by the Killing form.That is, for any bi-invariant metric ⟨•, •⟩ on g there exists α > 0 such that −αB = ⟨•, •⟩.
Then, ⟨•, •⟩ restricted to each a i is bi-invariant in a i , and for all u ∈ a i , v ∈ a j with i ̸ = j, it holds that ⟨u, v⟩ = 0 Proof.Let's recall that the Lie bracket in s decomposes as [ where each [•, •] i is the bracket of the summand a i .Now, let x, y, and z be in This implies that ⟨•, •⟩ restricted to a i is skew-adjoint, and by Lemma 2.1.2,it is bi-invariant in a i .
Finally, let u ∈ a i , v ∈ a j with i ̸ = j.Using Lemma 3.2.1,since a i is simple, we have As previously mentioned in Lemma 3.2.3, the study of our moduli spaces is reduced to the case of compact, semi-simple groups.The following result gives us a description of the space BI(s), when s semi-simple.Theorem 3.2.7.If S is a compact, connected, and semi-simple Lie group with Lie algebra decomposition s = a 1 ⊕ • • • ⊕ a k , where a i is simple, it holds that BI(s) can be identified with we know from Lemma 3.2.4 that each a i admits a unique bi-invariant metric up to positive scalar multiples, and without loss of generality by Remark 3.2.5 we can assume that it is given by the Killing form −B i .In this way, for each α i > 0, the metric As a result, ⟨•, •⟩ s decomposes uniquely as a bi-invariant metric in each summand, so we have that Thus, we can identify BI(s) with {(α 1 , ..., α k )|α i > 0}.□ Remark 3.2.8.From Theorem 3.2.7,we can conclude that in a compact and semisimple group S with s = a Remark 3.2.9.Note that in this case, the space BI(s) = {(α 1 , ..., α k )|α i > 0} has the structure of a group with the product of real numbers entry by entry.This fact has strong consequences later.
3.3.
Action of the automorphisms group on BI.By Lemma 2.1.1 we recall that a bi-invariant metric on a Lie group G is invariant under Ad(g) for all g ∈ G. Thus any metric in BI(g) remains fixed under the action (Ad(g This leads us to studying the action of the automorphisms of g that are distinct from the ones given as Ad(g).In order to give a complete description of the moduli spaces of bi-invariant metrics, we need to know more about the group of automorphisms of a semi-simple Lie algebra.Definition 3.3.1.For G a connected Lie group, we define the group of inner automorphisms of the Lie algebra g as It is easy to verify that Inn(g) does not depend on G.In [12, Chapter II] we can find the following result.Remark 3.3.3.The inner automorphisms of g are induced by inner automorphisms of G, since for each g ∈ G, Ad(g) is the differential of some inner automorphism of G.When g is semi-simple, then Inn(g) = Aut 0 (g), that is, the connected component of the identity in Aut(g) is comprised solely of internal automorphisms.
In this way, we know that if S is compact and semi-simple then for its Lie algebra s, the action of Inn(s) on BI(s) is trivial, so we must only study how the outer automorphisms of the Lie algebra act on BI(s).Definition 3.3.4.For a Lie algebra g, we define the group of outer automorphisms as Out(g) = Aut(g)/Inn(g).
We recall that for a Lie group G, the quotient G/G 0 is a group where each element corresponds to a connected component of G.In [8, Corollary 2] it is verified that for a connected and semi-simple Lie group we have that Aut(G)/Aut 0 (G) is finite.In this way, we conclude that if s is a semi-simple Lie algebra, then Aut(s) has a finite number of connected components and therefore Out(s) is a finite group.Proof.Take the bi-invariant metric given by −B, where B is the Killing form of g.Let ϕ ∈ Out(g).As the metric ϕ * (−B) is bi-invariant, then by Lemma 3.2.4ϕ * (−B) = α(−B) for some α > 0. Since Out(g) has finite order, then ϕ n = Id for some n ∈ N, and thus (ϕ n ) * (−B) = α n (−B) = −B.This implies that α = 1 and therefore the action of Out(g) is trivial on BI(g).□ Remark 3.3.6.We have that for a compact and simple group G, the action of Aut(g) on BI(g) is trivial.With this we conclude that BI(g) = BI(g) = R + and EBI(g) = {⟨•, •⟩ 0 }.Now we study the semi-simple case.To do this, we review the following results.
Proposition 3.3.7.If u is an ideal in a Lie algebra g and ϕ ∈ Aut(g), then ϕ(u) is an ideal in g.
Proof.Let u ∈ u and x ∈ g.We observe that Thus ϕ(u) is an ideal in g. □ Corollary 3.3.8.Let s = a 1 ⊕• • •⊕a k be a semi-simple Lie algebra and ϕ ∈ Aut(g).
Then ϕ(a i ) = a j for some j.
Proof.We know that ϕ(a i ) is a simple ideal of g.Furthermore, for all j, ϕ(a i ) ∩ a j is an ideal of a j .But since a j is simple, then ϕ(a i ) ∩ a j = 0 or ϕ(a i ) ∩ a j = a j .In the second case, we have that a j ⊂ ϕ(a i ) but since ϕ(a i ) is simple, then ϕ(a i ) = a j .Also, if ϕ(a i ) ∩ a j = 0 for all j, then ϕ(a i ) = 0 which is a contradiction.□ The next step will be to study the semi-simple case where all the factors are non-isomorphic to each other.Proof.Let ϕ ∈ Aut(s).Since all the summands are not isomorphic to each other by the previous corollary we have that ϕ(a i ) = a i .Thus we conclude that ϕ = ϕ 1 + • • • + ϕ k , where each ϕ i ∈ Aut(a i ), which finishes the proof.□ Lemma 3.3.10.Let G be a compact, connected, and semi-simple Lie group with Lie algebra g = a 1 ⊕ • • • ⊕ a k , where the summands are not isomorphic to each other and are simple.Then the action of Out(g) on BI(g) is trivial.
) be the bi-invariant metric given by the sum of the Killing forms B i in each a i .We define the mapping From the previous lemma, we know that In this way, we conclude that In Remark 3.2.9we had identified BI(g) with the multiplicative group of vectors with k positive real entries.From what we have seen above, we conclude that Φ is a group homomorphism and the image of Out(g) is a finite subgroup of BI(g).But this group does not have any non-trivial finite subgroups, so we conclude that Φ is trivial and the action of Out(g) on BI(g) is also trivial.□ This small detail allows us to relate Aut(g) to the symmetric group of permutations.We start with the following definition.Definition 3.4.2.Given a set X and S n the symmetric group of permutations of n elements, we define the left action of S n on In conclusion, we have that each automorphism of h 1 ⊕ • • • ⊕ h k is given by automorphims ϕ i ∈ Aut(h i ) for i = 1, . . ., k, and a permutation σ ∈ S k , where it holds that, if x i ∈ h i , then ϕ i (x i ) ∈ h σ(i) .Now we will see what happens when each h i is the Lie algebra of a simple and compact group.Theorem 3.4.4.Let H be a connected, simple, and compact Lie group and consider the product of H k-times, G = H × • • • × H. Then the action of Aut(g) on BI(g) coincides with the action of the symmetric group S k .Observe that the inner product ⟨•, •⟩ can be decomposed as a sum of inner products: Proof.Let ϕ ∈ Aut(g) and let us see how this automorphism acts on a bi-invariant metric ⟨•, •⟩ 0 ∈ BI(g).We know that ⟨ , where B is the Killing form on H and each α i > 0. Using Observation 3.2.8we know that the decomposition h ⊕ • • • ⊕ h is orthogonal for any bi-invariant metric.From this we obtain But as we saw in Lemma 3.3.5 the action of the automorphism group of a compact and simple group on the space BI(h) is trivial.With this we have that Thus we have that the action of Aut(g) on BI(g) is given by the action of the symmetric group S k .□ Definition 3.4.5.We consider the action of the symmetric group of permutations where X is a topological space, given by (σ, (x 1 , ..., x n )) → (x σ(1) , ..., x σ(n) ).
We define the nth symmetric product of X as SP n (X) = X n /S n .
By [1], when M is a differentiable manifold, then SP n (M ) has an orbifold structure, and if M = R then SP n (R) is homeomorphic to the product R×(R + ∪{0}) n−1 .Theorem 3.4.6.If H is a compact, connected, simple Lie group and G is the product of H k-copies of H, i.e.G = H × • • • × H, then the moduli space BI(g) is homeomorphic to the kth symmetric product of R, With this we conclude that the isometry class of a bi-invariant metric is the set [(α 1 , ..., α k )] = {(α σ(1) , ..., α σ(k) )|σ ∈ S k }.It follows that □ Remark 3.4.7.In this case, BJ(g) is homeomorphic to R × (R + ∪ {0}) k−1 , which is homotopic to a point and thus contractible.Remark 3.4.8.In the case of EBI(g), recall that by Remark 3.1.6we have that the actions of Aut(g) and R × on BI(g) commute, so we can first consider the space BI(g)/R × and later factor it with Aut(g).Since BI(g) is formed by vectors of strictly positive k real numbers, we can think of the space BI(g)/R × as S k−1 + = S k−1 ∩ BI(g), and in this way EBI(g) = S k−1 + /Aut(g).Having said this, we can state the following corollary.given by and each b i is semi-simple and decomposes as the sum of m i isomorphic factors but none of these factors is isomorphic to any a j .Then BI(g) is homeomorphic to Proof.Let ϕ ∈ Aut(g).By Corollary 3.3.8,we have ϕ = ψ + φ 1 + • • • + φ l where ψ ∈ Aut(s) and each φ i ∈ Aut(b i ).We know that the action of ψ on BI(s) is trivial and the action of φ i on BI(b i ) is through a permutation σ ∈ S mi .Therefore, upon factoring BI(g) with Aut(g), we have that BI(g) is homeomorphic to the product And for the space EBI(g) we have the following theorem.
Theorem 3.5.2.Let G be a connected, compact, and semi-simple Lie group with and each b i is semi-simple and decomposes into a sum of m i isomorphic factors, but none of these factors are isomorphic to any a j .Then EBI(g) is homeomorphic to In both cases, the spaces BI(g) and EBI(g) are products of contractible spaces and therefore they are contractible as well.Also note that for a Lie group G admitting a bi-invariant metric, the description of these spaces depends on the decomposition of the Lie algebra into semi-simple components, and in particular on the number of simple components which are isomorphic.
Figure 1.Since BI(so(4)) = (R + ) 2 , each metric is represented by a pair of positive real numbers.Therefore, we can visualize the space BI(so(4)) as the positive quadrant of R 2 , where each point (α 1 , α 2 ) represents a bi-invariant metric.Taking the quotient of BI(so(4)) by S 2 , each (α 1 , α 2 ) is identified with (α 2 , α 1 ).This implies that BI(so(4)) is the shaded area with the boundary given by the line y = x.The dotted line passing through the shaded area represents an element of EBI(so(4)), so this space is represented by the circular segment extending from the x-axis to the line x = y.
Curvature
Using the language developed so far, we can restate some results from Milnor's previous work in [7] as follows.Consequently, we have the following corollary.
Theorem 3 . 3 . 2 .
Let G be a connected semi-simple Lie group.Then the connected component of the identity of the automorphism group Aut 0 (G) in Aut(G) coincides with Inn(G).
Lemma 3 . 3 . 5 .
If G is a compact and simple Lie group then the action of Out(g) on BI(g) is trivial.
3. 4 .Theorem 3 . 4 . 1 .
Description of BI and EBI.With what has been developed so far, we can state the following theorem.For a compact, connected, and semi-simple Lie group G with g = a 1 ⊕ • • • ⊕ a k where the summands are simple and pairwise non-isomorphic, it follows that BI(g) = BI(g).Furthermore, EBI(g) = BI(g)/R × corresponds to the set of vectors with k − 1 strictly positive real number entries.Now we are going to study the case G = H × • • • × H, where H is a compact and simple Lie group, with Lie algebra | 6,081.6 | 2023-05-23T00:00:00.000 | [
"Mathematics",
"Physics"
] |
Semaphorin 3C and Its Receptors in Cancer and Cancer Stem-Like Cells
Neurodevelopmental programs are frequently dysregulated in cancer. Semaphorins are a large family of guidance cues that direct neuronal network formation and are also implicated in cancer. Semaphorins have two kinds of receptors, neuropilins and plexins. Besides their role in development, semaphorin signaling may promote or suppress tumors depending on their context. Sema3C is a secreted semaphorin that plays an important role in the maintenance of cancer stem-like cells, promotes migration and invasion, and may facilitate angiogenesis. Therapeutic strategies that inhibit Sema3C signaling may improve cancer control. This review will summarize the current research on the Sema3C pathway and its potential as a therapeutic target.
Introduction
The connection between neural networks and cancer has long been recognized. In almost all solid tumors, perineural invasion is recognized as an important adverse prognostic feature. This suggests that cancers have evolved mechanisms to grow and spread along nerves or recruit nerves along which they can proliferate and migrate. More recent data reveal that neural stimulation can trigger a release of neurotransmitters that contribute to the growth, differentiation, and proliferation of tumor cells and cancer stem-like cells (CSCs) [1]. Indeed, many neural developmental programs are hijacked by cancer cells to promote their own growth, survival, and invasion. These programs include axonal guidance proteins and their receptors, notably the Eph/ephrin [2], Slit/Robo [3], neurotrophin [4], Netrin/DCC/UNC5 [5,6], and Semaphorin/Neuropilin/Plexin families of proteins [7][8][9][10][11][12][13].
Intriguingly, these axonal guidance systems not only promote cancer growth themselves, but also promote migration and angiogenesis. Considering that in development, the nervous system and blood vessels grow together in parallel, it is not surprising that these neurodevelopmental programs also help to shape the vasculature and that the nervous and cardiovascular systems share signaling pathways. In this review, we will discuss the role of Semaphorin 3C (Sema3C) as an example of one axonal guidance cue that drives cancer progression.
The Semaphorin protein family is a large family of axon guidance molecules that was first found to help shape the developing nervous system [14]. The proper wiring of the nervous system requires that axons be guided to their targets with high precision to form proper synapses. Semaphorins serve as cues for axons to navigate through their environment, with some semaphorins serving as attractants and others serving as repellants. Proteins in the semaphorin family all share a common sema domain and a plexin-semaphorin-integrin (PSI) domain. There are more than 20 semaphorins divided into 8 classes that are found in invertebrates, vertebrates, and viruses [13,15]. These semaphorins include transmembrane proteins, membrane-anchored proteins, and secreted proteins. In general, semaphorins dimerize and bind to a pair of neuropilins which recruit a pair of plexin co-receptors. There are two neuropilins, Nrp1 and Nrp2, and nine plexin co-receptors that are divided into four classes. Sema3C binds to Nrp1 and Nrp2 with similar affinity [7]. Semaphorin 3E (Sema3E) is the only semaphorin that has been shown to bind directly to a plexin receptor and activate signaling independent of the neuropilins [16]. Considering the numerous combinations of semaphorins, neuropilins, and plexins, a large number of cellular responses can be generated. Sema3C and its receptors are frequently overexpressed in cancer and are associated with invasion and metastasis. In this review, we will introduce the role of Sema3C signaling in development and focus on the role of Sema3C and its receptors in oncogenesis and in CSCs.
Sema3C Function in Development
Class 3 semaphorins are secreted proteins whose spatial distribution across gradients can counterbalance each other to fine tune cellular responses [17]. In neurodevelopment, opposing gradients of Sema3C and Sema3A expression help to guide axons to their target [18]. Sema3C functions as an axon attractant in migrating cortical axons, whereas Sema3A functions as a repellant [18]. Sema3C also promotes axonal growth of dopaminergic neurons in humans and rats [19,20], is essential for cortical and hippocampal neuron polarization and migration [21][22][23], and helps guide motor neurons to their targets [24,25], revealing a diverse range of neuronal projections that Sema3C regulates.
Sema3C is also implicated in the development of the enteric nervous system. Loss of function mutations of Sema3C are found in patients with Hirschsprung disease, a congenital disease in which the enteric nervous system fails to form in parts of the intestine [26]. In Crohn's disease, which is an inflammatory bowel disease, increased Sema3C expression in intestinal crypts correlates with a reduction in mucosal sympathetic nerve fibers [27]. The loss of sympathetic nerves was thought to promote inflammation.
Interestingly, Sema3C knockout mice do not appear to have defects in their nervous system [28]. Instead, these mice exhibit cardiovascular deficits, including cardiac outflow tract and aortic arch defects. Knockout mice are cyanotic and die within 24 h after birth. The failure of cardiac neural crest cells to migrate properly results in improper septation of the cardiac outflow tract [28]. The penetrance of the phenotype is largely dependent on the strain of mouse, with the CD1 background strain exhibiting the greatest penetrance [28].
The mechanisms behind these developmental abnormalities have been examined using tissue-specific and ligand-specific mouse mutants. Neural crest cells secrete Sema3C which binds to Nrp1 in endothelial cells to promote endothelial-to-mesenchymal transition, and these dedifferentiated cells and neural crest cells participate in septation of the outflow tract [29,30]. Others have shown that expression of attractive cues from Sema3C and PlexinD1 and repulsive cues from Sema3D, Sema6A, Sema6B, and PlexinA1 guide the migration of cardiac neural crest cells [31,32]. Further supporting a role for Sema3C in cardiac development comes from genetic analysis of patients with persistent truncus arteriosus (PTA). These patients have mutations in the transcription factor GATA6, which is unable to transactivate Sema3C and its receptor PlexinA2 [31,33]. Other transcription factors, including Foxc1, Foxc2, and Tbx1, participate in cardiac outflow tract development by regulating Sema3C expression [34][35][36]. Mutation of PlexinD1 is also found in truncus arteriosus [36].
Sema3C and Sema3A also play important roles in the development of many other organs. In lung development, Sema3C stimulates lung branching morphogenesis in the central lung, whereas Sema3A reduces branching in the distal mesenchyme in the lung periphery, thereby guiding proper branching [37]. Interestingly, in a rat bronchopulmonary dysplasia model, Sema3C treatment reduced inflammation and apoptosis to maintain alveolar and lung vascular growth, suggesting that Sema3C may facilitate lung repair [38]. In kidney development, Sema3C and Sema3A similarly help to shape renal ureteric bud development and formation of the glomerular filtration unit [39,40]. Sema3C promotes ureteric bud and glomerular endothelial cell morphogenesis, whereas Sema3A negatively regulates these processes. Thus, the spatial distribution and gradients of expression of Sema3C and Sema3A across an organ regulate proper tissue patterning. The roles of Sema3C in development are summarized in Figure 1.
Sema3C in Cancer and Cancer Stem-Like Cells
Sema3C plays an oncogenic role in many different types of cancers. Sema3C is an indicator of poor prognosis and progression of glioblastoma, prostate cancer, breast cancer, liver cancer, gastric cancer, pancreatic cancer, and lung cancer [11,[41][42][43][44][45][46][47][48]. Whether other semaphorins cooperate with or compete with Sema3C in cancer is not well understood.
In gliomas, Sema3C and its receptors are overexpressed in human glioma cell lines [49]. Sema3C is overexpressed in 85% of glioblastoma and its receptors PlexinA2 and PlexinD1 are detected in all glioblastoma specimens analyzed [50]. Analysis of human brain tumor samples revealed that Sema3C protein levels are markedly increased in glioblastoma (grade IV) compared to grade I-III astrocytomas, and this increase in Sema3C expression was associated with shorter survival time [51]. Additionally, Sema3C has been found to be one of the top 20 most frequently altered genes in glioblastoma [52]. These genetic changes include amplification and missense mutations [52].
Cancer stem-like cells (CSCs) promote cancer progression due to their capacity for self-renewal and invasion and intrinsic ability to repair DNA damage [53][54][55]. In glioblastoma, Sema3C functions as a survival and invasion cue for glioma stem-like cells (GSCs) [50]. Sema3C and its receptors PlexinA2 and PlexinD1 are coordinately expressed by GSCs, and together, they form a feed-forward autocrine and paracrine loop to stimulate GSC survival and invasion [50]. Sema3C binding to the Nrp1/PlexinA2/PlexinD1 receptor complex was found to activate Rac1/NF-κB signaling to promote the survival and migration of GSCs [50]. In GSCs, both PlexinA2 and PlexinD1 co-receptors are needed to transduce Sema3C signals as knockdown of either one of the plexins induces apoptosis. In GSCs, signaling through two different types of plexins in the receptor complex, as opposed to the more usual plexin homodimer, adds an additional layer of complexity to semaphorin signaling.
In glioblastoma, Sema3C was selectively expressed in GSCs but not in their counterpart neural progenitor cells or non-stem tumor cells [50]. Forced differentiation of GSCs resulted in loss of Sema3C expression. This suggests that GSCs have independently evolved a way to reactivate Sema3C expression, and upon differentiation, Sema3C expression is turned off. These studies demonstrate a key role for Sema3C in maintaining GSCs and identify Sema3C as an important therapeutic target in glioblastoma [50].
In prostate cancer, increased expression of Sema3C strongly correlates with biochemical recurrence [56] and castration resistance [48]. Overexpression of Sema3C in prostate cancer cell lines enhances invasion [57] and facilitates stem cell marker expression and tumorsphere formation, suggesting a role for Sema3C in maintaining prostate CSCs [47]. Further stem cell assays are needed to confirm the function of Sema3C in prostate CSCs. Sema3C overexpression contributes to resistance to androgen deprivation therapy through activation of multiple growth factor receptors, including epidermal growth factor receptor (EGFR), ErbB2, and Met through PlexinB1 [48]. Expression of Sema3C in prostate cancer is induced by the androgen receptor and GATA2 and negatively regulated by FOXA1 [46]. Increased expression of Sema3C in prostate cancer tissues can also be attributed to hypomethylation of its promoter [58]. These studies support a role for Sema3C in prostate cancer progression and potentially in prostate CSC maintenance.
In breast cancer, Sema3C appears to promote tumor progression. Sema3C is expressed more highly in triple-negative and Her2-positive breast cancer, two aggressive subtypes that are highly metastatic [45]. In breast cancer cell lines, Sema3C depletion reduces cell proliferation and migration [59,60]. Cleavage of Sema3C by the metalloproteinase ADAMTS1 facilitates its release from the extracellular matrix to bind its receptors on cancer cells to promote cell migration [61]. Another report suggests that furin cleavage activates Sema3C [62]. In support of this, a furin-resistant form of Sema3C impairs lymphangiogenesis and reduces metastasis [62]. Because furin is expressed in many tumors, Sema3C is likely processed into its active form to promote tumor progression.
Neuroblastoma is a pediatric cancer derived from sympatho-adrenal neural crest cells. In contrast to epithelial cancers, Sema3C serves as a cohesion cue in neuroblastoma [63]. Downregulation of either Sema3C or its receptor PlexinA4 induces neuroblastoma dissemination [63]. Signaling was dependent on Nrp1 and Nrp2, as inhibition of both Nrp receptors was required to promote metastasis. Together, these studies indicate that Sema3C functions in a cell-type and context-dependent manner.
Crosstalk between tumor cells and blood vessels mediated by Sema3C may also modulate tumor progression. The role of Sema3C in modulating angiogenesis is incompletely understood. Blood vessels are needed to support growing tumors, and Sema3C, Nrp1, and PlexinD1 play important roles in shaping the vasculature in development [64]. Knockout mice of Sema3C, Nrp1, and PlexinD1 exhibit similar cardiovascular defects [28,64,65]. In tumors, the role of Sema3C is less clear, with some reports supporting that Sema3C promotes angiogenesis and other reports suggesting that it inhibits angiogenesis. In glioblastoma, Sema3C-positive cells were found in the perivascular niche where GSCs are known to reside [50]. Whether Sema3C secreted by GSCs recruited endothelial cells, which express Sema3C receptors, to their vicinity is not clear. In breast cancer, Sema3C expression correlates with increased microvessel density, but in oral cancer, Sema3C levels inversely correlates with microvessel density [45]. As discussed above, expression of a furin-resistant mutant of Sema3C inhibits lymphangiogenesis and reduces metastatic spread of a triple-negative breast cancer cell line [62]. Exogenous administration of either wild-type or furin-resistant Sema3C can reduce pathologic neoangiogenesis in the retina by inducing apoptosis of endothelial cells in immature microvessels [66,67].
The mechanism by which Sema3C regulates angiogenesis is complex. Sema3C binding to Nrp1 or Nrp2 and PlexinD1 on endothelial cells can promote angiogenesis [64]. Nrp1 can also interact with vascular endothelial growth factor receptor (VEGFR) and modulate its responsiveness to vascular endothelial growth factor (VEGF) [68,69]. Sema3C binding to Nrp1 or Nrp2 may reduce the binding of VEGF to the VEGFR/Nrp1 receptor complex, thereby serving as a competitive inhibitor of VEGF [66]. Class 3 semaphorins have also been found to modulate the immune microenvironment (reviewed in [70]), but the contribution of Sema3C in mediating this process is unclear. More studies are needed to dissect the role of Sema3C in modulating the tumor microenvironment.
Sema3C is identified as a drug resistance gene in cancer cell lines [71]. Sema3C is overexpressed in cisplatin-resistant ovarian cancer cell lines. Sema3C confers resistance not just to chemotherapy but also X-ray and UV irradiation. These data suggest that inhibition of Sema3C signaling may sensitize cancer cells to cytotoxic therapy. It would be interesting to explore the role of Sema3C in therapeutic resistance in other cancer types. The roles of Sema3C signaling in cancer progression are summarize in Figure 2.
Sema3C Receptors in Carcinogenesis
Both neuropilins and plexins have been implicated in cancer (reviewed in [7,8,[72][73][74]). As Sema3C signaling has been implicated in GSC maintenance, we will focus our discussion on its receptors in glioblastoma. Elevated Nrp1 expression is a risk factor for glioblastoma recurrence and shorter patient survival [75]. Disruption of Nrp1 signaling by siRNA knockdown significantly inhibits proliferation of glioma cell lines [76] phenocopying Sema3C knockdown.
The role of Nrp2 in glioblastoma is controversial. One study supports that Nrp2 expression is low in both low-and high-grade gliomas [86] while another study suggests that Nrp2 is expressed highly in 37% of glioblastoma and that its concomitant overexpression with VEGF-C correlates with poor patient survival [87]. The extent to which Sema3C or other ligands that bind Nrp1 and Nrp2 modulate response to chemotherapy and anti-angiogenic therapy warrants further investigation.
The role of plexins in glioblastoma has not been fully investigated. In GSCs, Sema3C requires both PlexinA2 and PlexinD1 to transduce Sema3C pro-survival signaling [50]. PlexinD1 is also expressed in tumor-associated blood vessels [88], and it is possible that secretion of Sema3C from GSCs in the perivascular niche can recruit and communicate with endothelial cells. Indeed, secretion of soluble factors, such as VEGF, from GSCs promotes angiogenesis [89], and GSC-endothelial cell crosstalk reinforces the stem cell phenotype and resistance to radiotherapy [90].
Therapeutic Strategies Targeting Sema3C and Its Receptors
In many tumors, Sema3C appears to promote cancers and in particular CSC survival. In glioblastoma, Sema3C does not appear to be expressed in neural progenitor cells, yet is highly expressed in GSCs [50]. Conversely, PlexinA2 and PlexinD1 are used by neural progenitor cells, likely as receptors for other semaphorins, and knockdown of PlexinA2 or PlexinD1 in these cells induces apoptosis. Together, these data suggest that targeting Sema3C in glioblastoma may have a favorable therapeutic ratio. In GSCs, Rac1 appears to be a critical signaling hub activated by Sema3C to promote survival and invasion [50]. GSCs are more sensitive to Rac1 inhibition compared to neural progenitor cells [50], suggesting that targeting Rac1 may also have a large therapeutic window. Additionally, as Sema3C signaling promotes GSC survival through NF-κB pro-survival signaling, antagonizing Sema3C or Rac1 may also sensitize GSCs to radiation and chemotherapy.
In prostate cancer, Sema3C appears to modulate multiple mitogenic pathways [48]. Combined targeting of Sema3C and these other pathways may reduce resistance mechanisms. Given the role of Sema3C in androgen resistance, targeting Sema3C offers promise for patients with castration-resistant prostate cancer, who have few therapeutic options. As seen in neuroblastoma, Sema3C may be tumor-suppressive and suppress metastasis. Therefore, strategies that inhibit Sema3C signaling will need to be examined in the proper context.
Targeting neuropilins may also have therapeutic value and has been the subject of excellent reviews [72,91]. Treatment of animal models of glioblastoma and breast cancer with peptides encoding the neuropilin transmembrane domain that interfere with receptor dimerization resulted in reduced tumor burden, metastasis, and angiogenesis [92,93]. Further work on antagonizing Sema3C or its receptors in the context of anti-angiogenic therapy and other targeted therapies is needed.
The quest to develop small molecule inhibitors targeting Sema3C or its receptors has not yet been fruitful [73]. However, specific inhibitors of Sema3A, xanthofulvin and vinaxanthone, have been isolated from the broth of fermented fungus [94][95][96]. Synthesis of these compounds and their derivatives has shown promise in facilitating axonal regeneration after injury in model organisms [97,98]. Similar approaches may be used to find inhibitors of Sema3C signaling. Strategies that take into consideration cell type and cellular context are needed to translate anti-Sema3C therapies into the clinic.
Future Perspectives
The current literature supports an oncogenic role of Sema3C and its receptors in most malignancies. Despite the data collected from patients and animal models, there are still some important questions to be answered. What is the extent of crosstalk between Sema3C signaling and other oncogenic pathways in cancer cells themselves, blood vessels, immune cells, and other stromal cells? How does Sema3C maintain the stemness of GSCs and other CSCs? How does Sema3C signaling contribute to therapeutic resistance? What is the role of Sema3C in guiding tumor angiogenesis and creation of the perivascular niche? A more comprehensive understanding of Sema3C signaling in cancer will guide us in integrating anti-Sema3C therapies into clinical care. | 4,057.2 | 2018-04-08T00:00:00.000 | [
"Biology"
] |
Laser Scanning Based Surface Flatness Measurement Using Flat Mirrors for Enhancing Scan Coverage Range
: Surface flatness is an important indicator for the quality assessment of concrete surfaces during and after slab construction in the construction industry. Thanks to its speed and accuracy, terrestrial laser scanning (TLS) has been popularly used for surface flatness inspection of concrete slabs. However, the current TLS based approach for surface flatness inspection has two primary limitations associated with scan range and occluded area. First, the areas far away from the TLS normally suffer from inaccurate measurement caused by low scan density and high incident angle of laser beams. Second, physical barriers such as interior walls cause occluded areas where the TLS is not able to scan for surface flatness inspection. To address these limitations, this study presents a new method that employs flat mirrors to increase the measurement range with acceptable measurement accuracy and make possible the scanning of occluded areas even when the TLS is out of sight. To validate the proposed method, experiments on two laboratory-scale specimens are conducted, and the results show that the proposed approach can enlarge the scan range from 5 m to 10 m. In addition, the proposed method is able to address the occlusion problem of the previous methods by changing the laser beam direction. Based on these results, it is expected that the proposed technique has the potential for accurate and efficient surface flatness inspection in the construction industry.
Introduction
Surface flatness is essential for dimensional quality inspection of construction elements and floors both during and after the manufacturing and construction stages in the construction industry [1][2][3][4].This is because concrete surfaces with unacceptable deviations exceeding their specific tolerances may adversely affect both aesthetic and functional performances of the structure [3,5,6].In addition, non-flat concrete surfaces may result in a poor connection between adjacent construction elements, leading to long term structural problems [3].For these reasons, it is necessary to conduct a surface flatness inspection to evaluate the flatness quality of the target structure.Currently, the surface flatness of concrete surfaces is commonly measured using straightedges or the F-numbers method in the construction industry [2,7].As for the straightedge approach, inspectors use a 10 ft (3 m) long straightedge to assess surface flatness based on certain geometry patterns such as grid patterns to define the height deviation between the target surface and the straightedge [2].The F-numbers method measures the elevation differences between a pair of points sampled on the surface using inclinometers or longitudinal differential floor profilometers [7,8].However, these two traditional flatness assessment methods are performed manually or by contact-type devices, which are time-consuming, labor-intensive and prone to human errors.As an alternative, terrestrial laser scanner (TLS) has been widely used and considered as a promising 3D data acquisition technology for concrete surface inspection because of the nature of noncontact and accurate measurement [9][10][11][12][13][14][15][16][17].Nevertheless, there are still limitations according to previous studies in order to ensure the feasibility of the TLS-based surface flatness inspection.To be specific, one issue is that areas far away from the TLS normally suffer from inaccurate measurement caused by low scan density and high incident angle of laser beams.Another issue is caused by physical barriers such as interior walls, which may generate occluded areas where the TLS is not able to scan for surface flatness inspection.
To address these limitations of the current laser scanning approaches, this study presents a mirror-aided laser scanning technique.This paper is organized as follows: First, research background, including current practices of surface flatness inspection and a literature review are presented in Section 2. Two hypotheses that the proposed method would bring about alternatives for current limitations are introduced in Section 3, followed by the experimental design for validating the hypotheses in Section 4. Key results and related discussions are presented in Sections 5 and 6.Finally, this paper ends with a summary and suggestions for further work in Section 7.
Research Background 2.1. Current Practices for Surface Flatness Inspection
Surface flatness, often called surface regularity, presents the deviations in elevation of the surfaces [1], as shown in Figure 1.There are two common methods widely used for surface flatness inspection, which are the straightedge method [2] and the F-numbers method [2,7].
filometers [7,8].However, these two traditional flatness assessment methods are performed manually or by contact-type devices, which are time-consuming, labor-intensive and prone to human errors.As an alternative, terrestrial laser scanner (TLS) has been widely used and considered as a promising 3D data acquisition technology for concrete surface inspection because of the nature of noncontact and accurate measurement [9][10][11][12][13][14][15][16][17].Nevertheless, there are still limitations according to previous studies in order to ensure the feasibility of the TLS-based surface flatness inspection.To be specific, one issue is that areas far away from the TLS normally suffer from inaccurate measurement caused by low scan density and high incident angle of laser beams.Another issue is caused by physical barriers such as interior walls, which may generate occluded areas where the TLS is not able to scan for surface flatness inspection.
To address these limitations of the current laser scanning approaches, this study presents a mirror-aided laser scanning technique.This paper is organized as follows: First, research background, including current practices of surface flatness inspection and a literature review are presented in Section 2. Two hypotheses that the proposed method would bring about alternatives for current limitations are introduced in Section 3, followed by the experimental design for validating the hypotheses in Section 4. Key results and related discussions are presented in Section5 and Section 6.Finally, this paper ends with a summary and suggestions for further work in Section 7.
Current Practices for Surface Flatness Inspection
Surface flatness, often called surface regularity, presents the deviations in elevation of the surfaces [1], as shown in Figure 1.There are two common methods widely used for surface flatness inspection, which are the straightedge method [2] and the F-numbers method [2,7].As for the straightedge method [2], the primary steps are as follows.First, surveyors are required to randomly locate a straightedge with a length of 10 ft (3 m) at different locations on the surface.Then, the deviations of elevation between the straightedge and the surface are measured at the picked points using a stainless-steel slip.The deviations are then compared to the as-designed elevations to evaluate the surface flatness.Table 1 shows the tolerances of deviations in elevation using the straightedge method for concrete slabs specified in American Concrete Institute (ACI) 117 [2].For example, the tolerance of 6 mm is specified for a concrete slab classified as "flat".However, the straightedge method is performed manually and requires direct contact with the target surface, which is timeconsuming and error-prone.In addition, there is no specification on the method of how to locate the straightedge on the surface, resulting in difficulties in practically implementing the method.
Non-flat
On the other hand, the F-numbers method [2,7] contains two ratings according to the ASTM E 1155 [7], which are floor flatness (FF) numbers and floor levelness (FL) numbers.The FF numbers measure the degree to which a surface approximates a plane, whereas the As for the straightedge method [2], the primary steps are as follows.First, surveyors are required to randomly locate a straightedge with a length of 10 ft (3 m) at different locations on the surface.Then, the deviations of elevation between the straightedge and the surface are measured at the picked points using a stainless-steel slip.The deviations are then compared to the as-designed elevations to evaluate the surface flatness.Table 1 shows the tolerances of deviations in elevation using the straightedge method for concrete slabs specified in American Concrete Institute (ACI) 117 [2].For example, the tolerance of 6 mm is specified for a concrete slab classified as "flat".However, the straightedge method is performed manually and requires direct contact with the target surface, which is time-consuming and error-prone.In addition, there is no specification on the method of how to locate the straightedge on the surface, resulting in difficulties in practically implementing the method.
On the other hand, the F-numbers method [2,7] contains two ratings according to the ASTM E 1155 [7], which are floor flatness (F F ) numbers and floor levelness (F L ) numbers.The F F numbers measure the degree to which a surface approximates a plane, whereas the F L numbers depict the conformity of the floor surface to the intended slop indicated in the design documents.The F F numbers method is discussed in the following since it measures the degree of flatness for a surface.Figure 2 illustrates the calculation of F F numbers for surface flatness inspection [2].First, sample measurement lines on the surface are created.Second, each sample measurement line is sampled into 300-mm long intervals, and the cut points are called "sample reading points".Third, the elevations of the sample points of the sample measurement lines are calculated.Note that the elevation of the sample points is presented by the height (z coordinate) of each scan point collected on the test surface.Finally, the F F numbers, statistical numbers taking into account curvatures between all the sample points, are computed.Table 1 shows the flatness tolerances of concrete slabs specified in ACI 117 [2] for different flatness levels.Normally, a higher F F number indicates a flatter surface.For example, a floor classified as "very flat" has an F F number ranging from 35 to 45, while the F F number of a floor regarded as "Conventional" is smaller than 20.Compared to the straightedge method, the F F numbers method is advantageous in two aspects.First, the sample points are distributed throughout the whole surface as specified in the F F numbers method, thereby covering and reflecting the elevation deviations globally.In addition, the interval of 300 mm between two sample points is much smaller compared to that of the straightedge method, resulting in a more accurate surface flatness measurement.For these reasons, the F F numbers method is selected as the measurement of surface flatness in this study and the detailed steps are presented in Section 4. FL numbers depict the conformity of the floor surface to the intended slop indicated in the design documents.The FF numbers method is discussed in the following since it measures the degree of flatness for a surface.Figure 2 illustrates the calculation of FF numbers for surface flatness inspection [2].First, sample measurement lines on the surface are created.Second, each sample measurement line is sampled into 300-mm long intervals, and the cut points are called "sample reading points".Third, the elevations of the sample points of the sample measurement lines are calculated.Note that the elevation of the sample points is presented by the height (z coordinate) of each scan point collected on the test surface.Finally, the FF numbers, statistical numbers taking into account curvatures between all the sample points, are computed.Table 1 shows the flatness tolerances of concrete slabs specified in ACI 117 [2] for different flatness levels.Normally, a higher FF number indicates a flatter surface.For example, a floor classified as "very flat" has an FF number ranging from 35 to 45, while the FF number of a floor regarded as "Conventional" is smaller than 20.Compared to the straightedge method, the FF numbers method is advantageous in two aspects.First, the sample points are distributed throughout the whole surface as specified in the FF numbers method, thereby covering and reflecting the elevation deviations globally.In addition, the interval of 300 mm between two sample points is much smaller compared to that of the straightedge method, resulting in a more accurate surface flatness measurement.For these reasons, the FF numbers method is selected as the measurement of surface flatness in this study and the detailed steps are presented in Section 4. Commercial TLSs normally have a measurement accuracy of around ±3 mm within the measurement distance of 20 m [18].Therefore, by adopting commercial TLSs, the laser scanning approach would not provide an accurate and robust flatness inspection for the flatness types of "very flat" and "super flat" due to the corresponding tolerances of deviations in elevation for the types being less than 6 mm according to Table 1.Therefore, one surface flatness type in this study, "conventional," which has a deviation of 13 mm in elevation and FF number of less than 20, is used to evaluate the applicability of the proposed mirror-aided approach.In addition, considering the measurement accuracy (±3 mm) of the TLS used in this study, the estimation error of 20%, which is corresponding to 2.6 mm in deviations of elevation for the "conventional" surface type, is used as the threshold to evaluate the effectiveness of the proposed method.In other words, it is classified as an "accurate" measurement if an estimation error is less than 20% compared to Commercial TLSs normally have a measurement accuracy of around ±3 mm within the measurement distance of 20 m [18].Therefore, by adopting commercial TLSs, the laser scanning approach would not provide an accurate and robust flatness inspection for the flatness types of "very flat" and "super flat" due to the corresponding tolerances of deviations in elevation for the types being less than 6 mm according to Table 1.Therefore, one surface flatness type in this study, "conventional," which has a deviation of 13 mm in elevation and F F number of less than 20, is used to evaluate the applicability of the proposed mirror-aided approach.In addition, considering the measurement accuracy (±3 mm) of the TLS used in this study, the estimation error of 20%, which is corresponding to 2.6 mm in deviations of elevation for the "conventional" surface type, is used as the threshold to evaluate the effectiveness of the proposed method.In other words, it is classified as an "accurate" measurement if an estimation error is less than 20% compared to the ground-truth F F number.
Surface Flatness Inspection Methods Using TLS
Several studies using TLS for the surface flatness inspection on floors and concrete slabs have been introduced.The previous studies can be divided into two categories: (1) surface flatness inspection following some guidelines such as the F-numbers method [19,20] and (2) surface flatness inspection focusing on visual representation without following the standard documents to reflect surface flatness conditions [5,6,8,14,21].
As for the former type of studies using the guidelines, Bosché and Guenet [20] proposed an approach that compares an as-built model generated from a TLS with the asdesigned model to inspect the surface flatness.In the study, the scanned data were first acquired and aligned with the as-designed model using the features of orthogonal distance and surface normal similarity.Then, the surface flatness was computed using the straightedge method.The results showed a better performance of the TLS-based inspection in efficiency, reporting up to 50% inspection time compared to the manual-based method.Wang et al. [19] proposed a TLS-based surface flatness inspection approach for precast components based on the F F numbers method.Experimental tests on two lab-scale specimens showed an estimation error of less than 8% in F F numbers measurement.It was also found that the surface distortion, including warp and bowing, can deteriorate the accuracy of measurement of F F numbers for surface flatness inspection.From the studies, it can be found that the TLS-based surface flatness inspection presents an efficient and accurate surface flatness inspection compared to manual-based methods.However, these studies are focused on simple structures, and there is no study investigating scan coverage of the TLS-based method.In addition, the occlusion problems caused by physical barriers such as interior walls have not been explored in these studies.
A large number of studies performing surface flatness inspection without standard guidelines have been proposed; Shih et al. [5] developed a technique to check the surface flatness of finished walls using a TLS.A plane generated from the collected point cloud data on the finished wall was first computed and set as a reference plane, and was followed by slicing the collected data per centimeter to visualize the flatness levels with different colors with respect to the reference plane.However, the method presented in the study used a one-centimeter size of slicing, which is too sparse to perform accurate surface flatness inspection on the finished wall.Li et al. [6] presented another study for slabs and floors.In the study, similar to [5], elevation deviations between each scan point and the fitted plane generated from the collected scan points were calculated, and a color-coded deviation map was then used to display the elevation deviations with different colors.Experimental studies conducted on an exterior wall panel showed that more than 80% of the scan points on the surface are within the allowable tolerance of 8 mm [22].In contrast to the previous similar study [5] that measured the surface flatness with a relatively large slicing size of a centimeter, the study uses individual scan points to perform a more dense and accurate surface flatness inspection.However, the study lacks the investigation on the maximum area that can be covered by the proposed method for surface flatness inspection.Bosche and Biotteau [8] proposed a new method that processed point cloud data of concrete slabs in the frequency domain using the continuous wavelet transform (CWT) method [23].
Comparison tests between the CWT method and the waviness index (WI) method [24] were conducted, and it was found that the CWT method offers a more precise localization of non-flat areas of the concrete slabs due to its dense 3D measurement.However, the study utilizes the CWT method over multiple one-dimensional (1D)-survey lines from the cross-section view of the concrete slabs, which may not accurately reflect the actual flatness condition of the entire floor.To address the limitation of the study in [8], Puri et al. [21] proposed an approach that uses the CWT method in the two-dimensional (2D) domain instead of using a one-dimensional (1D) domain.Based on the comparison tests, the 2D based CWT method was proved to perform better than the 1D based CWT method.However, the study only assesses the flatness accuracy without using ground-truth value; thus, the actual performance of the proposed method is not guaranteed.Lastly, Tang et al. [14] analyzed and compared three different algorithms for estimating concrete surface flatness deviations using point cloud data.Three algorithms, including range filtering, deviation filtering and sliding window, were formalized and implemented for surface flatness defect detection.The results showed that it is possible to detect surface flatness as small as 3 cm across and 1 mm thick with a scanning range of 20 m.However, the study lacks further investigation on the effects of scanning parameters, including incident angle and scanning distance, on the surface flatness inspection.
In summary, although the previous TLS-based surface flatness inspection shows potential for accurate and reliable surface flatness inspection compared to the contact-type manual practices, there are still limitations in two aspects.First, there has been no study on how far the laser scanning approach can cover the surface flatness inspection with acceptable measurement accuracy.Second, there have been few studies that discuss the limitation of occlusion problems, which inevitably occurred in the existence of physical barriers such as interior walls.To investigate such issues, this study presents a new approach that employs flat mirrors to increase the scan coverage range and enable flatness measurement even in occluded areas.
Research Hypotheses
This study proposes two hypotheses to enlarge the scanning range, increase the measurement accuracy in the long-range area far away from the TLS, and enable flatness measurement in hidden areas, as illustrated in Figure 3. Figure 3a illustrates hypothesis 1.
Here, the scanning area far away from the TLS with a light gray color has a large scanning distance and a high incident angle (θ), resulting in a relatively low flatness measurement performance.With the implementation of the mirror-aided method, the laser beam emitted from the TLS is first reflected by the mirror and then reaches the long-range scanning area, leading to a lower incident angle (β) and higher surface flatness inspection accuracy.Since the study targets the front area on the basis of the mirror, the occluded areas behind the mirror are not considered.In addition, only virtual scan points generated by the mirror are used in this study due to the existence of a high incident angle problem for actual (direct) scan points, although the area under the mirror is scanned twice.With the help of using mirrors, the areas far from the TLS can be scanned with high accuracy, indicating that the proposed method can enlarge the scan range.Note that the long rang area (marked as light gray color) is regarded as the enlarged area.
Note that there are rectangular patches attached to the surface of the flat mirror to be used for estimating the mirror plane using scan points falling onto the patches.Based on the mirror reflection principle [11], the scan points of the long-range area will be located on the virtual surface.In short, as the proposed method can have scan points with low incident angle, it can enlarge the scanning area and increase the surface flatness inspection efficiency.Figure 3b illustrates hypothesis 2 that the mirror-aided technique can address the occlusion problems caused by barriers.The construction elements such as the interior walls are likely to cause occlusion of the floor, which limits the scanning area of the TLS.Therefore, it requires multiple scans to perform the data collection, which deteriorates the surface flatness inspection efficiency.On the other hand, as the mirror can adjust the direction of the laser beam, the occlusion area of the floor can be scanned.Therefore, the mirror-aided approach can tackle the current limitation of the TLS-based surface flatness inspection.
Experimental Setup
Two experiments, named "experiment I" and "experiment II," were conducted to validate the two hypotheses.The specific objectives of the experiments are to investigate: (1) the capability of the proposed mirror-aided approach to increase the surface flatness inspection accuracy at the areas far from the TLS and (2) the possibility of scanning occluded areas with an acceptable measurement accuracy with the existence of physical barriers for surface flatness inspection.Here, since there is no need to scan a large area of the surface for validating the accuracy at the far-field area, two lab-scale specimens were used in this study.Figure 4 shows the two specimens, named "specimen I" and "specimen II,"
Experimental Configuration 4.1. Experimental Setup
Two experiments, named "experiment I" and "experiment II," were conducted to validate the two hypotheses.The specific objectives of the experiments are to investigate: (1) the capability of the proposed mirror-aided approach to increase the surface flatness inspection accuracy at the areas far from the TLS and (2) the possibility of scanning occluded areas with an acceptable measurement accuracy with the existence of physical barriers for surface flatness inspection.Here, since there is no need to scan a large area of the surface for validating the accuracy at the far-field area, two lab-scale specimens were used in this study.Figure 4 shows the two specimens, named "specimen I" and "specimen II," which are classified under the "conventional" flatness type.Table 2 illustrates the dimensions and F F numbers of the specimens.The specimens were manufactured by a 3D printer, ZRAPID iSLA880 [25], with the material of photopolymer resin.While specimen I has the dimensions of 400 mm (length) × 400 mm (width) × 10-23 mm (height) with the F F number of 10.28, specimen II has the dimensions of 400 mm × 400 mm× 20-38 mm with the F F number of 21.23.Note that according to ACI 117 [2], specimen II is designed to be flatter than specimen I.
Remote Sens. 2021, 13, x FOR PEER REVIEW 7 of 17 which are classified under the "conventional" flatness type.Table 2 illustrates the dimensions and FF numbers of the specimens.The specimens were manufactured by a 3D printer, ZRAPID iSLA880 [25], with the material of photopolymer resin.While specimen I has the dimensions of 400 mm (length) × 400 mm (width) × 10-23 mm (height) with the FF number of 10.28, specimen II has the dimensions of 400 mm × 400 mm× 20-38 mm with the FF number of 21.23.Note that according to ACI 117 [2], specimen II is designed to be flatter than specimen I.
Items
Size (Length × Width × Height) FF Numbers Specimen I 400 mm×400 mm×10-23 mm 10.28 Specimen II 400 mm×400 mm×20-38 mm 21.23 Figure 5 illustrates the test configuration of experiment I.A phase-shift TLS, FARO M70 [18], with a measurement error in distance deviation of ±3 mm within scanning distance of 20 m, was used to acquire scan points of the specimen.The height distance of the TLS was set to 1.5 m, and the scanning distance was adjusted from 2.5 m to 12.5 m with an interval of 2.5 m in order to investigate the effects of the scanning distance on the accuracy of surface flatness inspection.Moreover, a flat mirror with the dimensions of 1000 mm (length) × 1000 mm (height) was used to reflect laser beams to the specimens, and the flat mirror was located with a vertical angle of 75° to the ground for the laser beams to project a low incident angle on the surfaces of the two specimens.Furthermore, the mirror bottom line is located 0.5 m away from the backside of the specimen.In order to extract the mirror plane, two rectangular patches of 100 mm × 100 mm were attached to the upper-side region of the mirrors.In addition, two different angular resolutions of 0.036° and 0.072° were employed for the tests.Figure 5 illustrates the test configuration of experiment I.A phase-shift TLS, FARO M70 [18], with a measurement error in distance deviation of ±3 mm within scanning distance of 20 m, was used to acquire scan points of the specimen.The height distance of the TLS was set to 1.5 m, and the scanning distance was adjusted from 2.5 m to 12.5 m with an interval of 2.5 m in order to investigate the effects of the scanning distance on the accuracy of surface flatness inspection.Moreover, a flat mirror with the dimensions of 1000 mm (length) × 1000 mm (height) was used to reflect laser beams to the specimens, and the flat mirror was located with a vertical angle of 75 • to the ground for the laser beams to project a low incident angle on the surfaces of the two specimens.Furthermore, the mirror bottom line is located 0.5 m away from the backside of the specimen.In order to extract the mirror plane, two rectangular patches of 100 mm × 100 mm were attached to the upper-side region of the mirrors.In addition, two different angular resolutions of 0.036 • and 0.072 • were employed for the tests.
Figure 6 shows the test configuration of experiment II.The TLS was set with a height of 2.5 m with respect to the ground and was located with a distance of 1.8 m and 1.2 m to the mirror and the specimen, respectively.Moreover, barriers were erected between the TLS and the specimen to create an environment that ensured that the specimen was invisible from the TLS.In addition, a mirror with the size of 1000 mm × 1000 mm was located with a distance of 0.2 m behind the specimen.The mirror on the bottom line is set parallel to the specimen, positioned with a vertical angle of 70 • with respect to the ground.As with experiment I, two rectangular patches of 100 mm × 100 mm in size were attached to the upper-side region of the mirrors for mirror plane estimation, and two angular resolutions of 0.036 • and 0.072 • were used for the tests.Figure 6 shows the test configuration of experiment II.The TLS was set with a height of 2.5 m with respect to the ground and was located with a distance of 1.8 m and 1.2 m to the mirror and the specimen, respectively.Moreover, barriers were erected between the TLS and the specimen to create an environment that ensured that the specimen was invisible from the TLS.In addition, a mirror with the size of 1000 mm × 1000 mm was located with a distance of 0.2 m behind the specimen.The mirror on the bottom line is set parallel to the specimen, positioned with a vertical angle of 70° with respect to the ground.As with experiment I, two rectangular patches of 100 mm × 100 mm in size were attached to the upper-side region of the mirrors for mirror plane estimation, and two angular resolutions of 0.036° and 0.072° were used for the tests.
Data Processing
Data processing was composed of three steps: (1) data preprocessing, (2) virtual scan points transformation, and (3) surface flatness calculation.The details of each step are described in the following sections.
Data Processing
Data processing was composed of three steps: (1) data preprocessing, (2) virtual scan points transformation, and (3) surface flatness calculation.The details of each step are described in the following sections.
Data Preprocessing
This step aimed not only to remove background noise but also to extract the scan points of the specimens and rectangular patches attached to the mirror.Figure 7 shows the results.Note that there were three different types of raw scan data, including (1) background noise scan points, (2) specimen scan points and (3) rectangular patch scan points.It is worth noting that the background noise scan points contained two different noises, including ground noise and noise reflected by the mirror.To execute this data preprocessing step, the density-based spatial clustering of applications with noise (DBSCAN) algorithm [26] was first applied to the raw scan points to remove background noise based on the assumption that the scan data of the ground noise and noise reflected by the mirror were the two biggest clusters (named "1st" and "2nd" largest clusters) of the raw scan points as shown in Figure 7a.Next, after the removal of the background noise, the actual scan points and virtual scan points of the specimen were extracted and separated based on the assumptions that they become the 3rd and 4th largest scan clusters and the fitted plane of the actual scan points was nearly parallel to the ground plane.After this, the scan data sets corresponding to the rectangular patch attached to the mirror, extracted as the 5th largest cluster in the raw data, were finally classified.
Data Processing
Data processing was composed of three steps: (1) data preprocessing, (2) virtual scan points transformation, and (3) surface flatness calculation.The details of each step are described in the following sections.
Data Preprocessing
This step aimed not only to remove background noise but also to extract the scan points of the specimens and rectangular patches attached to the mirror.Figure 7 shows the results.Note that there were three different types of raw scan data, including (1) background noise scan points, (2) specimen scan points and (3) rectangular patch scan points.It is worth noting that the background noise scan points contained two different noises, including ground noise and noise reflected by the mirror.To execute this data preprocessing step, the density-based spatial clustering of applications with noise (DBSCAN) algorithm [26] was first applied to the raw scan points to remove background noise based on the assumption that the scan data of the ground noise and noise reflected by the mirror were the two biggest clusters (named "1st" and "2nd" largest clusters) of the raw scan points as shown in Figure 7a.Next, after the removal of the background noise, the actual scan points and virtual scan points of the specimen were extracted and separated based on the assumptions that they become the 3rd and 4th largest scan clusters and the fitted plane of the actual scan points was nearly parallel to the ground plane.After this, the scan data sets corresponding to the rectangular patch attached to the mirror, extracted as the 5th largest cluster in the raw data, were finally classified.
Virtual Scan Points Transformation
This step aimed to transform the virtual scan points of the surface of the specimen to the position of the actual scan points.Here, the virtual scan points were transformed to the position of the actual scan points, which was parallel to the ground plane to facilitate
Virtual Scan Points Transformation
This step aimed to transform the virtual scan points of the surface of the specimen to the position of the actual scan points.Here, the virtual scan points were transformed to the position of the actual scan points, which was parallel to the ground plane to facilitate the surface flatness inspection.Note that the virtual scan points collected from the mirror were compared with the actual scan points collected directly from the TLS to validate the effectiveness of the mirror-aided approach.
Due to the incompleteness of the DBSCAN algorithm, the extracted virtual scan points of the specimen contain outliers caused by the mixed pixels [27].To remove the mixed-pixel outliers, the Random sample consensus (RANSAC) algorithm [28] was used, as shown in Figure 8a.Then, a mirror plane was estimated based on the scan points of the rectangular patches using the least fitting algorithm [29].Once the mirror plane was generated, the virtual scan points of the top surface were finally transformed to the location of the actual scan points; this was based on the fact that the coordinates of the virtual and actual scan points were symmetric to each other with respect to the mirror plane.For each virtual scan point v (x 1 , y 1 , z 1 ), the 3D coordinates of the transformed scan point v (x 2 , y 2 , z 2 ) was calculated using Equation ( 1).Note that A, B, C and D are the coefficients of the mirror plane and the transformed virtual scan points are shown in Figure 8b.
Virtual Scan Points Transformation
This step aimed to transform the virtual scan points of the surface of the specimen to the position of the actual scan points.Here, the virtual scan points were transformed to the position of the actual scan points, which was parallel to the ground plane to facilitate the surface flatness inspection.Note that the virtual scan points collected from the mirror were compared with the actual scan points collected directly from the TLS to validate the effectiveness of the mirror-aided approach.
Due to the incompleteness of the DBSCAN algorithm, the extracted virtual scan points of the specimen contain outliers caused by the mixed pixels [27].To remove the mixed-pixel outliers, the Random sample consensus (RANSAC) algorithm [28] was used, as shown in Figure 8a.Then, a mirror plane was estimated based on the scan points of the rectangular patches using the least fitting algorithm [29].Once the mirror plane was generated, the virtual scan points of the top surface were finally transformed to the location of the actual scan points; this was based on the fact that the coordinates of the virtual and actual scan points were symmetric to each other with respect to the mirror plane.For each virtual scan point ( , , ) , the 3D coordinates of the transformed scan point ′ ( , , ) was calculated using Equation ( 1).Note that , , and are the coefficients of the mirror plane and the transformed virtual scan points are shown in Figure 8b.
Surface Flatness Calculation
Once the virtual scan points transformation was conducted, the flatness of the specimen surface was then calculated using the F F numbers method, as illustrated in Figure 2. The detailed procedure of computing the F F numbers is presented in ASTM E 1155 [7].
Results
In order to investigate the feasibility of the proposed method based on the test results of experiment I, discrepancies between the ground-truth F F numbers and the estimated F F numbers from the TLS are presented in Table 3.Note that as mentioned earlier in Section 2, the estimation error of 20%, which is equal to 2.6 mm in deviations of elevation, was used as the threshold in this study to assess the measurement performance of surface flatness.There are three distinctive findings as follows: First, virtual scan points collected from the mirror has a high estimation accuracy compared to actual scan points for both specimens I and II. Figure 9 shows the comparison results.It can be observed that the combination of long scanning range and large angular resolution yields high estimation errors that are larger than 20% in most cases.On the other hand, the F F numbers estimation error for specimen I and specimen II are 20.5% and 12.4% on average within the distance of 10 m using the virtual scan points.This is because that the capability of the mirror can decrease the incident angle of laser beams to the ground by adjusting the vertical mirror angle.Therefore, scan density is increased due to the low incident angle of laser beams, resulting in the acquisition of large numbers of scan points.Therefore, it can be concluded that the proposed mirror-aided method can increase the flatness measurement accuracy in large distances.Average estimation error for Specimen I VirtualScanPoint@0.036°VirtualScanPoint@0.072°ActualScanPoint<EMAIL_ADDRESS>error of 20% Estimation error of 20% age error (%) Second, scan density largely affects the flatness measurement performance.Figure 10 shows the effect of scan density on the F F number estimation errors.Note that the data density is defined as the number of scan points falling in the unit area of cm 2 .One particular result shows that the estimation error is decreased from 211.1% to 3.9% as the scan density increases from 0.3 pts/cm 2 to 87.7 pts/cm 2 .In addition, one noticeable observation is that the F F numbers are unable to be computed in cases of the scan density being less than 0.2 pts/cm 2 .Considering the tolerance of 20% relative error set in this study, an accurate flatness measurement is assured when the target surface has a scan density of 6.7 pts /cm 2 .Therefore, the scan density should be checked to ensure an accurate surface flatness inspection.VirtualScanPoint@0.036°VirtualScanPoint@0.072°ActualScanPoint<EMAIL_ADDRESS>error of 20% Estimation error of 20% Average error (%) Third, specimen II achieves more accurate F F number estimation results than specimen I in most cases, as shown in Figure 11, indicating that the mirror-aided approach is more robust for the flatness inspection of flatter surface.This phenomenon is caused by the fact that specimen I with a non-flat surface, as can be seen in Figure 4, is more likely to suffer from the effect of the high incident angle compared to specimen II with a relatively flatter surface.Normally, as common concrete surfaces of offices and laboratories are classified into "conventional", which have similar F F numbers as specimen II within the range from 20 to 25 [2], the proposed mirror-aided approach is expected to be effective for floor surface flatness inspection.
Regarding the test results of experiment II, Table 4 shows the mirror-aided F F number estimation errors under varying angular resolutions.The average errors for specimen I and specimen II in percentage were 14.3% and 11.1%, respectively, demonstrating the applicability of the proposed mirror-aided method for hypothesis 2. This positive outcome is attributed to the fact that the mirror changes the laser beam direction to ensure that the occluded areas by the barrier are scanned.Similar to the results of experiment I, the F F numbers estimation accuracy increased as the angular resolution decreases for both specimen I and specimen II due to the scan density effect.In addition, specimen II offers a more accurate F F numbers estimation because the flatter surface of specimen II is more robust to the incident angle influence compared to the non-flatter surface of specimen I, which is similar to the results of experiment I.In summary, the mirror-aided approach was able to address the occlusion problem caused by construction elements such as interior walls.
Mirror Location and Mirror Size for Performing Surface Flatness Inspection
When applying the mirror-aided approach on sites, it is required to determine t mirror position and mirror.From the results stated in experiment I, a scan density of 6 pts/cm 2 was required to guarantee a successful flatness inspection.In addition, based Figure 9a, the acceptable scanning range of TLS without the mirrors was limited to 5 while the mirror-aided approach was able to enlarge the scanning range up to 10 m. O issue when using the proposed method was that a large-scale mirror was necessary scan the range between 5 m and 10 m.However, large-scale mirrors are fragile and cu bersome, so their usage is not optimal in manufacturing or construction environments.tackle this limitation, a small-scale mirror that could rotate along the vertical axis cou be used, as shown in Figure 12.Based on the geometrical relationship model develop in [31], the optimal mirror position, mirror size and mirror rotation angle could be d termined to cover the scan range between 5 m and 10 m.
Table 6 shows the results of the determination of the mirror rotation angle for t scan area between 5 m and 10 m. Figure 12 also shows the results in a side view.Assumi the height distance of the TLS as 1.5 m, a flat mirror with the dimensions of 1800 m (length) was selected and positioned on the rotating axis for mirror rotation.Two mirr positions were determined at 7.5 m and 10 m away from the TLS, respectively, to cov the areas from 5 m to 10 m.Here, the positions of the two mirrors were 1.0 m and 2.0 above the surface, respectively.As for the mirror located at 7.8 m away from the TLS mirror angle of 68° was determined to cover the area from 5.0 m to 7.8 m with a sc density of 8.2 pts/cm 2 on average.Moreover, as for the mirror located at 10 m away fro the TLS, a rotation angle of 55 o was computed as the optimal angle to had a scan dens of 11.1 pts/cm 2 .With the two mirrors, the scan area within the of 10 m could be fu
Discussion
To further identify the effectiveness of the proposed method, further studies were conducted in two aspects, which are (1) investigation on the performance of combined scan points of actual and virtual scan points, and (2) determination of mirror location and mirror size for optimal surface flatness inspection.
Performance Comparison with Combined Scan Points
The actual scan points, which have a large incident angle and low scan density, can be merged with the virtual scan points to increase the scan density.For this reason, the actual scan points and virtual scan points are aligned together using an iterative closest point ICP)-based algorithm [30].Table 5 and Figure 11 shows the comparison results of scan density and F F number estimation error of specimen I and II, respectively, with varying angular resolutions and scanning distances.A distinctive trend shows that using the virtual scan points exclusively shows the best performance of flatness inspection for the long-range area among the three different types of scan points, although the combined scan points have the largest scan density of 34.0 pts/cm 2 and 33.6 pts/cm 2 for specimen I and specimen II, respectively.This phenomenon is attributed to the alignment errors of the ICP algorithm that deteriorates the F F numbers estimation accuracy of the combined scan points.Therefore, it is suggested for construction engineers and inspectors to conduct surface flatness inspection directly using virtual scan points instead of combined scan points, although the combined scan points have a higher scan density.When applying the mirror-aided approach on sites, it is required to determine the mirror position and mirror.From the results stated in experiment I, a scan density of 6.7 pts/cm 2 was required to guarantee a flatness inspection.In addition, based on Figure 9a, the acceptable scanning range of TLS without the mirrors was limited to 5 m, while the mirror-aided approach was able to enlarge the scanning range up to 10 m.One issue when using the proposed method was that a large-scale mirror was necessary to scan the range between 5 m and 10 m.However, large-scale mirrors are fragile and cumbersome, so their usage is not optimal in manufacturing or construction environments.To tackle this limitation, a small-scale mirror that could rotate along the vertical axis could be used, as shown in Figure 12.Based on the geometrical relationship model developed in [31], the optimal mirror position, mirror size and mirror rotation angle R could be determined to cover the scan range between 5 m and 10 m.
Table 6 shows the results of the determination of the mirror rotation angle for the scan area between 5 m and 10 m. Figure 12 also shows the results in a side view.Assuming the height distance of the TLS as 1.5 m, a flat mirror with the dimensions of 1800 mm (length) was selected and positioned on the rotating axis for mirror rotation.Two mirror positions were determined at 7.5 m and 10 m away from the TLS, respectively, to cover the areas from 5 m to 10 m.Here, the positions of the two mirrors were 1.0 m and 2.0 m above the surface, respectively.As for the mirror located at 7.8 m away from the TLS, a mirror angle of 68 • was determined to cover the area from 5.0 m to 7.8 m with a scan density of 8.2 pts/cm 2 on average.Moreover, as for the mirror located at 10 m away from the TLS, a rotation angle of 55 • was computed as the optimal angle to had a scan density of 11.1 pts/cm 2 .With the two mirrors, the scan area within the range of 10 m could be fully covered.In summary, the proposed mirror-aided approach has the potential for an accurate surface flatness inspection in the long-scan range.Even though the preparation of multiple mirrors is necessary for certain cases, the time cost is relatively small compared to the traditional method that inevitably involves several changes in scanner locations and selecting scan parameters for each scan position.In addition, registration errors that are normally problematic during the point cloud merging process did not occur in the proposed method due to the lack of a need to change the scanner location.Hence, the proposed scan planning is systemic optimization-oriented, which aims to ensure efficient data acquisition instead of generating extra work.
Conclusions
This study presents a mirror-aided technique for surface flatness inspection to address the low accuracy of the scanning area far from TLS and occlusion problems caused by barriers.In this study, two hypotheses are proposed for the mirror-aided surface flatness inspection method.First, the mirror-aided approach can increase the scan coverage range and increase the surface flatness inspection efficiency.Second, the mirror-aided approach can measure the flatness of the floors occluded by construction elements based on the mirror reflection principle with one single scan, resulting in efficient surface flatness inspection.To validate the two hypotheses, two experiments are conducted on two laboratory-scale specimens.The validation results indicate that the mirror-aided approach can adjust the incident angle of laser beams to address the low measurement caused by the low incident angle, enlarging the scan range from 5 m to 10 m.In addition, the mirroraided approach is able to address the occlusion problem caused by construction elements such as interior walls with a measurement accuracy of more than 80% on the scanning
Conclusions
This study presents a mirror-aided technique for surface flatness inspection to address the low accuracy of the scanning area far from TLS and occlusion problems caused by barriers.In this study, two hypotheses are proposed for the mirror-aided surface flatness inspection method.First, the mirror-aided approach can increase the scan coverage range and increase the surface flatness inspection efficiency.Second, the mirror-aided approach can measure the flatness of the floors occluded by construction elements based on the mirror reflection principle with one single scan, resulting in efficient surface flatness inspection.To validate the two hypotheses, two experiments are conducted on two laboratory-scale specimens.The validation results indicate that the mirror-aided approach can adjust the incident angle of laser beams to address the low measurement caused by the low incident angle, enlarging the scan range from 5 m to 10 m.In addition, the mirror-aided approach is able to address the occlusion problem caused by construction elements such as interior walls with a measurement accuracy of more than 80% on the scanning area for surface flatness inspection.From the results, the proposed technique has the potential for accurate and efficient surface flatness inspections in the construction industry.As this study focuses on surface flatness measurement, the size of the target scene is regarded as the major concern instead of the shape and complexity.Since this method can cover the surface up to a scan range of 10 m from the TLS from the test results, the proposed mirror-aided method is suitable for small and medium-size projects such as surfaces of interior rooms.There are two main applications in real projects using the proposed surface flatness inspection method.First, for those surface areas with an edge line of less than 5 m, the proposed method can improve the accuracy of surface flatness measurement by setting flat mirrors near the far-edge (5 m from the TLS) line because our method using the mirrors can reflect laser beams to have lower incident angles.Second, in the case of more than 5 m, but less than 10 m scan range from the TLS, the proposed mirror-aided approach is able to enlarge the scanning range from 5 m to 10 m, which minimizes the number of scans.The contributions of the proposed technique are: (1) develop a mirror-aided technique that increases the measurement accuracy in a long-range area far away from the TLS and enable flatness measurement in hidden areas caused by physical barriers such as interior walls, and (2) validate the applicability of the mirror-aided flatness inspection technique through validation experiments.
However, limitations remain for further study in the near future.First, the proposed mirror-aided flatness inspection method is validated on lab-scale specimens.Further validations on the field-scale are required in order to increase the applicability of the proposed technique, which involves the utilization of the rotating mirror system and an appropriate selection of the mirror location.Second, regarding the difficulty in finding the right place for placing the rotating mirrors in the dynamic and clustered environment, this study assumes that the space to be scanned is clean and spacious enough to place the rotating mirrors.Therefore, further investigation on the real applicability of dynamic and clustered scenarios such as construction sites and industrial plants can be a future direction.In addition, the proposed mirror-aided approach can be applied to the 3D reconstruction of small-scale structures with complex geometries to minimize the number of scans.
Figure 1 .
Figure 1.Illustration of flat and non-flat surfaces in the cross-section view of a slab.
Figure 1 .
Figure 1.Illustration of flat and non-flat surfaces in the cross-section view of a slab.
Figure 2 .
Figure 2. Illustration of determining FF numbers on a slab.
Figure 2 .
Figure 2. Illustration of determining F F numbers on a slab.
Remote Sens. 2021, 13, x FOR PEER REVIEW 6 of 17mirror-aided approach can tackle the current limitation of the TLS-based surface flatness inspection.
Figure 3 .
Figure 3. Illustration of the two hypotheses: (a) hypothesis 1-the proposed mirror-aided technique may increase scan coverage and inspection accuracy; and (b) hypothesis 2-the proposed mirror-aided technique may enable scanning of occlusion areas.
Figure 3 .
Figure 3. Illustration of the two hypotheses: (a) hypothesis 1-the proposed mirror-aided technique may increase scan coverage and inspection accuracy; and (b) hypothesis 2-the proposed mirroraided technique may enable scanning of occlusion areas.
Figure 4 .
Figure 4. Two test specimens for validation: (a) specimen I with the FF number of 10.28 and (b) specimen II with the FF number of 21.23.
Figure 4 .
Figure 4. Two test specimens for validation: (a) specimen I with the F F number of 10.28 and (b) specimen II with the F F number of 21.23.
Figure 5 .
Figure 5. Test configuration of experiment I (a) side view of the test set up and (b) 3D view of the setup.
Figure 5 .Figure 6 .
Figure 5. Test configuration of experiment I (a) side view of the test set up and (b) 3D view of the setup.Remote Sens. 2021, 13, x FOR PEER REVIEW 9 of 17
Figure 6 .
Figure 6.Test configuration of experiment II: (a) the top view of the setup and (b) a photo of the setup.
Figure 6 .
Figure 6.Test configuration of experiment II: (a) the top view of the setup and (b) a photo of the setup.
Figure 7 .
Figure 7. Data preprocessing results: (a) implementation result of the density-based spatial clustering of applications with noise (DBSCAN) on the raw scan points; and (b) extraction and separation of scan points corresponding to the specimen and the rectangular patch attached to the mirror.
Figure 7 .
Figure 7. Data preprocessing results: (a) implementation result of the density-based spatial clustering of applications with noise (DBSCAN) on the raw scan points; and (b) extraction and separation of scan points corresponding to the specimen and the rectangular patch attached to the mirror.
Figure 7 .
Figure 7. Data preprocessing results: (a) implementation result of the density-based spatial clustering of applications with noise (DBSCAN) on the raw scan points; and (b) extraction and separation of scan points corresponding to the specimen and the rectangular patch attached to the mirror.
Figure 8 .
Figure 8. Transformation of the virtual scan points of the top surface on the specimen: (a) removal of the mixed-pixel outliers using the RANSAC algorithm and (b) transformation of the virtual scan points to the position of the actual scan points of the top surface of the specimen.
Figure 8 .
Figure 8. Transformation of the virtual scan points of the top surface on the specimen: (a) removal of the mixed-pixel outliers using the RANSAC algorithm and (b) transformation of the virtual scan points to the position of the actual scan points of the top surface of the specimen.
Figure 9 .
Figure 9. FF number estimation errors under various angular resolutions and scanning distances: (a) average estimation error for specimen I and (b) average estimation error for specimen II.
Figure 9 .
Figure 9. F F number estimation errors under various angular resolutions and scanning distances: (a) average estimation error for specimen I and (b) average estimation error for specimen II.
Figure 9 .Figure 10 .
Figure 9. FF number estimation errors under various angular resolutions and scanning distances: (a) average estimation error for specimen I and (b) average estimation error for specimen II.
20 20 Figure 10 .
Figure 10.Estimation errors of F F numbers with varying scan densities in specimen I and II: (a) average estimation error from actual scan points and (b) average estimation error from virtual scan points.
Figure 11 .
Figure 11.Comparison of FF estimation errors among three types of scan points, including combined scan points, virtual scan points and actual scan points.
Figure 11 .
Figure 11.Comparison of F F estimation errors among three types of scan points, including combined scan points, virtual scan points and actual scan points.
Figure 12 .
Figure 12.Determination of the mirror position, mirror size and mirror rotation angles for mirror-aided approach.
Figure 12 .Table 6 .
Figure 12.Determination of the mirror position, mirror size and mirror rotation angles for mirror-aided approach.
Table 1 .
[2] numbers, its deviations of elevation and thresholds used for validation for 5 different types of concrete slabs specified in ACI 117[2].
Table 2 .
Dimensions and FF numbers of the specimens.
Table 2 .
Dimensions and F F numbers of the specimens.
Table 3 .
Estimation errors for F F numbers under varying angular resolutions and scanning distances.
Table 3 .
Estimation errors for FF numbers under varying angular resolutions and scanning distances.
Table 4 .
F F number estimation errors under varying angular resolutions for specimens with occlusion problem.
Table 5 .
Scan density varying angular resolutions with different scanning distances of combined scan points, virtual scan points and actual scan points.
Table 6 .
Determination of the mirror rotation angle for the scan area to be enlarged. | 12,899.6 | 2021-02-15T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
ON BERNSTEIN–KANTOROVICH INVARIANCE PRINCIPLE IN H ¨OLDER SPACES AND WEIGHTED SCAN STATISTICS
. Let ξ n be the polygonal line partial sums process built on i.i.d. centered random variables X i , i ≥ 1. The Bernstein-Kantorovich theorem states the equivalence between the finiteness of E | X 1 | max(2 ,r ) and the joint weak convergence in C [0 , 1] of n − 1 / 2 ξ n to a Brownian motion W with the moments convergence of E (cid:107) n − 1 / 2 ξ n (cid:107) r ∞ to E (cid:107) W (cid:107) r ∞ . For 0 < α < 1 / 2 and p ( α ) = (1 / 2 − α ) − 1 , we prove that the joint convergence in the separable H¨older space H oα of n − 1 / 2 ξ n to W jointly with the one of E (cid:107) n − 1 / 2 ξ n (cid:107) rα to E (cid:107) W (cid:107) rα holds if and only if P ( | X 1 | > t ) = o ( t − p ( α ) ) when r < p ( α ) or E | X 1 | r < ∞ when r ≥ p ( α ). As an application we show that for every α < 1 / 2, all the α -H¨olderian moments of the polygonal uniform quantile process converge to the corresponding ones of a Brownian bridge. We also obtain the asymptotic behavior of the r th moments of some α -H¨olderian weighted scan statistics where the natural border for α is 1 / 2 − 1 /p when E | X 1 | p < ∞ . In the case where the X i ’s are p regularly varying, we can complete these results for α > 1 / 2 − 1 /p with an appropriate normalization.
Introduction
Let (Z n ) n≥1 be a sequence of random elements in some separable metric space S endowed with its Borel σ-field S .Let Z be a random element in S. Assume for notational simplicity that Z and the Z n 's are all defined on the same probability space (Ω, F , P).Then Z n converges in distribution to Z, denoted by n converges weakly to µ = P •Z −1 .This means that for every continuous bounded function f : S → R, (1.1) Relaxing the boundedness assumption on f in (1.1) leads to the classical question of convergence of moments.When S is a separable Banach space with norm , one is interested in extending the convergence in (1.1) to the case of functions satisfying for some positive constants c 1 , c 2 , r, (1. 2) It is well known that this extension is valid if and only if ( Z n r ) n≥1 is uniformly integrable (see [3], Thm.5.4) that is lim Let us note that if ( Z n r ) n≥1 is uniformly integrable, necessarily In this paper we focus on the convergence of moments in the functional central limit theorem.Let (X i ) i≥1 be an i.i.d.sequence of real valued random variables with null expectation and variance one if they exist, S n := X 1 + • • • + X n and ξ n the random polygonal line with vertices (k/n, S k ), k = 0, 1, . . ., n : From Bernstein theorem [2] it is known that for r > 0 the joint convergence where G is a Gaussian N(0, 1) random variable, is equivalent to the finiteness of E |X 1 | max{2,r} .Note also that in the case where r = 2 the convergence of the corresponding moment is trivial and that for 0 < r < 2 the convergence of E n −1/2 S n r follows immediately from E X 2 1 < ∞ by uniform integrability of n −1 S 2 n , n ≥ 1 .Let us denote by W a standard Brownian motion viewed as a random element in the space C[0, 1] of continuous functions x : [0, 1] → R endowed with the uniform norm x ∞ = sup{|x(t)|, t ∈ [0, 1]}.The classical Donsker-Prokhorov theorem provides the equivalence : For r > 0, the Bernstein-Kantorovich functional central limit theorem (see [11], Thm.11.2.1, p. 219) provides the equivalence between E |X 1 | max{2,r} < ∞ and the joint convergence It turns out that the condition E |X 1 | r < ∞ for some r > 2 provides also the convergence in distribution of n −1/2 ξ n to W in a stronger topology than the C[0, 1]'s one.Define for 0 ≤ α < 1 the Hölder space H o α [0, 1] as the set of functions x : [0, 1] → R such that endowed with the norm which makes it a separable Banach space (isomorphic to C[0, 1] in the special case α = 0).Let α ∈ (0, 1/2) and p(α) = (1/2 − α) −1 .By the necessary and sufficient condition for Lamperti's Hölderian invariance principle [12,13], we know that n −1/2 ξ n converges in distribution in the space H o α [0, 1] to the standard Brownian motion if and only if P(|X 1 | > t) = o(t −p(α) ) when t tends to infinity.When E |X 1 | p(α) < ∞, this condition is satisfied.Our first result extends the Bernstein-Kantorovich functional central limit theorem to the spaces Then the joint convergence holds if and only if It is worth noticing here that (1.9) is equivalent to the convergence of the distribution of n −1/2 ξ n to the one of W with respect to the Wasserstein distance of order r associated to the norm .α , i.e. with the mass transportation cost function c(x, y) = x − y r α , see Section 3.2 for details.An immediate consequence of Theorem 1.1 is that (1.11) (1.12) Among the various functionals f (n −1/2 ξ n ) where f satisfies a condition like (1.12) are the (powers of the) following weighted scan type statistics : We refer to [1,10] for valuable information about scan statistics and their applications.The following result is a corollary of more general results obtained in this paper (see Thms. 4.1 and 4.2).
and X 1 is regularly varying with exponent p then for any 0 ≤ r < p, and Y p has Fréchet distribution with exponent p.
The paper is organized as follows.Section 2 is devoted to preliminaries where uniform integrability, regularly varying random variables are discussed and necessary tools on Hölder spaces are presented.In Section 3, one proves Theorem 1.1 and some comments concerning the upper bound for admissible Hölder index in the Bernstein-Kantorovich invariance principle are presented.Convergence of moments of weighted scan statistics is considered in Section 4. The paper ends with an appendix devoted to some facts from Karamata theory.
Uniform integrability
Lemma 2.1.Let (Z n ) n≥1 be a sequence of random elements in the Banach space (S, ).For r > 0, ( Z n r ) n≥1 is uniformly integrable if and only if The proof is elementary and will be omitted.
Hölderian tools
Let D j denotes the set of dyadic numbers of level j in [0, 1], that is D 0 := {0, 1} and for j ≥ 1, The following sequential norm defined on H o α [0, 1] by is equivalent to the natural norm x α , see [5].Let us define also The Hölder norm of a polygonal line function is very easy to compute according to the following lemma for which we refer e.g. to [8] Lemma 3, where it is proved in a more general setting.Lemma 2.2.Let t 0 = 0 < t 1 < • • • < t n = 1 be a partition of [0, 1] and x be a real-valued polygonal line function on [0, 1] with vertices at the t i 's, i.e. x is continuous on [0, 1] and its restriction to each interval [t i , t i+1 ] is an affine function.Then for any 0 ≤ α < 1,
Regularly varying random variables
Throughout this paper we implicitly assume that all the random variables considered are defined on the same probability space (Ω, F , P) and we use the following notion of regularly varying random variable.
Definition 2.3.The random variable X is regularly varying with index p > 0 (denoted X ∈ RV p ) if there exists a slowly varying function L such that the distribution function F (t) = P(X ≤ t) satisfies the tail balance condition where a, b ∈ (0, 1) and a + b = 1.
We refer to [4] for an encyclopaedic treatment of regular variation.Writing L p or L o p,∞ for the sets of random variables X verifying respectively E |X| p < ∞ or lim t→∞ t p P(|X| > t) = 0, we note that RV p ⊂ L r , for 0 ≤ r < p, The next lemma plays a key role in our results on the scan statistics built on RV p random variables.Its proof is detailed in the Appendix.
i) For any 0 < s < p, for n large enough, uniformly in y ∈ [1, ∞).ii) For any s > p, ) for n large enough, uniformly in y ∈ [1, ∞), where In this section first we prove Theorem 1.1 and then discuss some aspects of Bernstein-Kantorovich theorem in Hölder framevork.
Proof of Theorem 1.1
The necessity of the X 1 's integrability conditions for the joint convergence (1.9) is easily seen.Indeed when r < p(α), (1.10) follows from the first convergence in (1.9), see [12].When r ≥ p(α), we note that giving the necessity of (1.11).Now let us prove that the integrability conditions (1.10) or (1.11) are sufficient for the joint convergence (1.9).By the Hölderian invariance principle [12], (1.10) or a fortiori (1.11) and by continuous mapping that n −1/2 ξ n α converges in distribution to W α .It remains to check the uniform integrability of the sequence n −1/2 ξ n r α n≥1 .It is enough to consider the case where r ≤ p(α) only.Indeed, if r > p(α), then we choose β = 1/2 − 1/r (so that r = p(β)) and notice that uniform integrability of the sequence Now to prove the uniform integrability of the sequence ( n −1/2 ξ n r α ) n≥1 we can obviously replace α by an equivalent norm.The choice of seq α seems more convenient here.So we have to prove that for r ≤ p(α), The following proof of (3.1) is essentially common to the cases r < p(α) and r = p(α) except for some nuance in the exploitation of the integrability of X 1 .From now on, we write p for p(α).
The first task in establishing (3.1) is to obtain a good estimate for Write for simplicity t k,j = k2 −j , k = 0, 1, . . ., 2 j , j = 1, 2, . . .and t k = t k,j whenever the context dispels any doubt on the value of j.It is easily seen that for any x ∈ H α such that x(0) = 0, From this we deduce that with where log denotes the logarithm with basis 2 (log 2 = 1).
In the first case we have If t k and t k+1 are in consecutive intervals, noticing that the slope of each of the two involved segments of the polygonal line is bounded in absolute value by n max 1≤i≤n |X i |, we get With both cases taken into account we obtain Noting that for j > log n, 2 j(−1+α) n 1/2 < n −1/2+α = n −1/p , this leads to To control the contribution of P 2 (n, t) when estimating the integral in (3.1), we note that for every n ≥ 1, In the case where r = p, as E ds is supposed finite, we can bound the right hand side of (3.7) by ∞ a ps p−1 P(|X 1 | > s) ds uniformly in n ≥ 1.In the case where r < p, the hypothese (1.10) implies that for some constant K depending only on the distribution of X 1 .Hence Gathering both cases we obtain that for r ≤ p, where In P 1,2 (n, t), max 0≤j≤log n 2 jα ≤ n α , so Using (3.10) we obtain that for r ≤ p, To estimate P 1,1 (n, t), we use a truncation method.Define for t > 0 and 0 < δ ≤ 1, Let S u k and S u k be the random variables obtained by replacing X i with respectively X i in S u k or with X i in S u k .We introduce also First, since on the event {max 1≤i≤n |X i | ≤ δtn 1/p }, S u k = S u k for every k, we note that (3.17) for t large enough, uniformly in n ≥ 1.By a Fubini argument, Now, in the case where r < p, using (3.8) we obtain Therefore (3.18) is satisfied for every t > t 0 not depending on n, since we can choose
.19)
The same holds in the case where r = p, replacing (3.8) by Markov's inequality and K by E |X 1 | p .Now it only remains to deal with sup n≥1 ∞ a rt r−1 P 1,1 (n, t, δ) dt.For any q > p, we have (3.20) Next we bound up E |S u k+1 − S u k | q by using the Rosenthal inequality: were C q is a universal constant, i.e. not depending on the distribution of the X i 's.As the X i s are i.i.d. and Going back to (3.20) with this bound, we obtain With all these partial estimates, the upper bound obtained for P 1,1 (n, t, δ) becomes where It is worth recalling here that E | X 1 | q depends on n, δ and t.As the first term in the upper bound (3.21) neither depends on n or on δ and goes to 0 as a tends to infinity, it remains only to investigate the asymptotic behavior of sup n≥1 I r,q (a, n) when a tends to infinity.To transform I r,q (a, n), we use the fact that if Y is a positive random variable and f a C 1 non decreasing function on [0, ∞) with f (0) = 0, then by the Fubini-Tonelli theorem, for any positive constant c, Exchanging the order of integrations in J r,q (a, n) gives where We bound J for r ≤ p, using (3.8), agreeing for simplicity that K = E |X 1 | p when r = p.This gives J ≤ K δan 1/p 0 s q−p−1 ds = K q − p a q−p δ q−p n q/p−1 . (3.23) For J , the same method would lead to a divergent integral in the special case r = p, so we restrict the use of (3.8) to the case where r < p.This gives (3.24) Going back to I r,q (a, n) and accounting (3.22)-(3.24)we obtain (recalling that δ ≤ 1) In the case r = p, bounding the integral in (3.24) by 1 p E |X 1 | p = K/p, we obtain Recapitulating all the estimate proposed throughout the proof, we see that for every a > t 0 (δ) defined by (3.19) and every n ≥ 1, We note in passing that in this case we did not really need the freedom to tune the value of δ, the simple choice δ = 1 would have done the job as well.
In the special case where r = p, we have only to modify the treatment of the last term in the bound (3.27).As q > p, for any ε > 0, we can fix a δ > 0 such that C 2 (p, q)(1 − p/q) −2 Kδ q−p < ε.Accounting (3.11), (3.13), (3.10) and (3.26), there is some a 1 depending on ε, p, q and on the distribution of X 1 , such that for every a ≥ a 1 and every n ≥ 1, As ε was arbitrary, the uniform convergence (3.1) is established and the proof is complete.
Comments
If we fix p > 2 and consider X 1 ∈ L p , then the best possible Hölderian index corresponding to the p'th moments convergence is α = α(p) := 1/2 − 1/p as shows the following result.
. By looking at the increments of n −1/2 ξ n between k/n and (k + 1)/n, 0 ≤ k < n, we see that which can be recast as It is well known that when the X i 's are i.i.d., Now choose for |X 1 | the distribution given by where (S, d) is a separable metric space, P (P1,P2) denotes the set of all probabilities on the Borel σ-field of S × S with given marginals P 1 , P 2 and c(x, y) = H(d(x, y)) where H(0) = 0, H is non decreasing on [0, ∞) and satisfies the Orlicz condition sup t>0 H(2t)/H(t) < ∞.It is known, see Theorem 11.1.1 in [11] that if for some a ∈ S, S c(x, a)P n ( dx) < ∞ for every n ≥ 1, then lim n→∞ A c (P n , P 0 ) = 0 if and only if for some (and therefore for any) b ∈ S.
Let us denote by A α r the Kantorovich functional obtained by choosing ).We observe that (A α r ) 1/r is the Wasserstein distance W r associated to the space H o α [0, 1].Write P n for the distribution of n −1/2 ξ n and P 0 for the Wiener measure.Then (3.32) can be rewritten as From this point of view, Theorem 1.1 means that the convergence of A α r (P n , P 0 ) to 0 is equivalent to the moment condition (1.10) or (1.11) according to r < p(α) or r ≥ p(α).Similarly, from the Bernstein Kantorovich invariance principle in C[0, 1], the convergence of A 0 r (P n , P 0 ) to 0 is equivalent to E |X 1 | max(r,2) < ∞.As already hinted in the introduction, we see that starting from the classical Donsker-Prokhorov invariance principle in C[0, 1] (A 0 2 (P n , P 0 ) → 0 iff E X 2 1 < ∞) and looking for a stronger convergence in the framework of C[0, 1] (A 0 p (P n , P 0 ) → 0) at the price of a stronger moment assumption , we obtain a similar convergence (A α p (P n , P 0 ) → 0) with a stronger topological path's space.
An application to uniform quantile processes
As a corollary of Theorem 1.1, we look now at the convergence of moments for the uniform quantile process.For the weak-Hölderian convergence of the uniform quantile process we refer to [7].
Let U 1 , . . ., U n be a sample of i.i.d.random variables uniformly distributed on [0, 1].We denote by U n:i the order statistics of the sample: which are distinct with probability one.For notational convenience, put The polygonal uniform quantile process χ pg n is the random polygonal line on [0, 1] which is affine on each [u n:i−1 , u n:i ], i = 1, . . ., n + 1 and satisfies As a corollary of Theorem 10 in [7], for any 0 < α < 1/2, χ pg n converges weakly in H o α [0, 1] to the Brownian bridge B. Theorem 1.1 enables us to complete this convergence by the following convergence of moments.Corollary 3.2.Let χ pg n be the polygonal uniform quantile process defined above.Then for every 0 ≤ α < 1/2, and every r > 0, where B is the Brownian bridge on [0, 1].
Proof.We recall the distributional equality (see e.g.[15]) where S k = X 1 + • • • + X k and the X k 's are i.i.d 1-exponential random variables.Following [7], introduce the polygonal process ζ n which is affine on each interval [u n:i−1 , u n:i ], i = 1, . . ., n + 1 and such that Putting X i = X i − E X i (note that E X 2 1 = 1) and S k = S k − E S k , we consider also the normalized partial sums polygonal process Ξ n built on the S k 's, i.e. the random polygonal line with vertices (k/n, n −1/2 S k ), k = 0, 1, . . ., n.As shown in the proof of Theorem 10 in [7], To obtain (3.39) with any s > r, we just note the following facts.First, by elementary computation, Next, since X 1 has finite moments of every order, E Ξ n+1 2s α converges to E W 2s α by Theorem 1.1.
Weighted scan statistics
In this section we consider several weighted scan type statistics.For α ≥ 0, define and where and the convergence (4.1) follows from Theorem 1.1.
To prove (4.2), we use the representation which is explained in details in [14].Here the functional g is defined by By the Hölderian invariance principle [12], continuous mapping and Slutsky lemma, (4.3) provides the convergence in distribution of g n (n −1/2 ξ n ) to g(W ).Then in view of (4.4), Theorem 1.1 gives (4.2) since g(W ) = T α (B).
where b n is defined by (1.13) and Y p has the Fréchet distribution with exponent p.
Proof.From [9] we know, that Hence, in order to prove convergence of moments we need to check uniform integrability of (b −r n M r n,α ) for each 0 < r < p. Actually it is enough to prove that for each 0 < r < p, And to establish (4.7) it is clearly sufficient to prove that for some positive constant c and some integer n 0 (possibly depending on r), By Lemma 4.3, Markov and Doob inequalities with q > p, It is worth noticing here that for s > 0, recalling that log n denotes the dyadic logarithm: 2 log n = n.Moreover we can always choose q such that q > max(2, p) and 1 + (α − 1/2)q > 0. | 5,292 | 2020-01-01T00:00:00.000 | [
"Mathematics"
] |
Increased Dendritic Branching of and Reduced δ-GABAA Receptor Expression on Parvalbumin-Positive Interneurons Increase Inhibitory Currents and Reduce Synaptic Plasticity at Puberty in Female Mouse CA1 Hippocampus
Parvalbumin positive (PV+) interneurons play a pivotal role in cognition and are known to be regulated developmentally and by ovarian hormones. The onset of puberty represents the end of a period of optimal learning when impairments in synaptic plasticity are observed in the CA1 hippocampus of female mice. Therefore, we tested whether the synaptic inhibitory current generated by PV+ interneurons is increased at puberty and contributes to these deficits in synaptic plasticity. To this end, the spontaneous inhibitory postsynaptic current (sIPSC) was recorded using whole-cell patch-clamp techniques from CA1 pyramidal cells in the hippocampal slice before (PND 28–32) and after the onset of puberty in female mice (~PND 35–44, assessed by vaginal opening). sIPSC frequency and amplitude were significantly increased at puberty, but these measures were reduced by 1 μM DAMGO [1 μM, (D-Ala2, N-MePhe4, Gly-ol)-enkephalin], which silences PV+ activity via μ-opioid receptor targets. At puberty, dendritic branching of PV+ interneurons in GAD67-GFP mice was increased, while expression of the δ subunit of the GABAA receptor (GABAR) on these interneurons decreased. Both frequency and amplitude of sIPSCs were significantly increased in pre-pubertal mice with reduced δ expression, suggesting a possible mechanism. Theta burst induction of long-term potentiation (LTP), an in vitro model of learning, is impaired at puberty but was restored to optimal levels by DAMGO administration, implicating inhibition via PV+ interneurons as one cause. Administration of the neurosteroid/stress steroid THP (30 nM, 3α-OH, 5α-pregnan-20-one) had no effect on sIPSCs. These findings suggest that phasic inhibition generated by PV+ interneurons is increased at puberty when it contributes to impairments in synaptic plasticity. These results may have relevance for the changes in cognitive function reported during early adolescence.
Parvalbumin positive (PV+) interneurons play a pivotal role in cognition and are known to be regulated developmentally and by ovarian hormones. The onset of puberty represents the end of a period of optimal learning when impairments in synaptic plasticity are observed in the CA1 hippocampus of female mice. Therefore, we tested whether the synaptic inhibitory current generated by PV+ interneurons is increased at puberty and contributes to these deficits in synaptic plasticity. To this end, the spontaneous inhibitory postsynaptic current (sIPSC) was recorded using whole-cell patch-clamp techniques from CA1 pyramidal cells in the hippocampal slice before (PND 28-32) and after the onset of puberty in female mice , assessed by vaginal opening). sIPSC frequency and amplitude were significantly increased at puberty, but these measures were reduced by 1 µM DAMGO [1 µM, (D-Ala 2 , N-MePhe 4 , Gly-ol)-enkephalin], which silences PV+ activity via µ-opioid receptor targets. At puberty, dendritic branching of PV+ interneurons in GAD67-GFP mice was increased, while expression of the δ subunit of the GABA A receptor (GABAR) on these interneurons decreased. Both frequency and amplitude of sIPSCs were significantly increased in pre-pubertal mice with reduced δ expression, suggesting a possible mechanism. Theta burst induction of long-term potentiation (LTP), an in vitro model of learning, is impaired at puberty but was restored to optimal levels by DAMGO administration, implicating inhibition via PV+ interneurons as one cause. Administration of the neurosteroid/stress steroid THP (30 nM, 3α-OH, 5α-pregnan-20-one) had no effect on sIPSCs. These findings suggest that phasic inhibition generated by PV+ interneurons is increased at puberty when it contributes
INTRODUCTION
The onset of puberty is a transition phase associated with many changes in behavior including an altered cognitive ability. Decreases in learning potential are reported during adolescence, where puberty onset may represent the end of a critical period for optimal learning for some types of basic tasks (Johnson and Newport, 1989;Subrahmanyam and Greenfield, 1994), an effect which can be more pronounced in females (Hassler, 1991). Synaptic plasticity assessed using long-term potentiation (LTP), an in vitro model of learning, is also impaired at puberty in the female mouse (Shen et al., 2010). These changes in cognition cannot readily be explained by the fluctuations in pubertal hormones at this time (Smith, 2013). However, cognition is impacted by the degree of inhibitory tone in areas that underlie learning, such as the CA1 hippocampus. Synaptic inhibition in the hippocampus is provided by a wide array of GABAergic interneurons which serve to sculpt neuronal circuits. Of these, the parvalbumincontaining (PV+) interneurons, which generate 40 Hz gamma oscillations in addition to providing potent inhibition to the circuit, play an important role in cognition (Buzsáki and Wang, 2012). This article will test whether PV+ interneurongenerated synaptic current increases at puberty and if this increase in inhibitory current contributes to the impairment in synaptic plasticity observed at this time. Our previous findings showed that the increase in tonic inhibitory current generated by extrasynaptic α4βδ GABARs at puberty (Shen et al., 2007) plays a role in impairing synaptic plasticity and spatial learning.
PV expression increases during development in the CA1 hippocampus (Wu et al., 2014) and medial prefrontal cortex (mPFC; Caballero et al., 2014a). However, PV+ interneuron number is not increased (Baker et al., 2017), implicating growth and proliferation of PV+ neurites in mPFC during adolescence. It is not known if the dendrite branching of PV+ interneurons in the CA1 hippocampus is altered by puberty onset.
The activity of PV+ interneurons is affected by ovarian hormones during pregnancy, raising the possibility that ovarian hormonal changes at puberty onset in the female mouse may have a similar effect. PV+ interneurons in both CA1 and CA3 hippocampus express the δ GABA A receptor (GABAR) subunit (Ferando and Mody, 2013), which co-expresses with α1 rather than α4 (Glykys et al., 2007), a more common expression pattern (Wei et al., 2003). δ expression on the PV+ interneurons is decreased during pregnancy (Ferando and Mody, 2013). This study will examine if similar changes in δ expression on PV+ interneurons occur at puberty as one factor which may alter the output from these inhibitory interneurons.
Therefore, this study examined changes in the phasic current generated by PV+ interneurons in the CA1 hippocampus before and after the onset of puberty in female mice. Two potential mechanisms for changes in PV+ interneuron-generated current were also examined during adolescence: dendrite branching of PV+ interneurons and δ GABAR expression on these neurons. sIPSC characteristics were examined in mice with reduced δ expression to determine the role of δ-GABARs in regulating PV+ interneuron activity. The impact of alterations in PV+ interneuron-generated inhibition on the induction of LTP was also tested. Also, we examined the effect of the positive GABA modulator/stress steroid THP (3α-OH, 5α-pregnan-20one; Purdy et al., 1991) on the phasic current. The findings from this study will determine whether the phasic current impairs synaptic plasticity and the response to a stress hormone during adolescence like the tonic current (Shen et al., 2007).
Animal Subjects
Female wild-type C57BL/6 mice (Jackson Labs, Bar Harbor, ME, USA), GAD67-GFP mice, and GABAR δ+/− mice were used. Hemizygous GAD67-GFP mice (Jackson Labs, Bar Harbor, ME, USA, line G42, stock#007677, CB6-Tg (Gad1-EGFP)G42Zjh/J), which selectively express an enhanced green fluorescent protein (EGFP) in the calcium-binding protein parvalbumin (PV)expressing subclass of basket interneurons, were bred on-site to yield mice that had high expression of GFP, as revealed by genotyping results (Transnetyx, Cordova, TN, USA). Only mice with high expression of GFP were used to trace the neuronal processes; lower expressing mice were excluded from the study. δ+/− mice were used rather than δ−/− mice because of the reduced breeding capacity of the homozygous knock-out and were bred on site. Mice were used at pre-pubertal (PND 25-34) and pubertal (∼beginning at puberty onset, ∼PND 35, confirmed by vaginal opening, -PND 44) ages. Mice were housed in a reverse light:dark cycle (12:12) and euthanized in the light part of the cycle 1 h before dark onset. This study was carried out following the principles of the National Institute of Health Office of Laboratory Animal Welfare (OLAW) and recommendations of the SUNY Downstate Institutional Animal Care and Use Committee (IACUC). The protocol was approved by the SUNY Downstate IACUC before the study was initiated.
Morphological Analysis
PV+ interneurons were visually identified in CA1 using confocal microscopy, based on their green fluorescence, which in these transgenic mice is selective for PV+ interneurons. Only mice with high expression of GFP, as revealed by genotyping results, were used for the study. Other reports show that an increased GFP signal in these mice allows for accurate analysis of dendritic branching (Curto et al., 2019). Imaged sections were 100 µm thick, and cells were only analyzed if the soma was located centrally, at roughly 50 µm (±5 µm). On average, 3-5 neurons from both hemispheres of the dorsal hippocampus were included per animal, with 4-6 animals in each group (25 neurons per group). Image stacks (50; 1-2 µm thick) of the neurons that met the criteria were recorded with an Olympus FluoView TM FV1000 confocal inverted microscope with objective UPLSAPO 40× NA:1:35 (Olympus, Tokyo, Japan). Neuron tracing was performed using the semi-automated, ''Simple neurite tracer'' plugin (Longair et al., 2011), for the open-source program, Fiji (Schindelin et al., 2012) an automated program for direct analysis of fluorescent images. Sholl analysis was conducted using the ''3DSholl analysis'' plugin 1 , based on the number of intersections between dendrites and the surface of spheres with a radius increment of 10 µm (Ferreira et al., 2014). Dendritic branching was evaluated for primary, secondary, tertiary, and higher (quaternary and quinary) levels.
Immunohistochemistry (Afroz et al., 2016) The sections were blocked in 0.01 M PBS supplemented with 10% donkey serum, 0.4% Triton and 0.05% sodium azide for 2 h at room temperature. Then, sections were incubated with rabbit anti-GABRD (ab110014, Abcam, 1:200) for 24 h at 4 • C. After washing, sections were incubated with secondary antibody (Alexa fluor 568 donkey anti-rabbit) overnight at 4 • C. For the negative control, the primary antibody was omitted. All slices were mounted on slides with ProLong Gold Antifade Reagent. Images were taken with an Olympus FluoView TM FV1000 confocal inverted microscope with objective UPLSAPO 60× NA:1:35 (Olympus, Tokyo, Japan) after adjusting the laser intensity to minimize background. Images were analyzed for luminosity at 568 λ (reflecting GABAR δ expression) using the region of interest (ROI) program of Fiji (Image J) software (NIH). PV+ interneurons were identified by their GFP signal (488 λ). All images presented are optimized to an equivalent degree.
Whole-Cell Patch-Clamp Electrophysiology (Shen et al., 2007) CA1 hippocampal pyramidal cells were visualized using a Leica differential interference contrast (DIC)-infrared upright microscope. Whole-cell patch-clamp recordings were carried out in voltage-clamp mode at a holding potential of −50 or −60 mV at room temperature (Axopatch 200B amplifier, Axon Instruments, 20 kHz sampling frequency, 2 kHz 4-pole Bessel filter) and pClamp 9.2 software. Patch pipets were fabricated to yield open tip resistances of 2-4 M . Electrode capacitance and series resistance were monitored and compensated; access resistance was monitored throughout the experiment, and cells discarded if the access resistance >10 M . Five millimolar QX-314 was added to the pipet solution to block voltage-gated Na+ channels and GABA B receptor-activated K+ channels (Nathan et al., 1990;Otis et al., 1993). The bath contained 2 mM kynurenic acid and 1 µM strychnine to pharmacologically isolate the GABAergic current.
sIPSC Analysis sIPSCs were detected using a threshold delimited event detection sub-routine in pClamp11.0.3. Only data with a stable baseline were included in the analysis. Synaptic events (200-300) were averaged for each treatment condition and the current amplitude, half-width, and sIPSC frequency assessed. In some cases, histograms were constructed for sIPSC peak amplitude across pubertal groups.
Long-Term Potentiation (LTP; Shen et al., 2010)
The CA1 hippocampal pyramidal cell layer was visualized using an upright microscope. Field excitatory postsynaptic potentials (fEPSPs) were recorded extracellularly from the stratum radiatum of CA1 hippocampus using an aCSF-filled glass micropipette (1-5 m ) in response to stimulation of the Schaffer collateral-commissural pathway using epoxylite insulated platinum-iridium parallel (150 µm distance) bipolar electrodes (57 mm long, 50-100 K impedance, FHC, Bowdoin, ME, USA). The intensity of the stimulation was adjusted to produce 50% of the maximal response. LTP was induced using theta-burst stimulation (TBS, 8-10 trains of four pulses at 100 Hz, delivered at 200 ms intervals, repeated 3× at 30 s intervals) which is a physiological stimulation pattern. fEPSP responses were recorded at 30 s intervals with an Axoprobe-1A amplifier and pClamp 10.1 for 20 min. before and 120 min. after TBS (producing 1-4 mV EPSPs). In some cases, the µ-opioid agonist DAMGO (1 µM) was bath applied before LTP induction to block PV+ interneurons (Gulyás et al., 2010).
Statistics
All data are represented by the mean ± standard error of the mean (SEM). Initially, the Kolmogorov-Smirnov test was used to confirm that all data followed a normal distribution, and then the specific comparisons (see below) were undertaken. For all statistical tests, the level of significance was determined to be P < 0.05.
Synaptic Current
Planned comparisons for changes in GABA-gated current before and after drug treatment for the same cell were analyzed using a paired t-test. sIPSC comparisons from pre-pubertal and pubertal groups were tested using the student's t-test.
LTP
A statistically significant potentiation at 120 min. in the LTP study was determined by averaging the final 20 values for EPSP slope after TBS compared with the average pre-TBS values using the paired t-test. Comparisons of these final 20 slope values between experimental groups were accomplished with a student's t-test.
Dendritic Branching
For each assessment, the means of 3-6 neurons from each mouse were used. For the Scholl analysis, branching assessed at 10 µm intervals was compared in pre-pubertal vs. pubertal groups using the student's t-test. Also, branching of primary, secondary, tertiary, and quaternary dendrites were assessed separately comparing pre-pubertal vs. pubertal groups using the student's t-test.
Immunohistochemistry
Mean values of δ luminosity/mouse were averaged from six ROI's per mouse, and then mean group values averaged for pre-pubertal and pubertal mice. A one-tailed t-test was used to test the hypothesis that δ expression is decreased at puberty.
Inhibitory Synaptic Current Is Increased at Puberty in CA1 Hippocampus
We recorded the pharmacologically isolated GABAergic synaptic current using whole-cell patch-clamp techniques in the hippocampal slice before and after the onset of puberty. The frequency of spontaneous inhibitory postsynaptic currents (sIPSCs) was increased by more than 2-fold at puberty compared to pre-pubertal values (P = 0.0021, Figure 1). In addition, the mean peak amplitude of these currents was also increased by ∼60% at puberty (P < 0.03), unaccompanied by significant changes in the sIPSC half-width. These data suggest that sIPSCs, which reflect compound action potential-dependent and action potential-independent synaptic currents, are increased at puberty in the CA1 hippocampus of the female mouse.
Pharmacological Identification of Interneuron Sub-type
There is a diverse group of interneurons localized to the CA1 hippocampus, which is characterized by unique intracellular markers such as parvalbumin (PV) or cholecystokinin (CCK). We used a pharmacological approach to distinguish between these interneuron populations for their role in generating sIPSCs at puberty. sIPSCs which are generated from PV+ interneurons can be blocked with 1 µM DAMGO, while those generated by CCK-containing interneurons are significantly reduced with 1 µM WINN 55, 212-2 (WINN; Hájos et al., 2000;Glickfeld et al., 2008;Gulyás et al., 2010;Katona et al., 1999). We employed this strategy to determine the origin of sIPSCs which are increased in frequency in the pubertal hippocampus. To this end, the drugs were bath applied in different recordings of sIPSCs and results compared with the pre-drug control period. DAMGO, but not WINN, reduced sIPSC frequency by 25% (P = 0.042, Figure 2) while reducing sIPSC amplitude by 40% (P < 0.00001). These results suggest that PV+ interneurons contribute significantly more GABAergic input during the pubertal period than the corresponding contribution from CCK-containing interneurons.
Expression of the GABAR δ Subunit Is Decreased at Puberty on PV+ Interneurons in CA1 Hippocampus
Recent studies have reported the expression of α1βδ GABARs on PV+ interneurons in both CA1 and CA3 hippocampus (Ferando and Mody, 2013). Expression of these receptors is decreased by ovarian hormones which are increased during pregnancy (Ferando and Mody, 2013). For the present study, we tested whether the ovarian hormonal changes at puberty would also decrease δ expression on PV+ interneurons in the CA1 hippocampus of the female mouse. To this end, immunohistochemical techniques were used to assess the staining of the GABAR δ subunit on PV+ interneurons from GAD67-GFP mice. PV+ interneurons were identified by their selective GFP stain. δ expression was significantly decreased at puberty compared to pre-puberty (Figure 3, 667.1 ± 43.7, pre-pub vs. 536.15 ± 37, pub, P = 0.026) suggesting that tonic inhibition of these interneurons is reduced at puberty. A negative control is presented showing no staining with the secondary antibody when the primary antibody is omitted (Figure 3B).
sIPSC Frequency and Amplitude Are Increased Before Puberty in Mice With Reduced δ Expression
Because the high levels of GABAR δ subunit expression on PV+ interneurons decrease at puberty in association with increases in sIPSC frequency and amplitude, we tested whether reducing δ expression would increase sIPSC frequency and amplitude in pre-pubertal CA1 hippocampus. To this end, sIPSCs were recorded from CA1 hippocampal pyramidal cells from wild-type and δ+/− mice. In the pre-pubertal mouse hippocampus with reduced δ expression, sIPSC frequency and amplitude were increased almost 2-fold compared to wild-type (Figure 4, P < 0.00028, and P < 0.04, respectively). In contrast, sIPSC half-width was not significantly different. Both amplitude and half-width were not significantly different from pubertal values, although frequency was significantly lower (t (19) = 2.47, P = 0.023). These data suggest that reduced δ expression on PV+ interneurons can increase sIPSC frequency and amplitude on CA1 hippocampal pyramidal cells recorded before puberty.
Dendritic Branching of PV+ Interneurons Is Increased at Puberty
One potential mechanism for the increase in sIPSC frequency attributable to PV+ interneurons at puberty would be an increase in inputs from these interneurons. Therefore, we examined whether the dendritic branching of PV+ interneurons is altered during puberty. To this end, confocal microscopy was used to capture images of PV+ interneurons taken from GAD67-GFP mice at the prepubertal and pubertal stages. Scholl analysis was used to quantify branching across the length of the dendrite. Dendritic branching was significantly increased at puberty compared to pre-puberty when assessed 80 (P = 0.0142), 90 (P = 0.014), 100 (P = 0.005), and 110 (P = 0.00159) µm from the cell body ( Figure 5). When different classes of dendrites were evaluated, pubertal branching was significantly greater (Figure 5) for secondary (P = 0.034), tertiary (P = 0.0024), and quaternary/quinary dendrites (P = 0.00227), but not for primary dendrites. Total branches (P = 0.0017) and dendrite length (P = 0.012) were also greater at puberty compared to pre-puberty.
The Neurosteroid THP Does Not Alter sIPSCs at Puberty in CA1 Hippocampus
THP is a metabolite of the ovarian hormone progesterone and is also formed and released in the brain after prolonged stress (Purdy et al., 1991) when it typically enhances inhibition (Bitran et al., 1999;Stell et al., 2003). Its effects on the tonic current generated by extrasynaptic α4βδ GABARs are polaritydependent, where it decreases outward current but increases inward current (Shen et al., 2007). Thus, we tested whether this steroid has effects on the phasic GABAergic current at puberty and further whether polarity-dependent effects could be observed. To this end, the direction of the phasic current was reversed by altering [Cl − ] I so that THP effects could be tested on both outward (E Cl = −70 mV) and inward (E Cl = −30 mV) current. Unlike its effects on the tonic current, THP did not significantly alter any sIPSC parameter for current in either direction (Figure 6).
Silencing PV+ Interneurons Restores Induction of LTP Using Theta-Burst Stimulation at Puberty
Induction of LTP by TBS is impaired at puberty despite robust LTP induction observed pre-pubertally (Shen et al., 2010).
In this study, we tested whether increased phasic inhibition from the PV+ interneurons contributes to the impairment in TBS-induction of LTP at puberty. To this end, TBS of the Schaffer collaterals was used to induce LTP in the CA1 hippocampal slice (Figure 7). As noted previously, this method was unsuccessful in inducing LTP in pubertal slices (Shen et al., 2010). However, bath application of 1 µM DAMGO to silence the PV+ interneurons restored LTP induction (Figure 7, P = 0.0016), resulting in an EPSP slope 150% of control, significantly greater than pubertal values (P = 0.00597). These findings suggest that phasic inhibition from PV+ interneurons, which increases at puberty, plays a role in the deficits in synaptic plasticity observed at this time.
DISCUSSION
The results from the present study demonstrate a significant increase in inhibitory current recorded from CA1 pyramidal neurons at the onset of puberty in female mice. PV+ interneurons contributed significantly to this increase in the phasic current. These interneurons, identified by their selective GFP expression in transgenic mice, displayed greater dendritic branching at puberty and decreased expression of the GABAR δ subunit, reflecting disinhibition of these neurons. The pubertal increase in phasic inhibition via PV+ interneurons impaired synaptic plasticity, assessed by the induction of LTP. DAMGO blocks GABA release from PV+ fast-spiking basket cells (Gulyás et al., 2010) via the µ-opioid receptors which express on these interneurons (Svoboda et al., 1999). Although the original article showing this effect (Gulyás et al., 2010) used carbachol-activated PV+ interneurons, the reduction in tonic inhibition of the PV+ interneurons due to the decrease in α1βδ GABARs would similarly activate these interneurons. However, the effect of DAMGO is not entirely specific, as this drug has a small inhibitory effect on regular spiking basket cells (Glickfeld et al., 2008) and can also inhibit ivy and neurogliaform cells (Krook-Magnuson et al., 2011;Harris et al., 2018). All of these interneurons are slow firing and exert relatively slower effects than the PV+ interneurons (Krook-Magnuson et al., 2011;Overstreet-Wadiche and McBain, 2015). Therefore, the majority of the higher frequency IPSCs which are blocked by DAMGO would be expected to originate in the PV+ interneurons. DAMGO reduced by about half the increased IPSC frequency seen at puberty but reduced more completely the increased IPSC amplitude seen at puberty. These findings suggest that other interneurons may also contribute to the pubertal increase in IPSC frequency, while the larger amplitude IPSCs are likely generated by the PV+ interneurons.
The increase in dendritic branching of PV+ interneurons at puberty could result in increased inhibitory synaptic input to the pyramidal cells and may underlie the observed increase in phasic current at puberty, although other possibilities such as presynaptic effects and altered PV+ activity, as discussed below, may play a role. Increases in PV expression have been reported in the hippocampus across development (Wu et al., 2014) unaccompanied by increases in interneuron numbers (Honeycutt et al., 2016), although puberty has not been investigated until the present study. Our results showing an increase in dendritic branching at puberty may underlie these findings. Similar findings in mPFC suggest that increased PV expression (Caballero et al., 2014b;Caballero and Tseng, 2016) unaccompanied by changes in interneuron density (Baker et al., 2017) are correlated with increases in synaptic current (Caballero et al., 2014b) and may also reflect an increase in dendritic branching. Recent studies suggest that GABAergic input to hippocampal interneurons is depolarizing, but becomes a shunting inhibition at high conductances (Song et al., 2011). α1βδ GABARs express close to the soma of PV+ interneurons (Glykys et al., 2007) where GABAR inhibition is shunting (Elgueta and Bartos, 2019) unlike the hyperpolarizing inhibition on the distal dendrites which may have more of a role in reducing sub-threshold events. The high circulating levels of THP before puberty (Mannan and O'Shaughnessy, 1988) would amplify inhibition generated by these α1βδ GABARs, where up to ∼20-fold increases have been reported (Zheleznova et al., 2008), and would ensure that α1βδ GABARs generate a tonic inhibition to be effective at silencing PV+ interneurons. At puberty, α1βδ GABAR expression on these interneurons decreases as do hippocampal levels of THP (∼60%; Shen et al., 2007). We can extrapolate from our previous findings (Shen et al., 2007) to estimate that the 20% decrease in δ expression at puberty in the present study would decrease charge transfer by 2.3% given the low open probability of α1βδ GABARs in the absence of neurosteroid (Bianchi et al., 2002;Zheleznova et al., 2008). However, because neurosteroid levels decrease at puberty from high pre-pubertal levels which would amplify tonic inhibition before puberty, the decrease in charge transfer could be as much as 78%. These events would disinhibit interneuron firing, resulting in increased phasic inhibition of the pyramidal cells.
Our current findings support a role for α1βδ GABARs on PV+ interneurons in regulating interneuron activity because sIPSC frequency and amplitude were increased to pubertal levels in pre-pubertal mice with reduced δ expression. This finding confirms the functional significance of the inhibition generated by the α1βδ GABARs which express on the PV+ interneurons. Although δ GABARs can contribute to the phasic current of the principal cells (Sun et al., 2018), this may be a minimal effect compared to their effect on interneuron activity. Taken together, these data provide additional evidence that decreases in δ expression at puberty disinhibit PV+ interneurons and likely contribute to the pubertal increase in sIPSC frequency and amplitude.
The decrease in δ expression on, and thus decrease in tonic inhibition of, PV+ interneurons at puberty may be a mechanism for the profuse dendritic branching observed at this time, as dendritic growth is dependent upon NMDA receptor-induced activity (Sepulveda et al., 2010). Our previous findings show that in the CA3 hippocampus, knock-out of α4βδ GABARs increases the dendritic branching of CA3 pyramidal cells (Parato et al., 2019), suggesting a role for these receptors in limiting dendritic arborization. Other factors that may contribute include the increase in 17β-estradiol (E2) around the time of puberty (Ahima et al., 1997), as E2 has been shown to increase dendritic branching of hippocampal neurons (Audesirk et al., 2003). There are multiple potential mechanisms for this effect, including E2's effect to increase levels of brain-derived neurotrophic factor (BDNF; Scharfman et al., 2007) which on its own can directly increase dendritic growth and arborization in many CNS areas (Horch and Katz, 2002), including the hippocampus (Cheung et al., 2007), an effect mediated by cyclin-dependent kinase 5 (Cdk5). The effects of BDNF are rapid, with significant branching seen after 48 h (Sanchez et al., 2006). E2 has also been shown to increase dendritic branching in the hippocampus via nitric oxide (Audesirk et al., 2003).
Increased branching of PV+ interneurons in the CA1 hippocampus at puberty would enhance synaptic contacts with the principal cells and effectively increase inhibitory tone. In the present study, we show that this increase in inhibition at puberty is one factor leading to impairment of synaptic plasticity at puberty, when induction of LTP using TBS is mostly unsuccessful (Shen et al., 2010). Silencing PV+ interneurons with DAMGO restored TBS-induced LTP, suggesting that the increase in phasic inhibition mediated by these interneurons plays a role in impairing synaptic plasticity at puberty. Recent studies (Camiré et al., 2018;Topolnik and Camiré, 2019) suggest that TBS can also impact the plasticity of PV+ interneurons. In young (PND 13-21) mice, sub-threshold TBS induced LTP and supra-threshold TBS induced LTD in PV+ interneurons, which reverted to LTP after blockade of supralinear Ca++ signals. Because these mice have very low expression of δ-GABARs (Laurie et al., 1992) it is possible that at puberty when δ expression is increased, TBS may increase synaptic strength of PV+ interneurons as one additional factor to impair synaptic plasticity.
In addition, based on previous reports (Ferando and Mody, 2015) showing that reduced α1βδ expression increases the frequency of gamma oscillations, the pubertal decrease in α1βδ expression would be expected to result in a similar change in gamma frequency, which may also be detrimental to cognition, in vivo (Buzsáki and Watson, 2012).
Our previous study showed that the tonic inhibition generated by α4βδ GABARs, which emerge on the dendritic spines of CA1 pyramidal cells at puberty (Shen et al., 2010) from nearly undetectable levels before puberty (Shen et al., 2007), also play a critical role in the impairment of TBS-induced LTP at puberty. Knock-out of these receptors restores LTP induction, suggesting that increases in both tonic and phasic inhibition at puberty contribute to the deficits in synaptic plasticity at this time. The increase in interneuron activity at puberty could also enhance tonic inhibition via GABA spillover. Other studies have shown impairments in spike-timing-dependent plasticity at puberty that was due to increased GABAergic inhibition (Meredith et al., 2003). Knock-out of α4βδ GABARs also restores optimal levels of spatial learning at puberty (Shen et al., 2010(Shen et al., , 2017, when this type of learning is impaired compared to pre-puberty. Independent of puberty, the inhibitory current generated by α5-GABARs hinders synaptic plasticity in the CA1 hippocampus, by raising the threshold for induction of LTP and impairing certain types of hippocampal-dependent memory in vivo (Martin et al., 2010). In addition to their extrasynaptic localization, recent studies have shown that α5-GABARs also express sub-synaptically where they are the target of somatostatin-containing interneurons on pyramidal cells (Schulz et al., 2018) as well as on somatostatin-containing interneurons, where they disinhibit the circuit (Magnin et al., 2019). In the former case, these outwardly rectifying receptors efficiently reduce NMDAR-generated potentials. Although, they may play a role in synaptic plasticity under some conditions, blockade of α5-GABARs at puberty does not restore LTP induction using TBS (Shen et al., 2017).
In human studies, the onset of puberty is known as a critical time-point when certain types of learning are sub-optimal compared to the rapid learning observed before puberty (Gur et al., 2012). Post-pubertal impairments are reported for learning a second language, music training and certain visuospatial tasks (Pepin and Dorval, 1986;Johnson and Newport, 1989;Subrahmanyam and Greenfield, 1994;Shavalier, 2004;Bailey and Penhune, 2012), which are more pronounced in individuals with learning disabilities (Wright and Zecker, 2004) and also exhibit gender differences (Gur et al., 2012). Increased GABAergic tone, mediated by both phasic and tonic components, may underlie some of these developmentally-related learning deficits.
The expression of α1βδ GABARs is regulated by ovarian hormones. Recent reports have shown that δ expression decreases during late pregnancy (Ferando and Mody, 2013) when circulating levels of 17β-estradiol (E2) and THP are high and starting to decline, respectively. δ expression returns to pre-pregnancy levels post-partum (Ferando and Mody, 2013) suggesting a tight coupling between ovarian hormones and δ expression. Ovarian hormonal changes at puberty have a similar pattern, where circulating E2 levels increase 2-fold 5 days before the onset of puberty (Ahima et al., 1997) and THP levels rise 6-8-fold 2 weeks before the onset of puberty (Mannan and O'Shaughnessy, 1988;Fadalti et al., 1999), before declining at the time of vaginal opening (Shen et al., 2007), the physical manifestation of puberty onset. These hormonal changes likely trigger the decrease in α1βδ GABAR expression on PV+ interneurons.
In the present study, a physiological concentration of the neurosteroid THP (30 nM; Smith et al., 2006) had no significant effect on the phasic current, assessed for both inward and outward current, which is consistent with other studies showing no effect of a similar neurosteroid, THDOC (3α, 21dihydroxy-5α-pregnan-20-one), on the phasic current below a concentration of 100 nM (Stell et al., 2003). This is in contrast to our previous findings showing a robust effect of this steroid on the tonic current generated by pubertal α4βδ GABARs. δcontaining GABARs is a sensitive target for the neurosteroids THP and THDOC (Belelli et al., 1996;Brown et al., 2002;Bianchi and Macdonald, 2003;Stell et al., 2003). However, the effect of THP at these receptors is polarity-dependent, such that 30 nM THP increases inward current (outward Cl-flux) and decreases outward current (inward Cl-flux; Shen et al., 2007;Gong and Smith, 2014) independent of Goldman rectification. This effect of the steroid was prevented by mutation of the positively charged residue arginine 353 to neutral glutamine in the intracellular transmembrane (TM)3-TM4 loop (Shen et al., 2007), suggesting this may be a re-entrant loop as has been reported for other GABARs (O'Toole and Jenkins, 2011). Neurosteroids enhance δ GABAR desensitization, which is greater for outward current (Bianchi et al., 2002) and may explain their polarity-dependent modulation of current generated by this receptor. Polaritydependent effects are also reported for certain anesthetics (etomidate, propofol, and isoflurane) and benzodiazepines, at α1β2γ2 GABARs which have greater effects on inward current (Mellor and Randall, 1998;O'Toole and Jenkins, 2012).
In dentate gyrus granule cells where GABAergic current is depolarizing but a shunting inhibition (Staley and Mody, 1992), THDOC enhances this inhibition (Stell et al., 2003). At puberty, the GABAergic current of CA1 hippocampal pyramidal cells is outward (Shen et al., 2007). THP reduces this inhibitory tonic current at this time, increasing neuronal excitability, and increasing anxiety-like behavior (Shen et al., 2007) in contrast to its typical anxiety-reducing effect (Bitran et al., 1999).
The results from the present study suggest that synaptic GABAergic inhibition is increased at puberty in the CA1 hippocampus, both as a result of an increase in dendritic branching as well as disinhibition of the PV+ interneurons. Because tonic inhibition is also increased at this time (Shen et al., 2007), these findings suggest that the onset of puberty would represent a time of dampened hippocampal circuit activity. In humans, slower reaction times are reported for the match-tosample task (McGivern et al., 2002) among others (Feenstra et al., 2012) in early adolescence compared with late adolescence and adulthood, in addition to sub-optimal learning. The findings from the present study may, at least in part, underlie the relative slowing down of cognitive function during early adolescence.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
The animal study was reviewed and approved by SUNY Downstate Institutional Animal Care and Use Committee.
AUTHOR CONTRIBUTIONS
HS conducted the electrophysiology experiments. LK conducted the immunohistochemical and Scholl analysis studies, analyzed the data, and contributed to the manuscript. SS designed the experiments, analyzed the electrophysiology experiments, made the figures, and wrote the article.
FUNDING
This work was supported by National Institute of Health/National Institute of Mental Health (NIH/NIMH) grants R01-MH100561 and R01-MH115900 to SS. Both of these grants were used to pay salaries, purchase supplies and animals as well as publication costs. | 7,994 | 2020-07-09T00:00:00.000 | [
"Biology"
] |
Experimental setup for probing electron-induced chemistry in liquid micro-jets
We present an experimental setup for probing chemical changes in liquids induced by electron collisions. The setup utilizes a custom-designed electron gun that irradiates a liquid microjet with an electron beam of tunable energy. Products of the electron-induced reactions are analyzed ex-situ. The microjet system enables re-circulation of the liquid and thus multiple irradiation of the same sample. As a proof-of-principle experiment, an aqueous solution of TRIS (2-Amino-2-(hydroxymethyl)propane-1,3-diol) was irradiated by 300 eV electron beam. Optical UV–VIS analysis shows that the electron impact on the liquid surface leads to the production of OH radicals in the solution which are efficiently scavenged by TRIS.
Introduction
In the collision community, the term electron-induced chemistry is typically understood as changes in the structure of molecules upon their interaction with free (ballistic) electrons [1].A number of elementary processes can trigger such chemical changes: dissociative ionization, dissociative electron attachment, or neutral dissociation (which starts with a vertical electronic excitation).The cross sections for these initial reactions strongly depend on the energy of incident electrons: the dissociative electron attachment is a resonant process typically operative in the energy range of up to 10 eV, while the ionization and excitation peak around 100 eV electron energy and then slowly drop down to keV region (where other processes such as core-excitation open up).The outcome of chemical reactions is thus sensitive to the energy of ballistic electrons.There are in principle two experimental approaches to probe such electron-induced chemistry: (i) those where the reactions are induced by an electron beam with well-defined energy and (ii) those where a broad distribution of electrons is present, e.g.plasma or high-energy irradiation experiments (where electrons are produced as secondary electrons).
Probing interactions with a well-defined electron beam is straightforward in experiments that probe samples in the form of molecular beams in a vacuum [1,2].Products of such electron-molecule (or electron-cluster) reactions are typically analyzed by mass spectrometry or by various imaging methods.A lot of attention has also been dedicated to probing the electron-induced chemistry in the solid-state samples which are typically cryogenically frozen on a solid substrate [3].The reaction products in a solid state are either monitored in situ, by various surface-science techniques, or the samples are removed from the vacuum and analyzed by analytical methods [4].
A common denominator in all the above-mentioned experimental methods is that the samples are compatible with high vacuum and thus can be irradiated by a controlled electron beam.This is generally not the case with samples that are volatile liquids, e.g.aqueous solutions.A liquid's vapor pressure usually destroys the vacuum.Probing electroninduced chemistry in liquids has thus been limited to experiments where the electrons are created directly in the sample as secondary particles, e.g.various pulsed-radiolysis methods.These can be performed in a pump-probe manner and thus provide information about the ultrafast dynamics, however, control over the kinetic energy of electrons, inducing chemical reactions, is usually lacking in such experiments.
The liquid microjet technique has overcome the difficulty of introducing volatile liquids into the vacuum.Originally, it has been developed to study the evaporation processes of liquids in vacuum [5][6][7].Nowadays, it is used in many fields of research concerning liquid-phase phenomena, namely any kind of x-ray absorption [8] and photo-electron spectroscopy [9,10] microjets have also been used as targets for molecular collisions [11,12] and collisions with fast ions [13].An extension of the microjet technique, flat jet [8,14], has been used in, e.g.high-harmonic generation [15], in imaging diffusion controlled chemical luminescence reactions [16], or in molecular beam scattering experiments [17].
In this paper, we present a new experimental setup in which an electron beam interacts with a liquid microjet.To our knowledge, the only previous setup combining electrons and microjets is that of Tian and co-workers [18,19] which monitors ions created at the interface by a time-of-flight technique.The prospects of extracting collision cross sections from electronmicrojet scattering have been explored by Muccignat et al [20] We are interested in chemical changes which are induced by electron impact in the liquid itself.We have developed a setup that enables recirculating of the liquid samples for multiple irradiations by electrons.The samples are then analyzed offline by standard chemical analysis techniques.We also present the first results on the electron-induced transformation of an aqueous solution of tris(hydroxymethyl)aminomethane (TRIS).
Effect of the gas-phase molecules evaporated from the microjet
The first issue which should be considered when judging the prospects of electron beam interactions with a liquid jet surface is the transmission of electrons through the cloud of evaporated molecules surrounding the jet.The scattering of electrons on the gas phase molecules could in principle lead to a strong attenuation of the incident beam.
To estimate this effect, we consider a microjet of pure H 2 O with a radius R jet .The density of the gas-phase molecules depends on the distance r from the jet center.Due to cylindrical symmetry this dependence can be approximated as n(r) = n 0 R jet r where n 0 is the gas number density at the surface.The electron with energy ε approaching the jet from a distance R gun (e.g. the distance between the jet and the exit hole of a differentially pumped electron gun) will statistically undergo N col collisions with gas-phase H 2 O molecules: Here, σ(ε) is the cross section for electron-H 2 O collisions.We approximate it by the total scattering cross-section recommended by Itikawa and Mason [21], other recommended cross sections [22,23] yield very similar results.Figure 1 shows the N col (ε) as obtained from equation (1) assuming n 0 corresponding to the saturated vapor pressure of 6.1 mbar [24] and R gun = 4 cm, for two different jet radii.The total e-H 2 O scattering is a strongly varying function of the electron energy and so is the resulting number of collisions.The electron will undergo on average less than one gas-phase collision for energies larger than approximately 100 eV (for the jet of 30 µm diameter) or 300 eV (for the jet of 50 µm diameter).
It should be noted that there is an inherent uncertainty in this simple model.It mainly concerns the value of n 0 , the gas density at the water interface.The saturated vapor pressure is a sensitive function of water temperature.Due to evaporative cooling, the temperature of the jet decreases with the increasing downstream distance from the nozzle.The value of 6.1 mbar corresponds to water at 272 K and is the same as was used in similar numerical estimates by Faubel and coworkers [9,24].At 292 K, the vapor pressure of water is 23.4 mbar.Using n 0 corresponding to such pressure would increase N col by a factor of 3.6 compared to that plotted in figure 1.The saturated vapor pressure approximation might be, however, an inappropriate representation of n 0 at a liquid microjet surface: the apparent surface temperature determined from measurements of the velocity distribution of evaporated molecules is much lower than the freezing point of waterbelow 240 K [24].It is likely that the evaporation from the jet is probably far from equilibrium [7].The lower apparent temperature means lower n 0 and it would correspondingly lower the N col .In spite of the uncertainty in the n 0 determination, the dominant factor for N col is a strong dependence of σ(ε) on the electron energy ε.
This fact guided the design of the present setup.Even though the collisions of low-energy electrons (e.g., below 10 eV) would be attractive since they could enable the direct formation of electronic resonances or a selective population of optically-forbidden electronic states in liquids, such slow electrons would not get to the surface through the vapor layer (at least in case of water and aqueous solutions).We will thus use incident energies higher than 100 eV.Even though such electrons can lead to a number of secondary processes in water and we thus need to compromise the aspect of chemical control by fine-tuning the incident electron energy, a high incident energy is necessary to reach the liquid surface.
Overview
The setup consists of three vacuum chambers: (i) interaction chamber, (ii) cold trap chamber, and (iii) electron gun chamber.The overall scheme is shown in figure 2. A photograph of the interaction chamber is shown in figure 3.
In detail: the interaction chamber is a cubical vacuum chamber (203 mm side length) with six CF160 ports.It is located at the center of the setup.A turbo molecular pump with a pumping speed of 685 l/s for N 2 (Pfeiffer HiPace 700) is mounted from the top.It is backed by a scroll pump (Leybold Scrollvac 15 plus).
The electron gun chamber is a cross (279 mm in length) with two CF160 ports and two CF40 ports.The chamber is differentially pumped by a small turbo molecular pump with a pumping speed of 67 l/s for N 2 (Pfeiffer HiPace 80), mounted from the top and backed by a dry multi-stage root pump (Adixen ACP 15).
Another cross chamber (273 mm in length), with two CF160 ports and two CF100 ports, is connected to the interaction chamber.Two liquid nitrogen (LN 2 ) traps were mounted on this chamber to freeze the residual liquid vapor inside the interaction chamber to obtain a better vacuum.It turned out that the operation of only one LN 2 trap is sufficient, using the second one did not improve the background pressure during the jet operation.
Electron gun and electronics
We designed and built a non-monochromated electron gun based on an earlier design by Milosavljević et al [25], with a number of modifications that will be described in this section.The scheme of the electron gun, together with the electron trajectories simulated in SIMION8.1, is shown in figure 4. The cylindrical electrodes in the gun are made from 316 stainless steel and the insulating parts are made from machineable ceramics.The electrode spacings and alignment are done with rubidium balls of 1.5 mm and 2 mm diameters.The main difference from the original design is that we used an ES-526 Yttria-coated Iridium disc from Kimball Physics, specially optimized for increased electron emission in harsh vacuum conditions.Other modifications include 2-axis flat deflector electrodes and the addition of a 2nd focusing electrode S1 before deflectors, for triple focusing.Moreover, we reduced the overall dimensions and changed the aspect ratios of all electrodes, and implemented a two-stage vacuum section with electrical feed-throughs to accommodate the differential pumping requirements.The latter is for keeping an electron source in the best possible vacuum conditions, in order to ensure a long lifetime.
We have also designed and built a microcontroller-based high-voltage electronics power supply to fully operate the electron gun remotely via a serial RS232 port communication with a PC.For data acquisition and electron gun control, we used a PC software called ELS (electron spectrometer processor), originally written by Allan [26].We upgraded ELS with new modules and serial routines to facilitate the automatic tuning of all electrodes, scanning of electron energy and deflector positions, as well as the recording of the electron beam current in the Faraday Cup 1 and 2 using a Keithley 6485 picoampermeter.The gun's effective operating energy range is (100-700) eV.It should be noted that the electron gun is not shielded electrostatically or magnetically since it was designed to operate at energies above 100 eV, where we do not expect any major issues due to small constant residual electric or magnetic fields in the interaction chamber.
As recommended by Kimball Physics the filament power supply was operated in a constant voltage mode, while we kept the filament current relatively low around 3.7 A to ensure lower initial electron energy spread caused by thermal emission.By using a separate setup we measured the elastically scattered electrons from a water micro-jet and perform the energy calibration of the electron gun and measure the energy resolution.The measured energy resolution at full width at half maximum (FWHM) of the elastic peak is about 400 meV at 600 eV incident electron energy.The absolute energy scale uncertainty is of ±0.5 eV in the full operating range.At 600 eV electron energy, the estimated focused beam spot size from simulation in SIMION at a focal distance of Faraday Cup 1, or 30 mm from the last electrode M (ground), is about 1 mm.This value is in good agreement with an experimentally obtained electron beam profile measured by the perpendicular movement of a Faraday Cup 1 across the electron beam.It should be noted that the distance of the liquid-jet nozzle is 17 mm from the electrode M and the optimal voltage for a focusing electrode V is selected in the optimization procedure as described in section 3.5.
Recirculating liquid microjet
The main component of the setup is the high-vacuum compatible recirculating liquid micro-jet system.Its schematic diagram is shown in figure 5.The liquid micro-jet is produced by passing the desired liquid sample at high pressure through a nozzle kept inside the vacuum chamber.The liquid jet is exposed to vacuum (background pressure in the range of 10 −5 mbar with the jet) on a 3-5 mm length (adjustable by an XYZ-positioning stage described below), captured by a catcher unit and ultimately collected in a glass sample collector bottle kept outside of the vacuum chamber.It is possible either to reuse the collected sample for repeating the irradiation or to perform an ex-situ analysis of the sample.During these experiments, nozzles with either 30 or 50 µm inner diameter were used.The nozzles were prepared in-house by cutting fused silica capillaries with the chosen inner diameter and 150-micron outer diameter into about 5-7 mm length and then glueing with epoxy to a metallic pipe (either stainless steel or titanium) with a 1/16 inch outer and 200-micron inner diameters.
In the current configuration, it is possible to switch between up to four different solvent samples during the experiment with the help of a solvent selector.The liquid samples are kept at atmospheric pressure either at room temperature or in a cold bath depending on the stability of the sample.The liquid samples are passed through a vacuum degassing unit (DeltaChrom, model VD 060) to remove any solvated gas from the liquid and then connected to a four-channel sample selector unit.The output of the sample selector is connected with a high-performance liquid chromatography (HPLC) pump (Watrex P102).The outlet of the HPLC pump is connected with the nozzle using 1/16 inch outer diameter PEEK tubing.The nozzle is mounted on a vacuum-compatible triaxial positioning stage (Mechonics MS15 xyz-mounted, vacuum edition, denoted as 'Small XYZ manipulator' in figures 2 and 3) inside the vacuum chamber.
To collect the liquid jet after the electron beam irradiation, a copper catcher with a 1/4 inch outer diameter, around 20 mm length, and a conical-shaped top part with a 500-micron opening is placed 3-6 mm downstream from the nozzle.During the experiments, the catcher is heated between 55 • and 75 • C to prevent the freezing of the liquid.The copper catcher is connected with a 1/4 inch PEEK tube which ultimately leads outside of the vacuum chamber.The PEEK tube is connected to The Nozzle, Catcher unit, Faraday Cup units 1 and 2, along with all electrical feedthroughs are mounted on a linear translation X,Y-manipulator (UHV Design Ltd XY aligner with +/-10 mm of X and Y motion, denoted as 'Big XY manipulator' in figure 2) using a custom-made CF63 port so that the entire liquid jet assembly can be moved with respect to the incident electron beam.The liquid jet is produced by flowing the liquid sample with 0.75 ml min −1 to 1.2 ml min −1 flow rate depending on the property of the liquid.A pressure of around 65-110 bar is required in the nozzle tubing depending on the flow rate and the liquid.It was observed that it is important to maintain a minimum flow rate and a minimum pressure in the nozzle tubing to have a working jet and efficient collection system.If the flow rate is too low then the liquid starts flowing from the catcher unit back to the interaction region where it ultimately freezes.For the present configuration, it was observed that with pure water at least a flow rate of 0.65 ml min −1 is required to prevent the backflow of the liquid.Keeping the liquid jet and collection unit running is intricate and depends on the physical properties of the liquid in interest.The liquid jet and the recirculating system can be started on and stopped in a high vacuum (the background pressure before turning on the jet is in the 10 −8 mbar range) with a 1-propanol sample.However, it is not possible to start the liquid jet with water in a high vacuum.During the present experiments, the liquid jet was always started in a high vacuum with 1-propanol and later switched to the sample of interest.Before stopping the jet, it was always switched to 1-propanol.On the collection unit side, it is possible to switch between two different collection bottles without breaking the continuity of the experiment.In this way, it is possible to reuse the collected sample or to perform an ex-situ analysis of the collected sample while the experiment is still running.It is also possible to switch between different samples or different experimental conditions and to collect the different sample products separately without having to stop the experiments in between.
Sample analysis
For the current experiments, an ex-situ UV-VIS spectrophotometer from Shimadzu (Model UV-1800) was used.The spectrophotometer is capable of measuring photo-absorption in the wavelength range from 190 nm to 1100 nm.
Optimization of the setup
To optimize the setup we monitored the scattered electrons from the liquid jet using a molybdenum plate, hereafter denoted as Faraday Cup 2. It is placed inside the vacuum chamber in a mutually perpendicular direction with respect to both the liquid jet and the electron beam direction.The Faraday Cup 2 was kept at 27 V floating potential using batteries to measure the current of the scattered electrons using a Keithley 6485 picoampermeter.The incident electron beam is scanned using the deflectors along the horizontal and vertical directions and the scattered electron current is measured on the Faraday cup 2. The 2D map of the scattered electron current, measured in Faraday Cup 2, as a function of the deflection voltages is shown in figure 6.The voltage on the focusing electrode V was adjusted to obtain the sharpest 2D map image.
The primary electron current measured in the Faraday Cup 1 can be set from a few pA to hundreds of nA by adjusting the filament current.During the present experiments, it was kept at around 10 nA in order to minimize electron beam instability and other issues due to charging and pressure variation with the liquid jet being started/stopped in a vacuum.
Results: electron-induced chemistry in aqueous solutions of TRIS
In the pilot experiments, we probed the electron irradiation of the aqueous solution of TRIS, (HOCH 2 ) 3 CNH 2 , 2-Amino-2-(hydroxymethyl)propane-1,3-diol (the structure is shown in the inset of figure 7).TRIS is one of the most common buffers in the biology/biochemistry laboratory practice (buffer solutions are used as a means of keeping pH at a nearly constant value, and the buffer range for TRIS, pH 7-9, coincides with the physiological pH typical of most living organisms).Additionally, TRIS is an effective OH radical scavenger [27,28] and it efficiently protects DNA from high linear-energy transfer radiation [27].It is thus used, for example, as a stabilizer in plasmid-DNA experiments.
Recently, Roush et al [29] suggested that TRIS can be used as an intrinsic hydroxyl radical dosimeter.They induced oxidation by the exposure of the aqueous solutions of TRIS with hydrogen peroxide to a pulsed KrF excimer laser and measured the ultraviolet absorbance of the irradiated samples.The laser irradiation caused a substantial absorbance increase in the wavelength region from 250 to 310 nm where a new band appeared with a maximum of around 265 nm (figure S1 in [29]).The band was attributed to a rather complex chemical reaction of TRIS with OH radicals produced from the H 2 O 2 photolysis (figure S3 in [29]).This work inspired us to probe the transformation of TRIS solutions with electron impact.
The TRIS sample, solid at room temperature, was commercially purchased from Sigma-Aldrich with a stated purity of 99.9%.The current experiments were performed with a 19.7 mM solution of TRIS in water.The solution is then passed through the liquid microjet into the reaction chamber where it was irradiated with an electron beam of 300 eV incident energy.
Figure 7 shows the UV-VIS spectra.The 'Control (Outside, no irradiation)' is the sample of the solution which was left on the table.'Control (1 pass, no irradiation)' is the sample that was passed through the liquid microjet setup with the filament on, however, with the electron beam blocked by one of the electron gun lenses (zero electron beam current obtained on the Faraday Cup).This sample probes a possible effect of photon-induced chemistry due to light from the filament, thermal chemistry, or any other effects caused by running the solution through the microjet system.Finally, samples '1 pass', '3 passes' and '4 passes' denote how many time the samples were passed through the microjet system while being irradiated by a 300 eV electron beam.The average electron beam current measured on the Faraday cup was about 5 nA.
The main absorbance peak of the pure TRIS aqueous solution is at 200 nm [29].Upon electron irradiation, a weak but clearly distinguishable new band appears in figure 7 with a maximum of around 270 nm.This is in perfect agreement with the findings of Roush et al [29] who attributed this band to the reaction of TRIS with OH radicals.Therefore, we conclude that electrons are able to reach the liquid surface.Upon their interaction with the water solvent, OH radicals are formed and then scavenged by TRIS molecules.
Clearly, the intensity of the 270 nm band does not grow linearly with the number of passes through the jet.Several effects can influence the final concentration of the TRIS+OH product.First, the UV-VIS analysis is performed offline, where all the samples are first collected and then sent for analysis (the spectrometer is located in a different laboratory than the microjet setup).Second, the concentration of TRIS in the solution itself is not constant: upon passing through a vacuum, the jet evaporates.The vast majority of the evaporated molecules are water molecules and the concentration of TRIS thus increases with the number of passes through the vacuum.We estimated this increase from the value of absorbance at 210 nm (the absorbance was first calibrated with test solutions of increasing TRIS concentration which were analyzed on the same UV-VIS spectrometer).While the initial concentration of TRIS solution was 19.7 mM, after 1 pass it was 20.4 mM, while after 4 passes it reached 24.7 mM.Further effects related to the concentration of the irradiation products can be related to e.g.incomplete mixing of the solution when reintroducing the collected irradiated sample to the reservoir to complete the desired number of passes.Also in each pass, only the interfacial layer is irradiated.
The data shown in figure 7 are from one measurement run.We repeated this experiment three times at slightly different conditions (various TRIS concentrations and electron currents) with the same results (the band at 270 nm was growing, however, this growth did not show a linear dependence on the number of passes).
Conclusions
In conclusion, we present here a setup for analyzing macroscopic reactions induced in liquid samples by the impact of an electron beam with a well-defined energy.The setup combines P Nag et al the liquid microjet nozzle with a catcher system which enables multiple irradiation passes of the same sample.The irradiation is performed using a custom-designed high-energy electron gun.The proof-of-principle experiment with the aqueous solutions of TRIS unambiguously shows that 300 eV electrons interact with the liquid surface and induce chemical changes that are macroscopically detectable by an offline UV-VIS analysis.
The setup can be in principle viewed as an extension of the increasingly used continuous-flow technology in organic synthesis [30,31].This extension can open up new avenues especially in electron-induced chemistry and electron catalysis.For example, it can be used to probe reactions which have a known outcome when they are induced by a photon absorption.For this purpose, the reaction chamber will be further equipped with light sources to enable comparison between photo-induced transformation versus electron impact induced ones.A specific example of such elementary reaction can be a cis-trans isomerization [32] or analogues of photo-redox reactions [33].Electron impact can directly excite long-lived excited states which are optically spin-forbidden.The setup thus enables to explore the concept of triplet-state initiated chemistry, which can essentially be applied to any photochemical reaction involving triplet states.
Nevertheless, we must mention about a few limitations of the setup in current configuration.It is not possible to do experiments with liquids having high viscosity as it results too much pressure (more than 250 bar) behind the nozzle to achieve workable flow rate.Also it is difficult to recirculate liquids having significantly higher vapor pressure, for example pure methanol.We are trying to gain more experience and improve the setup for future experiments with such liquids.
Figure 1 .
Figure 1.Average number of collisions N col which the electron statistically undergoes on the way to the microjet according to equation (1) for Rgun = 4 cm and n 0 corresponding to local pressure of 6.1 mbar.
Figure 2 .
Figure 2. 3D model of the reactivity experimental setup.
Figure 3 .
Figure 3. Photograph of the interaction chamber.A few electrodes from the low vacuum section of the electron gun are visible on the right side.The Faraday Cup 1 is used to measure the current of the primary electron beam, while the plate behind the Nozzle labeled as Faraday Cup 2 is used to measure the current of electrons that are scattered from the liquid jet.Graphite paint is used to cover the quartz glass Nozzle in order to minimize charging.
Figure 5 .
Figure 5. Schematic diagram of the recirculating liquid micro-jet production unit.
Figure 6 .
Figure 6.The electron scattering map as a function of the vertical and horizontal deflection of the incident electron beam, obtained in the Faraday Cup 2 at 600 eV energy for 1-propanol in the liquid microjet.
Figure 7 .
Figure 7. UV-VIS spectra of different samples.The solid lines show absorbance data (left vertical axis), the dashed lines are the same data in a magnified scale (right vertical axis).Details about the individual datasets are provided in the text. | 6,109.6 | 2023-10-10T00:00:00.000 | [
"Chemistry",
"Physics"
] |
First-principles study on the helium migration energies in B12X2 (X=O, Si, P, As) crystals for neutron absorber use
ABSTRACT Boron-carbide-based materials (B12X2) with two-atom instead of three-atom chains have better ductility, which indicates that they may be better alternatives to nuclear absorber materials than B4C. In this study, we investigated the migration energy of neutron-induced helium interstitials using density functional theory calculations. As a result, we discovered that the migration energy of helium in B12Si2 and B12O2 is lower than that in B4C, which suggests that these materials might be better in inhibiting the introduction of helium gas and subsequent volume expansion during the neutron irradiation. Moreover, we found that B12P2 and B12As2 have isotropic helium migration barriers, while B4C, B12Si2, and B12O2 exhibited a strong anisotropy in the helium migrations.
Introduction
Boron carbide (B 4 C) is the leading candidate for a neutron absorber for next-generation fast reactors, which are entrusted with the important task of providing sustainable nuclear energy [1,2]. However, due to the helium gas produced from the 10 B[n, α] 7 Li reaction, the volume swelling of the B 4 C pellets during use has been a critical safety issue for the longterm use of the control rods [3][4][5]. Moreover, the brittleness of B 4 C makes the pellets very easy to break into small fragments under the internal stress induced by the heat gradient and helium gas production. These small fragments may enter the reserved narrow gaps between the absorber pellets and cladding tube, which would directly result in an early swelling-induced cladding failure. Considering the fact that the service life of the B 4 C control rods currently employed in the Japanese experimental fast reactor JOYO is far below the expected value [4,5], it is essential to develop a more reliable neutron absorber material that can withstand prolonged use in fast reactors.
Recently, with the use of theoretical quantummechanics calculations based on the density functional theory (DFT), An et al. [6] proposed a new idea for improving the ductility of the boron carbide materials by replacing the C-B-C chain with a more weakly bonded two-atom chain. The aim of this change is to avoid partial amorphization of the B 4 C grains under stress by enhancing the slip between the boron icosahedra. A series of investigations based on DFT calculations of (B 11 C p )Si 2 , (B 10 Si 2 )Si 2 , B 12 P 2 , and B 12 O 2 revealed that these compounds with two-atom chains instead of three-atom chains are not likely to form amorphous bands under stress, which normally lead to brittle failure [6][7][8][9]. Compared to B 4 C, these materials have similar boron content and improved ductility characteristics, and are therefore expected to outperform B 4 C as neutron absorbers for fast reactors by reducing the pellet fragmentation.
Meanwhile, carbon nanotubes (CNTs) are attracting increasing attention due to their excellent abilities to catalyze the recombination of radiationinduced defects and create exhaust paths for helium and other fission gases in their aluminum composites [10]. Therefore, the CNTs are worth investigating further to see if they have an inhibiting effect on the swelling of the B 4 C pellet by efficiently capturing and exhausting the helium gas produced from the 10 B(n, α) 7 Li reaction. Moreover, CNTs could be employed as supplement materials to the carbon content in the carbon-deficient boron compounds such as (B 11 C p )Si 2 , (B 10 Si 2 )Si 2 , B 12 P 2 , and B 12 O 2 , and provide a neutron-spectrum softening effect. In addition, adding CNTs can enhance the toughness of the B 4 C pellets. Hitherto, B 4 C/CNT composites with up to 10 vol% CNT, which can effectively improve the fracture toughness of the material through a fiber bridging effect, have been reported [11].
Since the helium atoms are generated inside the grains during the neutron irradiation, the efficiency of the CNT helium-atom capturing process is largely affected by the diffusivity of the helium atoms inside the grains. The helium diffusivity in these B 12 X 2 materials is an important indicator for predicting their performances as neutron absorbers. However, there are only few studies focusing on the effects of neutron irradiation on such materials. Moreover, neutron irradiation experiments are costly and require very long time.
Therefore, in this study, we employed DFT calculations as a preliminary study in order to predict the changes in the helium diffusivity by switching the C-B-C chains to Si-Si, P-P, O-O, and As-As chains. We investigated the helium migration energy in boron-carbide-like crystals that contain two-atom chains, including B 12 O 2 , B 12 Si 2 , B 12 P 2 , and B 12 As 2 , and compare the results to the helium migration energy in B 4 C, which has been already calculated in the previous study [12]. It is considered to be the most crucial factor in distinguishing the effectiveness of excluding helium gas from the grains.
Computational details
All calculations were performed using the Cambridge Sequential Total Energy Package (CASTEP) [13,14]. The Perdew-Burke-Ernzerh variant of the generalized gradient (GGA-PBE) approximation was used to determine the exchange-correlation potentials [15]. Moreover, the electron-ion interactions were described by ultra-soft pseudopotentials [16]. Structural optimizations of the supercells were performed using a plane-wave cutoff energy of 450 eV, and the Brillouin zone was sampled with a single k-point at Γ for the B 12 Si 2 , and a 2 × 2 × 2 Monkhorst-Pack k-point mesh for B 12 O 2 , B 12 P 2 , and B 12 As 2 to reach a total energy convergence of 0.01 eV per atom. One helium atom was doped into the interstitial sites of the 2 × 2 × 2 supercells of B 12 O 2 , B 12 Si 2 , B 12 P 2 , and B 12 As 2 , respectively. The cell parameters for the supercells with a helium interstitialtype defect were uniformed with the cell parameters of the aforementioned structure models with one helium atom sitting at the center of the supercell. The lattice parameters after geometry optimizations are shown in Table 1. The lattice parameters of the B 12 Si 2 structure showed a relatively large deviation from the experimental value obtained for B 4 Si due to the lower silicon content, while the results from all other calculations were consistent with the experimental values. Note that the silicon boride compound B 6 Si is reported to have a complex orthorhombic unit cell with 281 atoms [17]. The silicon borides with rhombohedral boron-carbide-like structures are considered to have extra silicon atoms that substitute boron atoms on the icosahedra [18][19][20]. In this study, we used the simplified B 12 Si 2 structure in order to avoid unnecessary uncertainties.
The formation energies of defects were calculated using the following equation: where E perf and E def are the total energies of the perfect structure model and defective structure model, respectively, and n He and μ He are the number change and chemical potentials of the helium atoms in the structure, respectively, with μ He being calculated from an isolated helium atom. The migration energies of helium interstitials were calculated by a transition state search between the decided start and ending defect structures using the method of complete linear synchronous transit/quadratic synchronous transit (LST/QST).
Interstitial configuration of He in the B 12 X 2 materials
Due to structural similarities, the possible interstitial sites of the B 12 X 2 crystals (except for B 12 O 2 ) were roughly the same as that of the B 4 C crystal [12,24], (Figure 1). The interstitial helium atom could be located on both sides of the hexagon, which was formed by four equatorial atoms from the two icosahedra and the two atoms of the chain end, and on the extension line of the two-atom-chain, named i h and i c , respectively. The space in the middle of the two-atom-chain, named i m , was also investigated. In contrast, the helium interstitials in the B 12 O 2 crystal were slightly different, the helium atoms seem not to be able to stably sit on the sites next to the hexagon. Instead, the helium atoms moved to the locations concentrated near the center of the two-oxygen-atom chains during the process of the geometry optimization. The distances between these final locations of the helium atoms to the middle points of the chains were less than 1 Å, and the total energy differences comparing to the defect structures with a helium atom located at the middle points of the chains were as low as 0.03 eV per supercell. Therefore, we did not make a distinction between the interstitial sites crowded in the middle of the chains in the calculations. Figure 2 shows a schematic representation of the helium interstitial sites in a B 12 O 2 supercell. The sites inside the boron-icosahedra were ignored in this study because these sites have been previously reported as unfavorable for helium atoms [12,24,25]. The results from the calculations of the formation energies for the helium interstitials are listed in Table 2.
In the B 12 P 2 and B 12 As 2 crystals, the formation energies of helium interstitial atoms at the chain center sites (i m ) were approximately twice as much as those of other sites, which indicates that the i m sites in B 12 P 2 and B 12 As 2 were also unfavorable helium interstitial sites. Hence, we excluded them from the migration path calculations. In contrast, the i m sites of the helium interstitials in the B 12 Si 2 and B 12 O 2 crystals exhibited similar, relatively low formation energies to those of other sites. Moreover, without any exceptions, the lowest formation energies of the helium interstitial defects in these B 12 X 2 crystals were lower than those in the B 4 C crystal, which indicated that the helium atoms inside the B 12 X 2 crystals could migrate to the lattice at a lower temperature than those in B 4 C.
Migration paths and energy barriers
By observing the distribution of stable helium interstitial sites in the B 12 X 2 crystals, we found that there were two migration paths to be investigated in order to understand the characteristics of the helium migration in different directions. One of these paths was parallel to the (111) plane, while the other one was along the [111] direction. The two migration paths and the respective calculated energy barriers are illustrated in Figure 3. The migration energy barriers for each path are presented in Table 3. Due to the fact that according to our calculations, the helium interstitials at the i m sites Figure 1. The schematic representation of helium interstitial sites (in white balls) around the two-atom-chain sits at the center of a 2 × 2 × 2 supercell of B 12 Si 2 , B 12 P 2 , or B 12 As 2 . The unnecessary atoms were hidden for easy observation. The green balls stand for boron, the blue balls are the "X" elements [27]. in the B 12 Si 2 crystal were energetically more favorable than those at the i h sites, it was assumed the migrations between two i h sites would pass through the closest i m sites, which are not shown in Figure 3.
However, this does not affect the results from the calculations of the migration barriers. Schneider et al. [12] performed calculations on the barriers for helium migration in the B 4 C crystal, and reported the one for the path parallel to the (111) plane as 1.21 eV and that for the path along the [111] axis as 2.22 eV. The energy barrier for the helium interstitials migrating via the path parallel to the (111) plane in the B 12 O 2 crystal was calculated to be 1.15 eV, and that for the helium interstitials migrating via the path along the [111] axis in the B 12 Si 2 crystal was found to be 1.18 eV. This indicated that the helium interstitials could migrate faster in B 12 O 2 and B 12 Si 2 than in B 4 C. Furthermore, due to the strong anisotropy of the migration barriers along the various directions, microstructure optimization, i.e. the fabrication of pellets of B 12 Si 2 and B 12 O 2 with the majority of grains aligned in the same direction, could be an effective method to improve the performance of excluding helium atoms from the crystals. In addition, since the rhombohedral B 4 Si phase (the structure was simplified to B 12 Si 2 in this study) decomposes into orthorhombic B 6 Si and Si at high temperatures (1100-1390°C [26]), more experimental data are needed in order to assess the effect of this decomposition on the reactor safety when the material is employed in control rods.
In contrast to B 12 O 2 and B 12 Si 2 , the migration of helium interstitials in the B 12 P 2 and B 12 As 2 crystals was found to be isotropic. This was indicated from the fact that the helium interstitial atoms would need to overcome energy barriers of 1.65 and 1.54 eV in B 12 P 2 and B 12 As 2 , respectively, between the i h sites, in order to migrate in any direction. It is expected that the flat disk-like helium bubbles may not appear in the neutron-irradiated/helium-implanted B 12 P 2 and B 12 As 2 crystals since they should be initiated by the anisotropic helium diffusion.
Conclusions
In summary, the migration barriers of helium interstitial atoms in the B 12 X 2 crystals (X = O, Si, P, As) were studied using DFT calculations. The results from the DFT analysis indicated a low helium migration barrier in the direction along the [111] axis of B 12 Si 2 and the path parallel to the (111) plane of B 12 O 2 . Hence, it is expected that the grain-orientation-optimized pellets of B 12 Si 2 and B 12 O 2 would exhibit a better performance of excluding helium gas than that of B 4 C. In contrast to B 4 C, B 12 Si 2 , and B 12 O 2 , the helium migration barriers of B 12 P 2 and B 12 As 2 were isotropic, which could result in different morphology in the neutron-induced helium bubbles.
All in all, our work suggests that the B 12 Si 2 and B 12 O 2 crystals would potentially have a better performance in inhibiting the volume expansion due to the faster helium exhaust. However, further experiments are needed in order to perform a comprehensive evaluation of the ability of B 12 X 2 crystals to substitute B 4 C as neutron absorbers in fast reactors. | 3,487.4 | 2018-04-03T00:00:00.000 | [
"Physics",
"Materials Science"
] |
In-line evanescent-field-coupled THz bandpass mux/demux fabricated by additive layer manufacturing technology
In this research, we present the design, fabrication, and experimental validation of 3D printed bandpass filters and mux/demux elements for terahertz frequencies. The filters consist of a set of in-line polystyrene (PS) rectangular waveguides, separated by 100 μm, 200 μm, and 400 μm air gaps. The principle of operation for the proposed filters resides in coupled-mode theory. Q-factors of up to 3.4 are observed, and additionally, the experimental evidence demonstrates that the Q-factor of the filters can be improved by adding fiber elements to the design. Finally, using two independent THz broadband channels, we demonstrate the first mux/demux device based on 3D printed in-line filters for the THz range. This approach represents a fast, robust, and low-cost solution for the next generation of THz devices for communications. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal
Introduction
Additive manufacturing technology, also known as 3D printing, is a valuable technique for the fabrication of complex optical components for the terahertz (THz) range of frequencies [1,2]. The impact of such devices has spread across a wide range of areas, including skin characterisation [3], microscopy [4], polarization control [5], photonic crystals [6,7], beam formation [8,9], and THz fibers [10,11], just to name few. The success of rapid prototyping in the THz area is due to the ability to create complex structures within a few minutes, giving researchers the freedom to explore exotic geometries [12]. Due to the increasing interest in applications of THz light, devices able to manipulate this radiation are becoming in higher demand, and fortunately, 3D printing offers a solution. Bandpass filters are some of the key components needed for the manipulation of THz light. Designs to produce them hitherto incorporate fundamental principles from guided-mode resonances [13], Bragg resonators [11,14], and two-dimensional periodic structures [6] among others [1,2,15]. Coupled mode waveguides have proved useful in the microwave regime [16][17][18], and recently this technology has been demonstrated at THz frequencies [19,20]. For the microwave region, micro-strip technologies are used; however, this method is not suitable for the THz range. In this paper, coupled in-line 3D printed filters are presented as a solution for manipulating THz light. These filters give a Q-factor enhancement depending on the number of fibers concatenated in the design. Finally, we use two filters to give the first demonstration of THz multiplexing/demultiplexing using 3D printed co-directional dielectric waveguides, proving that this approach can be used not only to filter THz radiation but also to combine/separate the frequency components of two independent broadband input channels. This method, therefore, represents a valuable approach for the next generation of passive photonic devices for the THz band, making it highly attractive for research in the field of communications.
Design and fabrication
The devices were made out of fused polystyrene filaments (n PS =1.56 at 0.12 THz [19,21]) with a cross-section of 1.2 mm by 1.2 mm and were fabricated using the Ultimaker3 3D printer with a vertical resolution of 0.1 mm and a nozzle hole of 0.4 mm. The filter consists of two to five rectangular waveguides (cross-section 1.2 mm by 1.2 mm) separated laterally by 100 µm, 200 µm and 400 µm air gaps (a indicated in Fig. 1) and shifted longitudinally by 25 mm (overlapping length). A schematic diagram of the printed filter is shown in Fig. 1 along with a photograph of the printed filters and a single fiber element. The principle of operation of the dielectric filters relies on the coupling mode theory [22,23]. When two dielectric waveguides are placed in close proximity, the light propagating originally in one waveguide can leak completely to the other via mode coupling between the lowest odd and even modes [22,23]. The transcendental equations for the propagation of the even and odd modes in the (K 1 d, γd) space are given by: tanh tanh (0.5γa) = 0, (2) where d=1.2 mm is the waveguide cross-section, K 1 and γ are the propagation constant and the extinction coefficient, respectively [22,23]. We solve Eq. (1) and Eq. (2) to find the solutions for the propagation of the even and odd modes respectively in the (K 1 d, γd) space. In our design, the propagation constant for the even/odd modes β even/odd can be found using β 2 odd/even = K 2 1 −(n PS k) 2 , where k is the propagation constant in vacuum [22,23]. Then, the coupling length is defined as the distance in which the phase difference between the lowest even and odd is exactly π [22,23], i.e. L c = π β even − β odd .
From Eqs. (1) and (2) it is clear that β even/odd as well as L c depend on the separation between fibers (a) for a given frequency. In our design shown in Fig. 1, we define the overlapping length (L o ) as the longitudinal distance in which the light inside one fiber can interact with the neighbour fiber. This distance has been kept fixed to 25 mm. Only the light with a frequency that meets the condition L c (a, f ) = L o , for a given spacing (a) and frequency (f), will be propagated efficiently from the input port to the output port. This phenomenon is used for band-pass filtering. In Fig. 2(a) the coupling length for frequencies between 110 GHz to 200 GHz and separation between fibers of 0.1 mm to 0.4 mm is plotted as a surface, along with the plane at L o =25 mm (horizontal yellow plane). The intersection between the surface and the plane (white curve) gives L c (a, f ) = L o , indicating the frequencies able to propagate inside the in-line structure at different spacings. Figure 2(b) is a top view of (a) with the intersection line. This highlights that the filters have a centre frequency of operation close to 120 GHz, 160 GHz and 180 GHz for the 400 µm, 200 µm and 100 µm gaps respectively. These values were also confirmed by numerical simulations. In Fig. 3, numerical simulations for the electric field intensity are shown for the 200 µm spacing filter. The numerical results were carried out using COMSOL Multiphysics and the Electromagnetic Waves, Frequency Domain (EWFD) module. The discretization of the mesh was λ/10 in order to ensure accuracy and scattering boundary conditions were employed. The array in this simulation consists of four fibers. From the numerical results, the performance of the filter is explained as follows: only the frequency of 160 GHz fulfils the condition L c (200 µm) = L o , then, the light can be coupled within every adjacent fiber and guided to the exit of the filter as shown in Fig. 3(b). However, when the coupling length of the incident beam does not match the condition, a significant fraction of the light is leaked out at the end of every fiber. From these results, it is clear that a considerable amount of energy is lost at the end of the first fiber, while the rest of the energy is lost at every subsequent fiber. Additionally, in light of these results, it is intuitive to notice that adding more fiber steps in the design will help to remove the non-resonant frequencies more efficiently. This process increases the Q-factor of the resultant filter. To demonstrate experimental evidence of this process, the following variables are investigated: three different spacings (100 µm, 200 µm and 400 µm) and four different numbers of fibers (two, three, four and five). A total of twelve filters were studied, and the transmitted spectra are analysed. Each of the twelve fibers was printed with the same length and same cross-section and was measured individually to meaningfully compare the transmitted light through a single fiber and the filters (array of fibers).
Experimental results
For the experimental procedure, the fiber-coupled TeraSmart THz spectrometer from Menlo systems was used. The system emits short THz pulses with a temporal duration of approximately 1 ps exhibiting a frequency content up to 5 THz with a dynamic range of 90 dB. The schematic of the experimental setup is presented in Fig. 4. The THz radiation was coupled into the filters using two pairs of TPX lenses with a 1.5-inch diameter and 50 mm focal length. Additionally, two metallic apertures were used in order to hold the filter in focus and to prevent direct transmission from the transmitter to the receiver. The spectra were calculated by taking the absolute ratio of the spectra transmitted through the twelve filters FFT(E samp (t)) and the spectra transmitted through the twelve single rectangular fiber elements with the same filter length, labelled L in Fig. 1 FFT(E ref (t)) . This equation highlights the propagation losses the multi-fibers filter may have relative to the single fiber case. From Fig. 5(b), it is clear that the bandpass region becomes narrower and has a higher Q-factor as the number of fibers in the filter increases. To completely eliminate frequencies with small coupling efficiency, the addition of more fibers is necessary as the losses will be multiplied in every coupling such that the signals will become negligible at the output channel. Therefore, the higher the number of fibers, the lower the transmission of low efficiency frequencies. When five fibers are used, the transmission spectra for three different spacings (100 µm, 200 µm and 400 µm) are all higher than 90%, relative to the single fiber transmission case, as shown in Fig. 5(a). In order to quantify the improvement of the Q-factor as a function of the number of fibers, a Gaussian curve was fitted to the transmission spectrum of every filter. The Q-factor has been calculated using the formula, Q = f c ∆f , where f c is the central frequency (resonance frequency) and ∆f is the full-width-half-maximum of the bandpass region (bandwidth) of the fitted Gaussian curves. These results are shown in Fig. 6(b). The linear fits to the Q-factor as a function of the number of fibers show that for the 100 µm, 200 µm, 400 µm the increases in the Q-factor per number of fibers are 0.61, 0.62 and 0.44, respectively. In order to fabricate a bandpass filter with a narrower bandwidth region, additional fibers have to be incorporated in the design (more than five in this case). However, for this study, we were limited to a maximum of five fibers as adding more fibers would delay the THz radiation for more than 850 ps, which is the temporal limit of detection of the TeraSmart THz system. To complement this study, an additional measurement was taken for the transmitted THz light through free space, i.e. the filter shown in Fig. 4 was removed, the two TPX lenses were arranged in a confocal geometry and then the THz radiation was recorded. The ratio of the transmitted THz light for the five-fiber filters and single fiber relative to the free space transmission is shown in Fig. 6(b). This plot indicates that transmission efficiencies up to 50% are achieved for these devices relative to free space. The results also show all the losses in the device. These losses are caused by impedance mismatches, scattering, material absorption and coupling efficiencies which represent 50% of the incident E-field. Finally, in Fig. 7, experimental validation of a proposed mux/demux device is presented. The proposed mux/demux consists of two in-line bandpass filters with different spacings (100 µm and 400 µm), separated by a unique fiber used to couple in or out the THz light. A schematic diagram is shown in the insets of Fig. 7. Fig. 7. Experimental transmission spectra, compared to the air-reference, for the two filter devices acting as a: (a) demultiplexer and (b) multiplexer. The inset represents the schematic diagram of the mux/demux device as well as the experimental configuration. The spacing between fibers was 100 µm and 400 µm.
A broadband THz light beam was inputted through the single fiber, as shown in the inset Fig. 7(a). Afterwards, placing the receiver at the end of one of the two out ports, two different band regions were recorded. These bands are shown in Fig. 7(a) after passing through the 100 µm arm (red) and the 400 µm arm (blue). Conversely, moving the receiver to the single output, as shown in the inset of Fig. 7(b), while the emitter is aligned in every one of the two input ports, the two spectral regions presented in Fig. 7(b) were recorded. For the first case [ Fig. 7(a)] this new geometry performs the function of a demultiplexer (separation of frequencies), meanwhile, for the second case [ Fig. 7(b)], the device acts as a multiplexer (different frequencies combined in a single output line). Close to fifty-fifty splitting is obtained with this configuration, similar to other mux/demux approaches [20,[24][25][26][27]. Additionally, with this approach, the frequency splitting does not require movable parts [25]. Finally, in order to show insight into a practical application of the proposed printed mux/demux, two independent THz broadband input channels are aligned with the input tips of the fibers, as shown in Fig. 8(b). To the best of our knowledge, this is the first study using two independent THz broadband channels for the analysis of a 3D printed mux/demux element using in-line geometry. For this experiment, a minor modification was made to our original design of the mux/demux. This time, one of the input fibers was printed with a curved end to couple the second emitter avoiding cross-talking. It is important to highlight that curve geometry may incur in additional propagation losses. The radius of curvature used here is 17.5 mm, representing additional transmission losses around 1% [19]. A picture of the new design is shown in Fig. 8. Three measurements were taken; first blocking input A while input B is open, then opening input A and keeping input B blocked and finally, both inputs unblocked.
From the transmitted spectra, two spectral regions are evident for the (A=1, B=0) (open diamonds) and (A=0, B=1) (open squares) cases, proving efficient separation of the broadband input spectra. Furthermore, for the two inputs open case (A=1, B=1), the transmitted spectral region now covers a wider region enclosing the two previous spectral bands (dashed black line). Finally, the (A=0, B=1) and (A=1, B=0) curves were numerically added, and the result is represented by the open circles curve. The resulting curve is seen to be in good agreement with the previous result for the (A=1, B=1) case. The experimental evidence demonstrates that the proposed printed geometry effectively acts as a multiplexer of two independent input channels, and has valuable characteristics for the fabrication of the next generation of devices for THz communications.
Conclusions
The performance of a bandpass filter fabricated by 3D printing for the THz range of frequencies is presented here. The principle of operation of the filter is based on coupled-mode theory and the operation frequency of the proposed filter is highly dependent on the spacing between fibers. The experimental results indicate that the Q-factor of the filter can be improved by adding more fiber stages into the design. This is also supported by numerical calculations. Incremental improvement in the Q-factor as a function of the number of fibers was found to be 0.62 units per fiber for the 200 µm spaced filter. Furthermore, connecting two filters by a single fiber, was demonstrated to be a powerful method to multiplex and demultiplex broadband input channels. We envisage that tunable 3D printed mux/demux devices will be able to dynamically change the spacing between fibers making it possible to tune the frequency band of operation. Due to the high flexibility, robustness and low cost given by additive layer manufacturing techniques, we anticipate that this device will be highly valuable for the next generation of THz communication systems.
Funding
Royal Society Wolfson Merit Award; Engineering and Physical Sciences Research Council (EP/S021442/1).
Disclosures
The authors declare no conflicts of interest. | 3,739.4 | 2020-09-15T00:00:00.000 | [
"Physics"
] |
Distributed Consensus of Nonlinear Multi-Agent Systems on State-Controlled Switching Topologies
This paper considers the consensus problem of nonlinear multi-agent systems under switching directed topologies. Specifically, the dynamics of each agent incorporates an intrinsic nonlinear term and the interaction topology may not contain a spanning tree at any time. By designing a state-controlled switching law, we show that the multi-agent system with the neighbor-based protocol can achieve consensus if the switching topologies jointly contain a spanning tree. Moreover, an easily manageable algebraic criterion is deduced to unravel the underlying mechanisms in reaching consensus. Finally, a numerical example is exploited to illustrate the effectiveness of the developed theoretical results.
Introduction
Recent years have witnessed a growing interest in the consensus problem of multi-agent systems in system and control community.A lot of effort has been made to design distributed control law for each agent such that the system as a whole can perform complex tasks in a cooperative manner.The distinguishing feature of such control law lies in its lack of global information while aiming to cooperate with all agents.In this paper, we deal with the consensus problem of multi-agent systems with nonlinear dynamics and switching topologies jointly containing a spanning tree.
Over the past decade, many well-known results on consensus have been reported in [1][2][3][4][5][6][7], to name just a few.Based on algebraic graph theory, Olfati-Saber and Murray [1] discussed the consensus problem for networked single-integrator agents over directed fixed and switching topologies with communication time-delays.Following this work, the consensus problem has been recently investigated from various perspectives, for example, system with second-order dynamics [8,9], nonlinear agent dynamics [10,11], time-delays [12,13], quantization [14], saturation [15], etc.However, most of the aforementioned works were predominantly concerned with the multi-agent systems under fixed communication topologies.
In practical application, the interaction topology among agents may change dynamically due to the limited sensing regions of sensors or effect of obstacles.Different assumptions on the switching topologies for multi-agent systems have been explored in recent years [16][17][18].By assuming that the switching topologies keep connected or contain a spanning tree at every time, a heap of results have been reported [19][20][21][22][23].However, it is impractical to impose the connectivity condition on all possible topologies.Thus, seeking feasible while less restrictive condition on the switching topologies becomes a mix of diverse challenging, yet interesting topic.In discrete-time setting, Jadbabaie et al. [24] provided a simple consensus protocol for Vicsek's model [25], which was analyzed theoretically by exploiting properties of products of stochastic matrices under jointly connected topologies.The result was later extended in [26] to the case of directed graphs where conditions for consensus under switching interaction topologies were presented.For continuous-time systems, Hong et al. [27] proposed a local control strategy for multi-agent systems with jointly connected topologies.In [28][29][30], the switching communication topologies were assumed to be governed by continuous-time homogeneous Markov processes, whose state space corresponds to the communication patterns.The authors of [31][32][33] considered continuous-time multi-agent systems under jointly connected topologies, which had less constraints on each possible topology.However, these results are quite conservative in the sense that the underlying topology of the system switches without concerning the current states of the multi-agent systems.
Inspired by the above discussion, this paper aims to investigate the leadless consensus problem of multi-agent systems with Lipschitz nonlinear dynamics over state-controlled switching topologies.Relevant work can be found in [34], where the author studied leader-following consensus for double-integrator-based multi-agent systems under jointly connected topologies.In this paper, all possible topologies are allowed to be disconnected, and only jointly contains a spanning tree is required for the system to achieve consensus.By using the state transformation method, the consensus problem becomes a stability problem of a nonlinear switched system.Then, based on the Lyapunov stability approach, the consensus of the considered system is proved to be achieved with a prescribed consensus error.The contribution of this paper can be ascribed as follows: (1) Inspired by the stabilizing switching theory [35], we design a state-controlled switching law for the considered switching topologies.To avoid the switching signal from chattering, a new mechanism is introduced, and then, the low bound of dwell time of switching topologies is explicitly calculated.This is neglected in [34]; therefore, the controllers therein may suffer from chattering.(2) The dynamics of multi-agent system incorporates nonlinearities, which have been less reported in the literature, especially when the topologies are assumed to jointly contain a spanning tree.This paper attempts to explore the consensus of multi-agent systems with both nonlinear dynamics and state-controlled switching topologies, which thus constitutes a necessary complement to the existing literature.
The rest of the paper is organized as follows.In Section 2, some preliminaries on algebraic graph theory and model formulation are given.Sufficient conditions are given to ensure consensus of first-order multi-agent systems in Section 3. In Section 4, we give a numerical example to illustrate the proposed protocol.Conclusions are drawn in Section 5.
Throughout the paper, the following notations are adopted for the ease of presentation.R n is the n-dimensional Euclidean space and R n×n stands for the set of n × n real matrix.I n and O n are n × n identity and zero matrices, respectively.diag{x 1 , x 2 , . . ., x m } denotes the diagonal matrix with diagonal elements x 1 to x m .• refers to the Euclidean vector norm and the induced matrix norm.
Preliminaries and Problem Statement
which means node i has access to the information of j.The element α ij in A is decided by the edge between i and j, i.e., e ij ∈ E ⇔ α ij > 0; otherwise α ij = 0.The set of neighbors of node i is denoted by If there exists at least one node (called the root) having directed path to any other nodes, the digraph is said to have a spanning tree.
To depict the varying topologies, let Ḡ = {G m = (V, E m , A m )|m ∈ M} denote the collection of all possible digraphs on the same node set V = {1, . . ., n}, and M = {1, . . ., M} be the index set of possible topologies, where M is the number of possible topologies.Then, the underlying graph at time t can be denoted by G σ(t) , where σ(t) is a piecewise constant switching function defined as σ(t) : [0, +∞) → M. It is assumed that σ(t) switches finite times in any bounded time interval.For a collection Ḡ of digraphs, its union digraph is defined as . Moreover, we say that the collection Ḡ jointly contains a spanning tree if its union digraph G u has a spanning tree.
Consider a multi-agent system consisting of n agents.The dynamics of each agent is where x i ∈ R is the state of agent i, f (x i , t) is a nonlinear function describing the self-dynamics of agent i, and u i is the control input.
Assumption 1.The nonlinear function f (x, t) satisfies the Lipschitz condition with the Lipschitz constant ρ, i.e., Assumption 2. The switching topologies G σ(t) jointly contain a spanning tree.
For system Equation (1), we consider the following control input for the ith agent: Hence, the closed-loop system can be rewritten in compact form as where Here, we introduce a state transformation for system Equation ( 3) where E = [−1 n−1 I n−1 ] so that system Equation (3) can be rewritten in the following reduced-order form with respect to ξ ξ where Definition 1.The consensus error ξ(t) ∈ R n−1 is uniformly ultimately bounded (UUB) if there exists a bound B and a time t f (B, ξ(t 0 )), which are independent of t 0 ≥ 0, such that ||ξ(t)|| ≤ B for ∀t ≥ t 0 + t f .
Remark 1.
From the structure of transformation matrix E, we know that ξ is the indicator of the consensus performance of multi-agent system Equation (1).That is, the system Equation (1) achieves consensus if and only if ξ(t) = 0 of Equation ( 4) is asymptotically stable.In what follows, ξ is called the consensus error of the system.When ξ(t) is UUB, x i (t) is bounded within a bounded neighborhood of x 1 (t) for i = 2, 3, . . ., n and t ≥ t 0 + t f .Thus, this depicts an intuitive notion of "close enough" consensus.
Lemma 1. [22]
Let L 1 , L 2 , . . ., L M be the Laplacian matrices associated with the digraphs G 1 , G 2 , . . ., G M , respectively, then − ∑ M m=1 Lm is Hurwitz stable if and only if the union of digraph G u of these graphs contains a spanning tree.Lemma 2. [36] For any two real vectors x, y ∈ R n and positive definite matrix Φ ∈ R n×n , we have 2x T y ≤ x T Φx + y T Φ −1 y.
Main Results
In this section, we first design a stabilizing switching law for multi-agent system Equation (1).Then, the main result of this paper will be presented with the help of the above preliminary knowledge.
Switching Law Design
Define average matrix and − L0 is Hurwitz.As a result, the following Lyapunov equation has a positive definite solution Q.
Define auxiliary matrices Lm (m ∈ M) as follows: where argmax stands for the index which reaches the maximum among M.If there is more than one index, we choose the minimum index.Then, we define the switching instant and index sequences recursively by where r m ∈ (0, 1).
Proof.Assume t c and t c+1 are two consecutive switching time instants.By the property of the protocol that we design the switching instants, we have Firstly, let us consider the case Here, we define an auxiliary function It follows from ( 2) and (4) that Calculating the derivative of w(t) along time, we get Now, we denote and f and ν 3 = − Lσ(t c ) + I n .
Combining with (3) yields which together with the fact that (kϑ Next, suppose that Equation (8) does not hold, which means that there is a t * ∈ [t c , t c+1 ) satisfying From the system Equation ( 5), we have which is equivalent to Based on the property of exponential function and norm, there is a positive number ν 4 such that Suppose that As a result, which contradicts the inequality Equation (11).Hence, we get Combining the above discussions shows that for any consecutive switching time instants t c and t c+1 , we have which imposes a lower bound for the dwell time of switching signal.This means the switching signals are well-defined.
Remark 2. According to [35], a "good" switching signal should guarantee a positive dwell time and avoid fast switching.In the switching law Equations ( 6) and ( 7), we fix a threshold value for switching, which can prevent the switching signal σ(t) from chattering.However, there is a trade-off between the precise of consensus and frequency of switching due to such a threshold value.Specifically, a smaller may lead to a smaller dwell time, which implies high frequency switching, while a larger may bring about a larger consensus error which is undesirable.Similarly , there is also a trade-off between the control gain and frequency of switching due to parameters r m in Equations ( 6) and (7).Specifically, smaller r m may result in larger control gain, which can be seen in the upcoming Theorem 1, while larger r m may bring about high frequency switching.The existence of these two trade-offs suggests that the choice of these parameters should achieve a balanced interplay between consensus performance and feasibility of control protocols.
Consensus Analysis
Theorem 1.Consider the multi-agent system Equation (1) under Assumptions 1 and 2. Adopt the designed switching law in Equations ( 6) and (7).Then, by employing control protocol Equation (2) and selecting control gain such that where λ(Q) denotes the maximum eigenvalue of Q and r = min{r 1 , r 2 , . . ., r M }, the consensus error ξ(t) is UUB.That is, all agents reach consensus with a bounded error .
Proof.As the switching topologies jointly contain a spanning tree, − L is Hurwitz stable by Lemma 1, and the switching law is well-defined in Equations ( 6) and (7).Here, we consider the Lyapunov function candidate as V(t) = ξ T Qξ.In case of ||ξ|| > , by calculating the derivative of V(t) along the trajectory of system Equation ( 4), we have Therefore, V(t) is strictly decreasing during each time interval.This, together with the fact that V(t) is continuous, implies the consensus error satisfies lim t→∞ ||ξ(t)|| ≤ , which means the consensus error ||ξ(t)|| is UUB.This completes the proof.Remark 3. In [21], first-order nonlinear multi-agent system was investigated, where general algebraic connectivity needs to be calculated to design the control parameter.However, the general algebraic connectivity of a graph is not easy to obtain, especially when the network size is large.Here, we provide a novel method to design the control parameter to realize consensus.In addition, we allow the underlying topology to be disconnected all the time, which cannot be analyzed by the technique in [21].
Remark 4. In [18,22,23], consensus problems of multi-agent systems with nonlinear dynamics under switching topologies were considered.However, a common assumption of these works on the switching topologies is all the topologies are required to be connected or having a spanning tree.In [17], this assumption is relaxed where consensus of multi-agent systems was achieved without requiring the topology having a spanning tree all the time.However, these results are quite conservative in the sense that the underlying topology of the system switches without concerning the current states of the multi-agent systems.In this paper, another perspective to solve the consensus problem without requiring each possible topology containing a spanning tree is provided.The designed topology switching law arranges the underlying topology by taking states of agents into consideration, which is efficient.
Remark 5. When the consensus of the systems is achieved, the state of agents in the system is determined by the nonlinear function.Therefore, denoting the consensus state of the system by s(t) (i.e., the trajectory of ṡ(t) = f (s(t), t)), which can be any desired state: an equilibrium point, a nontrivial periodic orbit, or even a chaotic orbit in some applications.
Numerical Simulations
In this section, we present numerical simulations to demonstrate the effectiveness of theoretical results.For simplicity, we only consider the multi-agent systems consisting of ten agents labeled 1 through 10 and assume all weights of edges between agents are 0 or 1.
Consider the consensus of multi-agent system Equation (1) with the communication topology switching in a collection Ḡ = {G 1 , G 2 , G 3 }, as shown in Figure 1.Note that each digraph in Figure 1 does not contain a spanning tree, but the union digraph of them contains a spanning tree.The inherent nonlinear dynamics is given as f (x i , t) = 0.1x i cos(t).By Theorem 1, when the feedback gain k > 20.5456, the consensus of the system is achieved uniformly ultimately bounded under the designed state-controlled switching topologies.Figure 2 shows the states of the closed-loop system with k = 21 and = 0.1.We can see that the ten agents achieve convergence with bounded error although none of digraph G m contains a spanning tree.Figures 3 and 4 present the switching signal and the consensus error ||ξ(t)||, respectively.Switching signal σ(t) To demonstrate the merit of fixing a threshold for switching, we also consider the case that = 0 under the same setting as stated above.The states of agents, switching signal and consensus error are shown in Figures 5-7, respectively.From this comparison simulation, we can find that the multi-agent system reaches consensus precisely, while the interaction topology switches much more times than the case with = 0.1.
Conclusions
In this paper, we have investigated the leaderless consensus problem with two practical constraints: (i) The system includes intrinsic nonlinear dynamics; (ii) The switching topology may not contain a spanning tree at any time.We introduced a variable transformation to facilitate the consensus analysis, which shows great potential in solving the considered consensus problem.By designing a state-controlled switching law, the consensus problem has been solved under the assumption that the switching topologies jointly contain a spanning tree.The choice of parameters in the switching law allows us to balance the consensus performance with the feasibility of control protocols.Nevertheless, the nonlinearities in this work are assumed to be Lipschitz-type, which bring about some conservations.In addition, another drawback of this work is that some global information is used in the designed switching law.We believe that the results of this paper could be largely improved if general nonlinearities are considered and the topology switching law depends only on local information, which is still an open issue and will be the object of our future work.
Figure 5 .Figure 6 .Figure 7 .
Figure 5. State x i (t) under the state-controlled switching topologies without a threshold.
State x i (t) under the state-controlled switching topologies with a threshold = 0.1. | 3,980.2 | 2016-01-18T00:00:00.000 | [
"Mathematics"
] |
On the subgroup structure of the hyperoctahedral group in six dimensions
The subgroup structure of the hyperoctahedral group in six dimensions is studied, with particular attention to the subgroups isomorphic to the icosahedral group. The orthogonal crystallographic representations of the icosahedral group are classified, and their intersections are studied in some detail, using a combinatorial approach which involves results from graph theory and their spectra.
Introduction
The discovery of quasicrystals in 1984 by Shechtman et al. has spurred the mathematical and physical community to develop mathematical tools in order to study structures with noncrystallographic symmetry.
Quasicrystals are alloys with five-, eight-, ten-and 12-fold symmetry in their atomic positions (Steurer, 2004), and therefore they cannot be organized as (periodic) lattices. In crystallographic terms, their symmetry group G is noncrystallographic. However, the noncrystallographic symmetry leaves a lattice invariant in higher dimensions, providing an integral representation of G. If such a representation is reducible and contains a two-or three-dimensional invariant subspace, then it is referred to as a crystallographic representation, following terminology given by Levitov & Rhyner (1988). This is the starting point to construct quasicrystals via the cut-and-project method described by, among others, Senechal (1995), or as a model set (Moody, 2000).
In this paper we are interested in icosahedral symmetry. The icosahedral group I consists of all the rotations that leave a regular icosahedron invariant, it has size 60 and it is the largest of the finite subgroups of SOð3Þ. I contains elements of order five, therefore it is noncrystallographic in three dimensions; the (minimal) crystallographic representation of it is sixdimensional (Levitov & Rhyner, 1988). The full icosahedral group, denoted by I h , also contains the reflections and is equal to I Â C 2 , where C 2 denotes the cyclic group of order two. I h is isomorphic to the Coxeter group H 3 (Humphreys, 1990) and is made up of 120 elements. In this work, we focus on the icosahedral group I because it plays a central role in applications in virology (Indelicato et al., 2011). However, our considerations apply equally to the larger group I h . Levitov & Rhyner (1988) classified the Bravais lattices in R 6 that are left invariant by I : there are, up to equivalence, exactly three lattices, usually referred to as icosahedral Bravais lattices, namely the simple cubic (SC), body-centred cubic (BCC) and face-centred cubic (FCC). The point group of these lattices is the six-dimensional hyperoctahedral group, denoted by B 6 , which is a subgroup of Oð6Þ and can be represented in the standard basis of R 6 as the set of all 6 Â 6 orthogonal and integral matrices. The subgroups of B 6 which are isomorphic to the icosahedral group constitute the integral representations of it; among them, the crystallographic ones are those which split, in GLð6; RÞ, into two three-dimensional irreducible representations of I. Therefore, they carry two subspaces in R 3 which are invariant under the action of I and can be used to model the quasiperiodic structures.
The embedding of the icosahedral group into B 6 has been used extensively in the crystallographic literature. Katz (1989), Senechal (1995), Kramer & Zeidler (1989), Baake & Grimm (2013), among others, start from a six-dimensional crystallographic representation of I to construct three-dimensional Penrose tilings and icosahedral quasicrystals. Kramer (1987) and Indelicato et al. (2011) also apply it to study structural transitions in quasicrystals. In particular, Kramer considers in B 6 a representation of I and a representation of the octahedral group O which share a tetrahedral subgroup, and defines a continuous rotation (called Schur rotation) between cubic and icosahedral symmetry which preserves intermediate tetrahedral symmetry. Indelicato et al. define a transition between two icosahedral lattices as a continuous path connecting the two lattice bases keeping some symmetry preserved, described by a maximal subgroup of the icosahedral group. The rationale behind this approach is that the two corresponding lattice groups share a common subgroup. These two approaches are shown to be related (Indelicato et al., 2012), hence the idea is that it is possible to study the transitions between icosahedral quasicrystals by considering two distinct crystallographic representations of I in B 6 which share a common subgroup.
These papers motivate the idea of studying in some detail the subgroup structure of B 6 . In particular, we focus on the subgroups isomorphic to the icosahedral group and its subgroups. Since the group is quite large (it has 2 6 6! elements), we use for computations the software GAP (The GAP Group, 2013), which is designed to compute properties of finite groups. More precisely, based on Baake (1984), we generate the elements of B 6 in GAP as a subgroup of the symmetric group S 12 and then find the classes of subgroups isomorphic to the icosahedral group. Among them we isolate, using results from character theory, the class of crystallographic representations of I. In order to study the subgroup structure of this class, we propose a method using graph theory and their spectra. In particular, we treat the class of crystallographic representations of I as a graph: we fix a subgroup G of I and say that two elements in the class are adjacent if their intersection is equal to a subgroup isomorphic to G. We call the resulting graph G-graph. These graphs are quite large and difficult to visualize; however, by analysing their spectra (Cvetkovic et al., 1995) we can study in some detail their topology, hence describing the intersection and the subgroups shared by different representations.
The paper is organized as follows. After recalling, in x2, the definitions of point group and lattice group, we define, in x3, the crystallographic representations of the icosahedral group and the icosahedral lattices in six dimensions. We provide, following Kramer & Haase (1989), a method for the construction of the projection into three dimensions using tools from the representation theory of finite groups. In x4 we classify, with the help of GAP, the crystallographic representations of I. In x5 we study their subgroup structure, introducing the concept of G-graph, where G is a subgroup of I.
Lattices and noncrystallographic groups
Let b i , i ¼ 1; . . . ; n be a basis of R n , and let B 2 GLðn; RÞ be the matrix whose columns are the components of b i with respect to the canonical basis fe i ; i ¼ 1; . . . ; ng of R n . A lattice in R n is a Z-free module of rank n with basis B, i.e.
Any other lattice basis is given by BM, where M 2 GLðn; ZÞ, the set of invertible matrices with integral entries (whose determinant is equal to AE1) (Artin, 1991). The point group of a lattice L is given by all the orthogonal transformations that leave the lattice invariant (Pitteri & Zanzotto, 2002): PðBÞ ¼ fQ 2 OðnÞ : 9M 2 GLðn; ZÞ s:t: QB ¼ BMg: We notice that, if Q 2 PðBÞ, then B À1 QB ¼ M 2 GLðn; ZÞ. In other words, the point group consists of all the orthogonal matrices which can be represented in the basis B as integral matrices. The set of all these matrices constitute the lattice group of the lattice: ÃðBÞ ¼ fM 2 GLðn; ZÞ : 9Q 2 PðBÞ s:t: M ¼ B À1 QBg: The lattice group provides an integral representation of the point group and these are related via the equation and moreover the following hold (Pitteri & Zanzotto, 2002): We notice that a change of basis in the lattice leaves the point group invariant, whereas the corresponding lattice groups are conjugated in GLðn; ZÞ. Two lattices are inequivalent if the corresponding lattice groups are not conjugated in GLðn; ZÞ (Pitteri & Zanzotto, 2002).
As a consequence of the crystallographic restriction [see, for example, Baake & Grimm (2013)] five-and n-fold symmetries, where n is a natural number greater than six, are forbidden in dimensions two and three, and therefore any group G containing elements of such orders cannot be the point group of a two-or three-dimensional lattice. We therefore call these groups noncrystallographic. In particular, threedimensional icosahedral lattices cannot exist. However, a noncrystallographic group leaves some lattices invariant in higher dimensions and the smallest such dimension is called the minimal embedding dimension. Following Levitov & Rhyner (1988), we introduce: Definition 2.1. Let G be a noncrystallographic group. A crystallographic representation of G is a D-dimensional representation of G such that: (1) the characters of are integers; (2) is reducible and contains a two-or three-dimensional representation of G.
We observe that the first condition implies that G must be the subgroup of the point group of a D-dimensional lattice. The second condition tells us that contains either a twoor three-dimensional invariant subspace E of R D , usually referred to as physical space (Levitov & Rhyner, 1988).
Six-dimensional icosahedral lattices
The icosahedral group I is generated by two elements, g 2 and g 3 , such that g 2 2 ¼ g 3 3 ¼ ðg 2 g 3 Þ 5 ¼ e, where e denotes the identity element. It has size 60 and it is isomorphic to A 5 , the alternating group of order five (Artin, 1991). Its character table is given in Table 1.
From the character table we see that the (minimal) crystallographic representation of I is six-dimensional and is given by T 1 È T 2 . Therefore, I leaves a lattice in R 6 invariant. Levitov & Rhyner (1988) proved that the three inequivalent Bravais lattices of this type, mentioned in the Introduction and referred to as icosahedral (Bravais) lattices, are given by, respectively: We note that a basis of the SC lattice is the canonical basis of R 6 . Its point group is given by which is the hyperoctahedral group in dimension six. In the following, we will denote this group by B 6 , following Humphreys (1996). We point out that this notation comes from Lie theory: indeed, B 6 represents the root system of the Lie algebra soð13Þ (Fulton & Harris, 1991). However, the corresponding reflection group WðB 6 Þ is isomorphic to the hyperoctahedral group in six dimensions (Humphreys, 1990). All three lattices have point group B 6 , whereas their lattice groups are different and, indeed, they are not conjugate in GLð6; ZÞ (Levitov & Rhyner, 1988).
Let H be a subgroup of B 6 isomorphic to I. H provides a (faithful) integral and orthogonal representation of I. Moreover, if H ' T 1 È T 2 in GLð6; RÞ, then H is also crystallographic (in the sense of Definition 2.1). All of the other crystallographic representations are given by B À1 HB, where B 2 GLð6; RÞ is a basis of an icosahedral lattice in R 6 . Therefore we can focus our attention, without loss of generality, on the orthogonal crystallographic representations.
Projection operators
Let H be a crystallographic representation of the icosahedral group. H splits into two three-dimensional irreducible representations (IRs), T 1 and T 2 , in GLð6; RÞ. This means that there exists a matrix R 2 GLð6; RÞ such that The two IRs T 1 and T 2 leave two three-dimensional subspaces invariant, which are usually referred to as the physical (or parallel) space E k and the orthogonal space E ? (Katz, 1989). In order to find the matrix R (which is not unique in general), we follow (Kramer & Haase, 1989) and use results from the representation theory of finite groups (for proofs and further results see, for example, Fulton & Harris, 1991). In particular, let À : G ! GLðn; FÞ be an n-dimensional representation of a finite group G over a field F (F = R, C). By Maschke's theorem, À splits, in GLðn; FÞ, as m 1 À 1 È . . . È m r À r , where À i : G ! GLðn i ; FÞ is an n idimensional IR of G. Then the projection operator P i : F n ! F n i is given by where à À i denotes the complex conjugate of the character of the representation À i . This operator is such that its image ImðP i Þ is equal to an n i -dimensional subspace V i of F n invariant under À i . In our case, we have two projection operators, P i : R 6 ! R 3 , i = 1, 2, corresponding to the IRs T 1 and T 2 , respectively. We assume the image of P 1 , ImðP 1 Þ, to be equal to E k , and ImðP 2 Þ ¼ E ? . If fe j ; j ¼ 1; . . . ; 6g is the canonical basis of R 6 , then a basis of E k (respectively E ? ) can be found considering the set fê e j :¼ P i e j ; j ¼ 1; . . . ; 6g for i ¼ 1 (respectively i ¼ 2) and then extracting a basis B i from it. Since dim E k = dim E ? ¼ 3, we obtain B i ¼ fê e i;1 ;ê e i;2 ;ê e i;3 g, for i = 1, 2. The matrix R can be thus written as R ¼ ê e 1;1 ;ê e 1;2 ;ê e 1;3 |fflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflffl} basis of E k ;ê e 2;1 ;ê e 2;2 ;ê e 2;3 |fflfflfflfflfflfflfflffl{zfflfflfflfflfflfflfflffl} Denoting by k and ? the 3  6 matrices which represent P 1 and P 2 in the bases B 1 and B 2 , respectively, we have, by linear algebra Since (2)], we obtain for all g 2 I and v 2 R 6 . In particular, the following diagram commutes The set ðH; k Þ is the starting point for the construction of quasicrystals via the cut-and-project method (Senechal, 1995;Indelicato et al., 2012).
Crystallographic representations of I
From the previous section it follows that the six-dimensional hyperoctahedral group B 6 contains all the (minimal) orthogonal crystallographic representations of the icosahedral group. In this section we classify them, with the help of the computer software programme GAP (The GAP Group, 2013).
Representations of the hyperoctahedral group B 6
Permutation representations of the n-dimensional hyperoctahedral group B n in terms of elements of S 2n , the symmetric group of order 2n, have been described by Baake (1984). In this subsection, we review these results because they allow us to generate B 6 in GAP and further study its subgroup structure.
It follows from equation (1) that B 6 consists of all the orthogonal integral matrices. A matrix A ¼ ða ij Þ of this kind must satisfy AA T ¼ I 6 , the identity matrix of order six, and have integral entries only. It is easy to see that these conditions imply that A has entries in {0, AE1} and each row and column contains 1 or À1 only once. These matrices are called signed permutation matrices. It is straightforward to see that any A 2 B 6 can be written in the form NQ, where Q is a 6 Â 6 permutation matrix and N is a diagonal matrix with each diagonal entry being either 1 or À1. We can thus associate with each matrix in B 6 a pair ða; Þ, where a 2 Z 6 2 is a vector given by the diagonal elements of N, and 2 S 6 is the permutation associated with Q. The set of all these pairs constitutes a group (called the wreath product of Z 2 and S 6 , and denoted by Z 2 o S 6 ; Humphreys, 1996) with the multiplication rule given by where þ 2 denotes addition modulo 2 and ða Þ k :¼ a ðkÞ ; a ¼ ða 1 ; . . . ; a 6 Þ: Z 2 o S 6 and B 6 are isomorphic, an isomorphism T being the following: It immediately follows that |B 6 | = 2 6 6! = 46 080. A set of generators is given by which satisfy the relations Finally, the function ' : Z 2 o S 6 ! S 12 defined by 'ða; ÞðkÞ :¼ is injective and maps any element of Z 2 o S 6 into a permutation of S 12 , and provides a faithful permutation representation of B 6 as a subgroup of S 12 . Combining equation (8) with the inverse of equation (10) we get the function which can be used to map a permutation into an element of B 6 .
Classification
In this subsection we classify the orthogonal crystallographic representations of the icosahedral group. We start by recalling a standard way to construct such a representation, following Zappa et al. (2013). We consider a regular icosahedron and we label each vertex by a number from one to 12, so that the vertex opposite to vertex i is labelled by i þ 6 (see Fig. 1). This labelling induces a permutation representation : I ! S 12 given by ðg 2 Þ ¼ ð1; 6Þð2; 5Þð3; 9Þð4; 10Þð7; 12Þð8; 11Þ; ðg 3 Þ ¼ ð1; 5; 6Þð2; 9; 4Þð7; 11; 12Þð3; 10; 8Þ: Using equation (11) we obtain a representationÎ I : I ! B 6 given bŷ We see that Î I ðg 2 Þ ¼ À2 and Î I ðg 3 Þ ¼ 0, so that, by looking at the character table of I, we have which implies, using Maschke's theorem (Fulton & Harris, 1991), thatÎ I ' T 1 È T 2 in GLð6; RÞ. Therefore, the subgroup I I of B 6 is a crystallographic representation of I.
Before we continue, we recall the following (Humphreys, 1996): In order to find all the other crystallographic representations, we use the following scheme: (a) we generate B 6 as a subgroup of S 12 using equations (9) and (10); (b) we list all the conjugacy classes of the subgroups of B 6 and find a representative for each class; (c) we isolate the classes whose representatives have order 60; (d) we check if these representatives are isomorphic to I ; (e) we map these subgroups of S 12 into B 6 using equation (11) and isolate the crystallographic ones by checking the characters; denoting by S the representative, we decompose S as A planar representation of an icosahedral surface, showing our labelling convention for the vertices; the dots represent the locations of the symmetry axes corresponding to the generators of the icosahedral group and its subgroups. The kite highlighted is a fundamental domain of the icosahedral group.
We implemented steps (1)-(4) in GAP (see Appendix C). There are three conjugacy classes of subgroups isomorphic to I in B 6 . Denoting by S i ¼ hg 2;i ; g 3;i i the representatives of the classes returned by GAP, we have, using equation (11), Since 2A is decomposable into two one-dimensional representations, it is not strictly speaking two dimensional in the sense of Definition 2.1, and as a consequence, only the second class contains the crystallographic representations of I . A computation in GAP shows that its size is 192. We thus have the following: Proposition 4.1. The crystallographic representations of I in B 6 form a unique conjugacy class in the set of all the classes of subgroups of B 6 , and its size is equal to 192.
We briefly point out that the other two classes of subgroups isomorphic to I in B 6 have an interesting algebraic intepretation. First of all, we observe that B 6 is an extension of S 6 , since according to Humphreys (1996): Following Janusz & Rotman (1982), it is possible to embed the symmetric group S 5 into S 6 in two different ways. The canonical embedding is achieved by fixing a point in f1; . . . ; 6g and permuting the other five, whereas the other embedding is by means of the so-called 'exotic map' ' : S 5 ! S 6 , which acts on the six 5-Sylow subgroups of S 5 by conjugation. Recalling that the icosahedral group is isomorphic to the alternating group A 5 , which is a normal subgroup of S 5 , then the canonical embedding corresponds to the representation 2A È G in B 6 , while the exotic one corresponds to the representation A È H.
In what follows, we will consider the subgroupÎ I previously defined as a representative of the class of the crystallographic representations of I, and denote this class by C B 6 ðÎ IÞ.
Recalling that two representations D ð1Þ and D ð2Þ of a group G are said to be equivalent if there are related via a similarity transformation, i.e. there exists an invertible matrix S such that then an immediate consequence of Proposition 4.1 is the following: Corollary 4.1. Let H 1 and H 2 be two orthogonal crystallographic representations of I. Then H 1 and H 2 are equivalent in B 6 .
We observe that the determinant of the generators ofÎ I in equation (12) is equal to one, so thatÎ I 2 B þ 6 :¼ fA 2 B 6 : detA ¼ 1g. Proposition 4.1 implies that all the crystallographic representations belong to B þ 6 . The remark-able fact is that they split into two different classes in B þ 6 . To see this, we first need to generate B þ 6 . In particular, with GAP we isolate the subgroups of index two in B 6 , which are normal in B 6 , and then, using equation (11), we find the one whose generators have determinant equal to one. In particular, we have B þ 6 ¼ hð1; 2; 6; 4; 3Þð7; 8; 12; 10; 9Þ; ð5; 11Þð6; 12Þ; ð1; 2; 6; 5; 3Þð7; 8; 12; 11; 9Þ; ð5; 12; 11; 6Þi: We can then apply the same procedure to find the crystallographic representations of I, and see that they split into two classes, each one of size 96. Again we can chooseÎ I as a representative for one of these classes; a representativeK K for the other one is given bŷ We note that in the more general case of I h , we can construct the crystallographic representations of I h starting from the crystallographic representations of I. First of all, we recall that I h ¼ I Â C 2 , where C 2 is the cyclic group of order two. Let H be a crystallographic representation of I in B 6 , and let À ¼ f1; À1g be a one-dimensional representation of C 2 . Then the representationĤ H given bŷ where denotes the tensor product of matrices, is a representation of I h in B 6 and it is crystallographic in the sense of Definition 2.1 (Fulton & Harris, 1991).
Projection into the three-dimensional space
We study in detail the projection into the physical space E k using the methods described in x3.1.
LetÎ I be the crystallographic representation of I given in equation (12). Using equation (3) with n i ¼ 3 and jGj ¼ jI j ¼ 60 we obtain the following projection operators The rank of these operators is equal to three. We choose as a basis of E k and E ? the following linear combination of the columns c i;j of the projection operators P i , for i = 1, 2 and j ¼ 1; . . . ; 6: c 1;1 þ c 1;5 2 ; c 1;2 À c 1;4 2 ; c 1;3 þ c 1;6 2 |fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl ffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl ffl} basis of E k ; c 2;1 À c 2;5 2 ; c 2;2 þ c 2;4 2 ; c 2;3 À c 2;6 2 |fflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl ffl{zfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflfflffl ffl} basis of E ? ! : With a suitable rescaling, we obtain the matrix R given by The matrix R is orthogonal and reducesÎ I as in equation (2). In Table 2 we give the explicit forms of the reduced representation. The matrix representation in E k of P 1 is given by [see equation (5) The orbit fT 1 ð k ðe j ÞÞg, where fe j ; j ¼ 1; . . . ; 6g is the canonical basis of R 6 , represents a regular icosahedron in three dimensions centred at the origin (Senechal, 1995;Katz, 1989;Indelicato et al., 2011).
Let K be another crystallographic representation of I in B 6 . By Proposition 4.1, K andÎ I are conjugated in B 6 . Consider M 2 B 6 such that MÎ I M À1 ¼ K and let S ¼ MR. We have Therefore it is possible, with a suitable choice of the reducing matrices, to project all the crystallographic representations of I in B 6 in the same physical space.
Subgroup structure
The nontrivial subgroups of I are listed in Table 3, together with their generators (Hoyle, 2004). Note that T , D 10 and D 6 are maximal subgroups of I, and that D 4 , C 5 and C 3 are normal subgroups of T , D 10 and D 6 , respectively (Humphreys, 1996;Artin, 1991). The permutation representations of the generators in S 12 are given in Table 4 (see also Fig. 1).
Since I is a small group, its subgroup structure can be easily obtained in GAP by computing explicitly all its conjugacy classes of subgroups. In particular, there are seven classes of nontrivial subgroups in I : any subgroup H of I has the property that, if K is another subgroup of I isomorphic to H, then H and K are conjugate in I (this property is referred to as the 'friendliness' of the subgroup H; Soicher, 2006). In other words, denoting by n G the number of subgroups of I isomorphic to G, i.e.
we have (cf. Definition 4.1) n G ¼ jC I ðGÞj: In Table 5 we list the size of each class of subgroups in I . Geometrically, different copies of C 2 , C 3 and C 5 correspond to the different two-, three-and fivefold axes of the icosahedron, respectively. In particular, different copies of D 10 stabilize one Table 2 Explicit forms of the IRs T 1 and T 2 withÎ I ' T 1 È T 2 .
T stands for the tetrahedral group, D 2n for the dihedral group of size 2n, and C n for the cyclic group of size n.
Subgroup
Generators Relations Size Table 4 Permutation representations of the generators of the subgroups of the icosahedral group.
Subgroups of the crystallographic representations of I
Let G be a subgroup of I. The function (11) provides a representation of G in B 6 , denoted by K G , which is a subgroup ofÎ I. Let us denote by C B 6 ðK G Þ the conjugacy class of K G in B 6 . The next lemma shows that this class contains all the subgroups of the crystallographic representations of I in B 6 .
Lemma 5.1. Let H i 2 C B 6 ðÎ IÞ be a crystallographic representation of I in B 6 and let K i H i be a subgroup of H i isomorphic to G. Then K i 2 C B 6 ðK G Þ.
Proof. Since H i 2 C B 6 ðÎ IÞ, there exists g 2 B 6 such that gH i g À1 ¼Î I , and therefore gK i g À1 ¼ K 0 is a subgroup ofÎ I isomorphic to G. Since all these subgroups are conjugate in I I [they are 'friendly' in the sense intended by Soicher (2006) We next show that every element of C B 6 ðK G Þ is a subgroup of a crystallographic representation of I.
Lemma 5.2. Let K i 2 C B 6 ðK G Þ. There exists H i 2 C B 6 ðÎ IÞ such that K i is a subgroup of H i .
Proof. Since K i 2 C B 6 ðK G Þ, there exists g 2 B 6 such that gK i g À1 ¼ K G . We define H i :¼ g À1Î I g. It can be seen immediately that K i is a subgroup of H i . & As a consequence of these lemmata, C B 6 ðK G Þ contains all the subgroups of B 6 which are isomorphic to G and are subgroups of a crystallographic representation of I. The explicit forms of K G are given in Appendix B. We point out that it is possible to find subgroups of B 6 isomorphic to a subgroup G of I which are not subgroups of any crystallographic representation of I. For example, the following subgroup T T ¼ is isomorphic to the tetrahedral group T ; a computation in GAP shows that it is not a subgroup of any elements in C B 6 ðÎ IÞ. Indeed, the two classes of subgroups, C B 6 ðK T Þ and C B 6 ðT T Þ, are disjoint.
Using GAP, we compute the size of each C B 6 ðK G Þ (see Table 5). We observe that jC B 6 ðK G Þj<jC B 6 ðÎ IÞj Á n G . This implies that crystallographic representations of I may share subgroups. In order to describe more precisely the subgroup structure of C B 6 ðÎ IÞ we will use some basic results from graph theory and their spectra, which we are going to recall in the next section.
Some basic results of graph theory and their spectra
In this section we recall, without proofs, some concepts and results from graph theory and spectral graph theory. Proofs and further results can be found, for example, in Foulds (1992) and Cvetkovic et al. (1995).
Let G be a graph with vertex set V ¼ fv 1 ; . . . ; v n g. The number of edges incident with a vertex v is called the degree of v. If all vertices have the same degree d, then the graph is called regular of degree d. A walk of length l is a sequence of l consecutive edges and it is called a path if they are all distinct. A circuit is a path starting and ending at the same vertex and the girth of the graph is the length of the shortest circuit. Two vertices p and q are connected if there exists a path containing p and q. The connected component of a vertex v is the set of all vertices connected to v.
The adjacency matrix A of G is the n  n matrix A ¼ ða ij Þ whose entries a ij are equal to one if the vertex v i is adjacent to the vertex v j , and zero otherwise. It can be seen immediately from its definition that A is symmetric and a ii ¼ 0 for all i, so that TrðAÞ ¼ 0. It follows that A is diagonalisable and all its eigenvalues are real. The spectrum of the graph is the set of all the eigenvalues of its adjacency matrix A, usually denoted by ðAÞ.
Theorem 5.1. Let A be the adjacency matrix of a graph G with vertex set V ¼ fv 1 ; . . . ; v n g. Let N k ði; jÞ denote the number of walks of length k starting at vertex v i and finishing at vertex v j . We have N k ði; jÞ ¼ A k ij : We recall that the spectral radius of a matrix A is defined by ðAÞ :¼ maxfjj : 2 ðAÞg. If A is a non-negative matrix, i.e. if all its entries are non-negative, then ðAÞ 2 ðAÞ (Horn & Johnson, 1985). Since the adjacency matrix of a graph is nonnegative, jj ðAÞ :¼ r, where 2 ðAÞ and r is the largest eigenvalue. r is called the index of the graph G.
Theorem 5.2. Let 1 ; . . . ; n be the spectrum of a graph G, and let r denote its index. Then G is regular of degree r if and only if 1 n X n i¼1 2 i ¼ r: Moreover, if G is regular the multiplicity of its index is equal to the number of its connected components.
Applications to the subgroup structure
Let G be a subgroup of I. In the following we represent the subgroup structure of the class of crystallographic representations of I in B 6 , C B 6 ðÎ IÞ, as a graph. We say that H 1 ; H 2 2 C B 6 ðÎ IÞ are adjacent to each other (i.e. connected by an edge) in the graph if there exists P 2 C B 6 ðK G Þ such that P ¼ H 1 \ H 2 . We can therefore consider the graph G ¼ ðC B 6 ðÎ IÞ; EÞ, where an edge e 2 E is of the form ðH 1 ; H 2 Þ. We call this graph G-graph.
Using GAP, we compute the adjacency matrices of the G-graphs. The algorithms used are shown in Appendix C. The spectra of the G-graphs are given in Table 6. We first of all notice that the adjacency matrix of the C 5 -graph is the null matrix, implying that there are no two representations sharing precisely a subgroup isomorphic to C 5 , i.e. not a subgroup containing C 5 . We point out that, since the adjacency matrix of the D 10 -graph is not the null one, then there exist crystallographic representations, say H i and H j , sharing a maximal subgroup isomorphic to D 10 . Since C 5 is a (normal) subgroup of D 10 , then H i and H j do share a C 5 subgroup, but also a C 2 subgroup. In other words, if two representations share a fivefold axis, then necessarily they also share a twofold axis.
A straightforward calculation based on Theorem 5.2 leads to the following Proposition 5.1. Let G be a subgroup of I. Then the corresponding G-graph is regular.
In particular, the degree d G of each G-graph is equal to the largest eigenvalue of the corresponding spectrum. As a consequence we have the following: Proposition 5.2. Let H be a crystallographic representation of I in B 6 . Then there are exactly d G representations K j 2 C B 6 ðÎ IÞ such that H \ K j ¼ P j ; 9 P j 2 CðK G Þ; j ¼ 1; . . . ; d G : In particular, we have d G = 5, 6, 10, 0, 30, 20, 60 and 60 for G ¼ T ; D 10 ; D 6 , C 5 ; D 4 ; C 3 ; C 2 and feg, respectively.
In particular, this means that for any crystallographic representation of I there are precisely d G other such representations which share a subgroup isomorphic to G. In other words, we can associate to the class C B 6 ðÎ IÞ the 'subgroup matrix' S whose entries are defined by The matrix S is symmetric and S ii ¼ 60, for all i, since the order of I is 60. It follows from Proposition 5.2 that each row of S contains d G entries equal to jGj. Moreover, a rearrangement of the columns of S shows that the 192 crystallographic representations of I can be grouped into 12 sets of 16 such that any two of these representations in such a set of 16 share a D 4 -subgroup. This implies that the corresponding subgraph of the D 4 -graph is a complete graph, i.e. every two distinct vertices are connected by an edge. From a geometric point of view, these 16 representations correspond to 'six-dimensional icosahedra'. This ensemble of 16 such icosahedra embedded into a six-dimensional hypercube can be viewed as a sixdimensional analogue of the three-dimensional ensemble of five tetrahedra inscribed into a dodecahedron, sharing pairwise a C 3 -subgroup.
We notice that, using Theorem 5.2, not all the graphs are connected. In particular, the D 10 -and the D 6 -graphs are made up of six connected components, whereas the C 3 -and the C 2graphs consist of two connected components. With GAP, we implemented a breadth-first search algorithm (Foulds, 1992), which starts from a vertex i and then 'scans' for all the vertices connected to it, which allows us to find the connected components of a given G-graph (see Appendix C). We find that each connected component of the D 10 -and D 6 -graphs is made up of 32 vertices, while for the C 3 -and C 2 -graphs each component consists of 96 vertices. For all other subgroups, the corresponding G-graph is connected and the connected component contains trivially 192 vertices.
We now consider in more detail the case when G is a maximal subgroup of I. Let H 2 C B 6 ðÎ IÞ and let us consider its vertex star in the corresponding G-graph, i.e.
VðHÞ :¼ fK 2 C B 6 ðÎ IÞ : K is adjacent to Hg: ð15Þ A comparison of Tables 5 and 6 shows that d G ¼ n G [i.e. the number of subgroups isomorphic to G in I , cf. equation (14)] and therefore, since the graph is regular, jVðHÞj ¼ d G ¼ n G . This suggests that there is a one-to-one correspondence between elements of the vertex star of H and subgroups of H isomorphic to G; in other words, if we fix any subgroup P of H isomorphic to G, then P 'connects' H with exactly another representation K. We thus have the following: Proposition 5.3. Let G be a maximal subgroup of I. Then for every P 2 C B 6 ðK G Þ there exist exactly two crystallographic representations of I, H 1 ; H 2 2 C B 6 ðÎ IÞ, such that P ¼ H 1 \ H 2 .
In order to prove it, we first need the following lemma: Lemma 5.3. Let G be a maximal subgroup of I. Then the corresponding G-graph is triangle-free, i.e. it has no circuits of length three.
Proof. Let A G be the adjacency matrix of the G-graph. By Theorem 5.1, its third power A 3 G determines the number of walks of length three, and in particular its diagonal entries, ðA 3 G Þ ii , for i ¼ 1; . . . ; 192, correspond to the number of triangular circuits starting and ending in vertex i. A direct computation shows that ðA 3 G Þ ii ¼ 0, for all i, thus implying the non-existence of triangular circuits in the graph. & Proof of Proposition 5.3. If P 2 C B 6 ðK G Þ, then, using Lemma 5.2, there exists H 1 2 C B 6 ðÎ IÞ such that P is a subgroup of H 1 . Let us consider the vertex star VðH 1 Þ. We have jVðH 1 Þj ¼ d G ; we call its elements H 2 ; . . . ; H d G þ1 . Let us suppose that P is not a subgroup of any H j , for j ¼ 2; . . . ; d G þ 1. This implies that P does not connect H 1 with any of these H j . However, since H 1 has exactly n G different subgroups isomorphic to G, then at least two vertices in the vertex star, say H 2 and H 3 , are connected by the same subgroup isomorphic to G, which we denote by Q. Therefore we have research papers This implies that H 1 , H 2 and H 3 form a triangular circuit in the graph, which is a contradiction due to Lemma 5.3, hence the result is proved. & It is noteworthy that the situation in B þ 6 is different. If we denote by X 1 and X 2 the two disjoint classes of crystallographic representations of I in B þ 6 [cf. equation (13)], we can build, in the same way as described before, the G-graphs for X 1 and X 2 , for G ¼ T ; D 10 and D 6 . The result is that the adjacency matrices of all these six graphs are the null matrix of dimension 96. This implies that these graphs have no edges, and so the representations in each class do not share any maximal subgroup of I. As a consequence, we have the following: Proposition 5.4. Let H; K 2 C B 6 ðÎ IÞ be two crystallographic representations of I, and P ¼ H \ K, P 2 C B 6 ðK G Þ, where G is a maximal subgroup of I. Then H and K are not conjugated in B þ 6 . In other words, the elements of B 6 which conjugate H with K are matrices with determinant equal to À1.
We conclude by showing a computational method which combines the result of Propositions 4.1 and 5.2. We first recall the following: Definition 5.1. Let H be a subgroup of a group G. The normaliser of H in G is given by N G ðHÞ :¼ fg 2 G : gHg À1 ¼ Hg: Corollary 5.1. Let H and K be two crystallographic representations of I in B 6 and P 2 CðK G Þ such that P ¼ H \ K. Let A H;K ¼ fM 2 B 6 : MHM À1 ¼ Kg be the set of all the elements of B 6 which conjugate H with K and let N B 6 ðPÞ be the normaliser of P in B 6 . We have A H;K \ N B 6 ðHÞ 6 ¼ ;: In other words, it is possible to find a nontrivial element M 2 B 6 in the normaliser of P in B 6 which conjugates H with K.
Proof. Let us suppose A H;K \ N B 6 ðHÞ ¼ ;. Then MPM À1 6 ¼ P, for all M 2 A H;K . This implies, since MHM À1 ¼ K, that P is not a subgroup of K, which is a contradiction.
We give now an explicit example. We consider the repre-sentationÎ I as in equation (12), and its subgroup K D 10 (the explicit form is given in Appendix B). With GAP, we find the other representation H 0 2 CðÎ IÞ such that K D 10 ¼Î I \ H 0 . Its explicit form is given by belongs to N B 6 ðK D 10 Þ and conjugateÎ I with H 0 . Note that det M ¼ À1.
Conclusions
In this work we explored the subgroup structure of the hyperoctahedral group in six dimensions. In particular we found the class of the crystallographic representations of the icosahedral group, whose size is 192. Any such representation, together with its corresponding projection operator k , can be chosen to construct icosahedral quasicrystals via the cut-andproject method. We then studied in detail the subgroup structure of this class. For this, we proposed a method based on spectral graph theory and introduced the concept of G-graph, for a subgroup G of the icosahedral group. This allowed us to study the intersection and the subgroups shared by different representations. We have shown that, if we fix any repre- Table 6 Spectra of the G-graphs for G a nontrivial subgroup of I and G ¼ feg, the trivial subgroup consisting of only the identity element e.
The numbers highlighted are the indices of the graphs, and correspond to their degrees d G .
T -graph D 10 -graph D 6 -graph C 5 -graph sentation H in the class and a maximal subgroup P of H, then there exists exactly one other representation K in the class such that P ¼ H \ K. As explained in the Introduction, this can be used to describe transitions which keep intermediate symmetry encoded by P. In particular, this result implies in this context that a transition from a structure arising from H via projection will result in a structure obtainable for K via projection if the transition has intermediate symmetry described by P. Therefore, this setting is the starting point to analyse structural transitions between icosahedral quasicrystals, following the methods proposed in Kramer (1987), Katz (1989) and Indelicato et al. (2012), which we are planning to address in a forthcoming publication. These mathematical tools also have many applications in other areas. A prominent example is virology. Viruses package their genomic material into protein containers with regular structures that can be modelled via lattices and group theory. Structural transitions of these containers, which involve rearrangements of the protein lattices, are important in rendering certain classes of viruses infective. As shown in Indelicato et al. (2011), such structural transitions can be modelled using projections of six-dimensional icosahedral lattices and their symmetry properties. The results derived here therefore have a direct application to this scenario, and the information on the subgroup structure of the class of crystallographic representations of the icosahedral group and their intersections provides information on the symmetries of the capsid during the transition.
APPENDIX A
In order to render this paper self-contained, we provide the character tables of the subgroups of the icosahedral group, following Artin (1991), Fulton & Harris (1991) and Jones (1990).
Tetrahedral group T [! ¼ expð2i=3Þ]: Dihedral group D 10 : Dihedral group D 6 (isomorphic to the symmetric group S 3 ): APPENDIX B Here we show the explicit forms of K G , the representations in B 6 of the subgroups of I, together with their decompositions in GLð6; RÞ.
APPENDIX C
In this Appendix we show our algorithms, which have been implemented in GAP and used in various sections of the paper. We list them with a number from 1 to 5.
Algorithm 1 (Fig. 2): Classification of the crystallographic representations of I (see x4). The algorithm carries out steps 1-4 used to prove Proposition 4.1. In the GAP computation, the class C B 6 ðÎ IÞ is indicated as CB6s60. Its size is 192.
Algorithm 2 (Fig. 3): Computation of the vertex star of a given vertex i in the G-graphs. In the following, H stands for the class C B 6 ðÎ IÞ of the crystallographic representations of I, i 2 f1; . . . ; 192g denotes a vertex in the G-graph corresponding to the representation H½i and n stands for the size of G: we can use the size instead of the explicit form of the subgroup since, in the case of the icosahedral group, all the non isomorphic subgroups have different sizes.
Algorithm 3 (Fig. 4): Computation of the adjacency matrix of the G-graph.
Algorithm 4 (Fig. 5): This algorithm carries out a breadthfirst search strategy for the computation of the connected component of a given vertex i of the G-graph.
Algorithm 5 (Fig. 6): Computation of all connected components of a G-graph. | 11,065.6 | 2014-02-13T00:00:00.000 | [
"Mathematics"
] |
Bilateral Wilms tumour: a review of clinical and molecular features
Wilms tumour (WT) is the most common paediatric kidney cancer and affects approximately one in 10 000 children. The tumour is associated with undifferentiated embryonic lesions called nephrogenic rests (NRs) or, when diffuse, nephroblastomatosis. WT or NRs can occur in both kidneys, termed bilateral disease, found in only 5–8% of cases. Management of bilateral WT presents a major clinical challenge in terms of maximising survival, preserving renal function and understanding underlying genetic risk. In this review, we compile clinical data from 545 published cases of bilateral WT and discuss recent progress in understanding the molecular basis of bilateral WT and its associated precursor NRs in the context of the latest radiological, surgical and epidemiological features.
Introduction
Wilms tumour (WT) is a rare kidney cancer that occurs almost exclusively in childhood, with a prevalence of one in 10 000 children younger than 15 years of age. This embryonal tumour generally shows mimicry of cell types seen during normal nephrogenesis, with the classical 'triphasic' WT comprising undifferentiated blastemal cells with differentiation towards both stromal and epithelial elements. The genetics of the embryonal tumours of childhood underpinned Knudson's two-hit hypothesis for cancer generation whereby a tumour suppressor gene is silenced by either germline or random somatic loss-of-function mutation of one allele, with the remaining allele lost as a second event post-natally. Hereditary cases are predicted to occur earlier and be more likely to present bilaterally in paired organs such as the kidney. However, when the first WT gene (WT1) was identified, it was found to account for only a minority of bilateral and familial WT cases. Indeed, genetic predisposition to WT is uncommon (∼5% of all cases) and can be owing to one of several different genetic or epigenetic changes (Ref. 1,2). With the recent discovery of many new WT genes, the proportion with known genetic predisposition may increase, especially if some have low penetrance ( Refs 3,4,5).
WTs presenting as bilateral disease can be associated with early disruption in renal development, not only because of involvement of both kidneys but due to the fact that in nearly all cases, tumours are associated with the presence of precursor lesions termed nephrogenic rests (NRs). NRs are clusters of residual embryonic renal cells persisting in a mature kidney that result from incomplete differentiation of metanephric blastema into mature renal parenchyma (Refs 6, 7). Two types of NR are recognised based on morphological features and anatomical location within the kidney. Intralobar NR (ILNR) are usually observed singularly and show predominant stromal composition and often mature fat cells with irregular, indistinct borders and are located towards the renal medulla whereas perilobar NR (PLNR) are often numerous and diffuse located towards the periphery of the renal lobule composed predominantly of blastemal cells with well-defined borders that develop epithelial structures and sclerosis with age (Refs 6, 7). Nephroblastomatosis is defined as the presence of multiple or diffuse NR. In unilateral WTs, NRs are usually only detectable by histology whereas in bilateral WT, the proliferating NRs may be large enough to be seen on imaging (Ref. 8). The term 'bilateral disease' is used to encompass bilateral WT, WT in one kidney with nephroblastomatosis in the other, or bilateral nephroblastomatosis, as these cannot always be easily distinguished on imaging. Whilst NRs are considered benign and can regress spontaneously or under chemotherapy, they have a significant risk of progression to WT (Ref. 9).
Bilateral disease can be synchronous (both kidneys affected at the same time) or metachronous (one affected after the other), which occurs in 6.3 and 0.85% WT patients respectively (Ref. 10) with an overall frequency of ∼5 to 8% ( Refs 11,12). In general, PLNRs are associated with synchronous bilateral WT, whereas ILNRs are more strongly associated with metachronous WT (Ref. 6). As expected from Knudson's two-hit model, the median age of onset of bilateral WT is younger than for unilateral WTunder 2 years compared with 38 months. What remains unexplained is the remarkable female excess seen in bilateral WT (Ref. 12). Furthermore, the bimodal distribution of age at onset implies a genetic complexity that is as yet only partially understood. For both unilateral and bilateral WT, age at diagnosis is affected by the presence of NRs, patient sex (males are diagnosed on average 6 months earlier than females), underlying syndromes and laterality (Refs 6, 10).
At present, bilateral disease is treated with preoperative chemotherapy at time of diagnosis followed by surgery. A major clinical challenge is to decide the best time for nephron-sparing surgery (NSS) and if and when there may be value in intensifying or prolonging pre-operative chemotherapy. Thus far, response assessment is based purely on tumour shrinkage. However, it is recognised that the stromal subtype of WT, common in children with WT1 mutant tumours, may not shrink and may even show a paradoxical increase in tumour size owing to rhabdomyoblastic differentiation, even though it is a favourable histological subtype. Hence, having a technique that could monitor histological response during pre-operative chemotherapy would be useful in planning NSS. Advanced functional imaging using apparent diffusion coefficient (ADC) is a new approach that has the potential to make this distinction (Ref. 13). Furthermore, while WT needs to be surgically removed (Ref. 14), NRs may be left within a patient in some circumstances making their distinction from WTs essential for effective treatment. Patients with bilateral disease need to maintain maximal renal function to ensure longevity requiring advanced imaging and surgical techniques. Here, we review the most recent advances in these fields and explore the molecular biology aspects of bilateral WT.
Search strategy and selection criteria
References for this Review were identified through searches of PubMed, using appropriate search terms for each section, for the period from 1990 until August 2016 ('Nephroblastoma' or 'Wilms', 'Bilateral' and 'Nephroblastomatosis'). For the surgical section, only reviews by national or cooperative groups were included because of a recent comprehensive review of this aspect published in 2009 (Ref. 15). Only papers published in English were reviewed. The final reference list was generated on the basis of originality and relevance to the broad scope of this Review.
WT predisposition syndromes
Unlike adult carcinomas where cells have a lifetime to accumulate damage, the embryonic tumours of childhood are felt to represent random spontaneous genetic changes in a pool of cells that retain the pluripotent differentiation potential of their embryonic counterparts. However, in certain cases, a germline mutation predisposes to WT onset by either providing the first tumour suppressor gene 'hit', as previously discussed, or by causing sustained proliferation of renal precursors providing an optimal environment for a second transforming event. Not surprisingly, a much higher frequency of bilateral disease is observed in patients with predisposition syndromes.
Approximately 5% of WTs are associated with known constitutional predisposition syndromes; whilst over 100 syndromic associations are described (Ref. 1), the commoner ones fall into two major categories: those associated with genito-urinary malformation because of underlying abnormalities in the WT1 gene (WT with Aniridia, Genitourinary abnormalities and mental Retardation (WAGR) syndrome; Denys-Drash syndrome (DDS)) and those associated with an overgrowth phenotype [Beckwith-Wiedemann syndrome (BWS) and Perlman syndrome].
WAGR syndrome is associated with 11p13 deletion encompassing the WT1 gene. The size of the deletion varies, with mental retardation observed in patients with large deletions. Subsequent to germline WT1 loss, the second somatic event leading to WT formation in patients with WAGR syndrome is commonly intragenic WT1 mutation, rather than a second 11p genomic loss, as the latter is likely to be cell lethal. Of children born with WAGR syndrome, 45-57% develop WT ( Refs 16,17). A range of germline intragenic WT1 mutations have been associated with DDS with the majority affecting the WT1 DNA-binding domain, specifically within exon 9 (Ref. 18). Although the penetrance of WT in children with constitutional WT1 mutation is likely much lower, around 74% children with the classical DDS triad develop WT, often with associated ILNRs (Ref. 18) (using the original narrow phenotypic definition of DDS and not including the more recently broadened phenotype with milder renal dysfunction/genitourinary abnormalities with WT1 mutation).
Germline aberration of WT1 is clearly associated with increase in bilateral disease as the overall rate of bilateral WT is 5% whereas patients with DDS show incidence of 20% (Ref. 18 None of these predisposing syndromes show 100% association with bilateral disease, as there is a requirement for a second event prior to tumour formation. The frequency of bilateral disease may be associated with the developmental timing at which the primary aberration occurs. For patients with Perlman syndrome, a very high number develop WT in one or both kidneys whereas for BWS this is much lower suggesting that, on a background of germline DIS3L2 mutation, a transforming second event occurs more readily, whereas on a background of IGF2 overexpression and H19 loss, there is less selection pressure for transformation. Another potential confounder is the presence of mosaicism in patients, where certain tissues may carry the aberration and others not, and even certain cells within the tissue, if the aberration occurs late in development. Molecular features of bilateral WT (Fig. 1) WT1 and bilateral WT. WT1 mutation is observed in ∼12% sporadic WTs (Ref. 25) and germline WT1 mutation or loss significantly increases the likelihood of developing bilateral disease. In a comprehensive review of 117 published WT cases with germline WT1 alterations, the authors showed a frequency of bilateral WT in 24, 17 and 52% of the deletion, missense and truncation mutations groups (Ref. 26). When the truncation group was subdivided further, the frequency of bilateral WT was 50% for patients with frameshift and 54% for patients with nonsense mutations (Ref. 26). Two studies that performed WT1 analysis in large cohorts of nonsyndromic patients with WT found that 8/201 (4%) (Ref. 27) and 6/282 (2%) (Ref. 28) patients had constitutional WT1 mutation with three and two of these having bilateral disease, respectively. This shows that a relatively low frequency of cases thought to be sporadic may in fact be germline, despite the patients showing no other obvious clinical phenotype.
Taking the opposite approach, another study focused specifically on assessment of germline WT1 status in patients with bilateral disease. By targeted sequencing of WT1 in eight bilateral WTs (defined in this case as only synchronous bilateral tumours), three patients were found to have germline heterozygous nonsense mutations in WT1 exon 8, leading to WT1 protein truncation with no wild-type allele present in the tumours (Ref. 29). The other five patients had no WT1 mutation and were not further characterised for germline or somatic mutation of other WT genes. A separate study described a much higher frequency, with seven of eight patients with bilateral disease (defined here as either WT in each kidney or WT with NR in the other kidney) showing germline WT1 mutation (Ref. 30). The final patient had BWS and no WT1 mutation (Ref. 30). Of the seven WT1 germline mutant cases, three patients relapsed; all of whom initially had WT and one NR in the contralateral kidney. Two patients developed WT in the kidney with previous NR and one patient developed bilateral WTs. As one of these recurrences was 11 years later, the authors suggest careful follow up for patients with bilateral disease. Although no molecular analysis was performed on the recurrences, the authors did look for CTNNB1 mutation in the tumours and NRs. It has been hypothesised that WT1 mutation is an initiating event and CTNNB1 mutation a secondary event in WT tumourigenesis as WT1 mutations have been identified in both NRs and WTs, but CTNNB1 mutations only in the associated WTs (Ref. 31). However, the data shown in this study did not agree with this model, because for the three cases where both the WT and contralateral NR were examined for CTNNB1, two showed both the NR and WT were positive for CTNNB1 mutation while the last case was uninformative.
In a separate study where CTNNB1 mutations were specifically studied in a patient with germline WT1 mutation and bilateral WTs, both tumours had a second WT1 hit of loss of heterozygosity (LOH), while the right tumour had delta45S CTNNB1 mutation and the left side had S45P in all cell types and a T41A CTNNB1 mutation specific to a separately microdissected stromal component (Ref. 32). The surrounding kidney was shown to be absent for CTNNB1 mutation or LOH. These data support CTNNB1 mutation being a later event in WT tumourigenesis, which is further supported by the fact that new bilateral WTs subsequently developed with novel CTNNB1 mutations (S45C on the right; S45F on the left) (Ref. 32). A separate study that showed three of five tumours within one patient had different CTNNB1 mutations (delta45, S45C and S45P) (Ref. 33).
Although the evidence for CTNNB1 mutation being a late event is inconsistent, these studies, and others (Refs 34, 35) clearly demonstrate that WT1 mutation can follow the 2-hit tumour suppressor model for the development of cancer. However, the somatic genetics can be complex, with WT1 mutant proteins demonstrating tumour suppressor functions in some cases and oncogenic properties in others. The differing roles for WT1 are further supported by the difference in clinical phenotype observed in patients with WT1 loss and WT1 mutation; a dominant-negative effect is predicted for intragenic WT1 point mutations because of the more severe genitourinary phenotype observed in patients with DDS in comparison with patients with complete WT1 deletion (WAGR syndrome).
IGF2 and bilateral WT. In healthy normal tissue, the expression of IGF2 (located at 11p15) is controlled by a nearby imprinting control centre, at which the DNA is methylated on the paternal allele and not methylated on the maternal allele. Expression of IGF2 occurs only when the imprinting control centre is methylated, i.e. from the paternal allele. This normal phenomenon, termed 'genomic imprinting' is disrupted in WTs. Somatic biallelic expression because of the loss of the silent maternal allele and duplication of the active paternal allele by LOH is observed in 32% and LOI by gain of methylation is observed in 37% WTs, with overall frequency of around 70% (Ref. 25), reviewed elsewhere (Ref. 36). The low frequency of tumours observed in patients with constitutional LOI may be explained by the presence of mosaicism. 11p15 aberration in lymphocyte DNA has been described in 12% of patients with bilateral WTs and 3% of unilateral sporadic WTs without reported syndromes or associated overgrowth (Ref. 2). Furthermore, mosaic LOI has been reported in the kidney in patients without constitutional aberration (Ref. 37). Therefore, the reverse may be true; that patients with 'germline' LOI may show LOI in many tissues, but not the kidney, hence the absence of tumour formation.
In addition to the strong association between constitutional LOI at 11p15 and an increased frequency of bilateral WT, bilateral disease was also significantly more frequent in sporadic WTs with somatic LOI by gain of methylation, compared with tumours without (P < 0.001) (Ref. 25) and LOI by LOH was shown to occur infrequently in bilateral tumours compared with unilateral (Ref. 38). Therefore, despite a relatively low penetrance level, LOI by gain of methylation at 11p15 is clearly associated with both unilateral and bilateral WTs, indicating a disruption in normal epigenetic control.
Recently discovered WT genes. Besides WT1 and IGF2, several other genes or chromosomes have been analysed in bilateral WTs. Whether these are causative for the predisposition or for the individual tumour analysed remains unanswered and addressing the latter requires detailed analysis of multiple tissue samples from one individual, which is not always achieved in the small series or anecdotal series described. One study highlighted a specific case of bilateral WT in which isochromosome 7q was observed only in the left tumour (Ref. 39). Anaplastic histology, associated with TP53 mutation, is also frequently discordant between bilateral tumours and hence is believed to be a later event in tumourigenesis (Ref. 40). An example is the longitudinal analysis of a patient with bilateral disease, where TP53 mutation was not initially detected at diagnosis in biopsies of either side but was found 5 In cases where cells carry germline aberrations (but not in every case), normal development is disrupted and retained embryonic tissue is found in the normal kidney (nephrogenic rests; dark orange). Intralobar nephrogenic rests (ILNR) are associated with WT1 mutation and perilobar nephrogenic rests (PLNR) are associated with 11p15 loss of imprinting (LOI). These lesions are considered precursors to Wilms tumour and are found in nearly all cases of bilateral Wilms tumour (BWT; dark red) although the molecular mechanisms involved in transformation are unknown. Mutation of CTNNB1 is likely to be a secondary event following germline WT1 mutation. Further late events are acquired over the progression of the tumour. Shown in black are several reported germline aberrations found in patients with BWT, however the genetic background is not always known and BWT could also arise from somatic mutation in each kidney. Evidence from mouse models also highlights genetic events that may lead to bilateral disease including the combination of CTNNB1 mutation with KRAS activation in which mice developed bilateral WT-like renal epithelial tumours that were metastatic and multifocal (Ref. 43). However, KRAS has not been identified as a human WT-associated gene. On the other hand, Lin28a overexpression, led to mainly bilateral tumours (4/5 tumours observed in 50 mice) when it was serendipitously overexpressed from 'leaky' expression in a primordial germ cell lineage mouse model experiment (Ref. 44). A further mouse model inducing spatial and temporal control of Lin28a expression in mouse yielded 15 tumours in 15 mice; however, the frequency of bilateral lesions was not discussed. Lin28a human homologue LIN28B, was also shown to be overexpressed in the blastemal component of human WT (Ref. 44). LIN28 overexpression is associated with degradation of Let-7 miRNAs, and as Perlman syndrome is associated with DIS3L2 mutation, the nuclease that degrades poly-uridylated let-7 miRNAs (Ref. 45), and also shows high rates of bilateral WT, this indicates that the miRNA processing pathway may be particularly penetrant for generating bilateral WTs.
More recently, additional genes were found to be mutated in WT, including genes involved in early renal development (SIX1, SIX2 and SALL2) as well as genes involved in the miRNA processing pathway (DIS3L2, DGCR8, DICER1, DROSHA, XPO5 and TARBP2) ( Refs 4,5,46,47). It is currently unclear whether there is a link between these novel gene mutations and bilateral disease however mutations in several (DICER1, DROSHA, DGCR8, XPO5 and DIS3L2) have been observed in the germline ( Refs 4,5,47,48).
Finally, a very recent article showed intra tumour genetic heterogeneity in WT, bilateral WT appearing genetically distinct and probably arising independently one side from the other (Ref. 49). Such variable heterogeneity will probably become predominant in the near future research to better understand the real genetic landscape of syndromic and nonsyndromic bilateral WT. It may have major implications in the clinical decision-making process to more accurately adapt and personalise treatment strategies for each individual cases. Thus, differentiating WT from its associated and presumed precursor NR remains challenging on a molecular basis. It would be of great value clinically if imaging features could also contribute to the assessment of this distinction, to predict histological risk group and hence aid with surgical planning of NSS.
Clinical features
Despite the lack of controlled studies, reports from recently published cooperative, national groups or single institutional series from developing countries with at least 15 patients provide useful data allowing identification of some key features specific to bilateral WT ( Refs 20,56,57,58,59,60,61,62,63,64,65). Clinical characteristics are detailed in Table 1. Age of onset of bilateral WT varied from 15 months to 3.6 years, the lowest being the Japanese series that also presented with the highest rate of associated anomalies, while the highest age of onset was observed in patients from Cape Town that had no associated anomalies (Refs 61, 62). We could argue that better screening of patients followed by paediatricians for other anomalies allows an earlier detection of an abdominal mass. A total of 120 (22%) patients among the 545 listed had associated syndromes or clinically relevant anomalies, the commonest being isolated genito-urinary anomalies (35%), i.e. hypospadias or undescended testis, that were not associated with an already described syndrome. The second most frequent anomaly was isolated BILATERAL WILMS TUMOUR It should be noted however that the Japanese population has a much lower proportion of WT associated LOI at 11p15 than is found in populations of largely Caucasian descent (Ref. 67). No further data on the clinical, radiological, pathological and treatment differences between patients with or without syndromic patterns were displayed in these national series.
The only national series reporting on bilateral disease associated with nephroblastomatosis presented 52 patients with hyperplastic perilobar nephroblastomatosis, including three patients with unilateral lesions and 49 with bilateral lesions. Among them, 24 developed a WT in their follow-up; 13 a single WT and 11 developed two or more synchronous or metachronous uni or bilateral WTs. The histology of the nephrectomy showed a higher percentage of anaplastic WT (33% of those who developed a WT, 15% of the whole cohort) (Ref. 9). Distinguishing nephroblastomatosis from WT at diagnosis is one of the most difficult aspects of bilateral disease and has clinical significance as the overall prognosis of having a WT associated with nephroblastomatosis led to worse overall and event free survival compared to having an isolated WT (Ref. 68). The study of multiple nephroblastomatosis cases described here showed that the initial biopsy did not aid with distinction in 63% of cases (Ref. 9). Instead, the most reliable pathologic feature seemed to be the presence of a well-defined fibrous pseudo-capsule separating the lesion from the adjacent normal kidney in WT (Ref. 9).
Radiological features
Bilateral WTs are usually associated with NRs that are small, microscopic lesions not visible on imaging. However, some cases present with one or more expansile lesions seen on imaging. The smallest lesion detectable by ultrasound is at least 8 and 5 mm by CT scan or MRI (Ref. 69). Distinguishing NR from WT is difficult, the most characteristic feature of NR at diagnosis being their diffuse homogeneity both before and after contrast agent administration. After chemotherapy, MRI has been shown to differentiate the active NR and WT (bright on T2 and STIR sequences) from inactive NR and treated WT (dark on T2-weighted images and STIR sequences). The shape of the lesion may aid distinction because of the more oblong or lenticular shape of NRs; however, they can also be spherical like WT, resulting in less than perfect specificity and sensitivity of MRI and CT in the distinction between WT, NR and nephroblastomatosis (Ref. 70).
Nephroblastomatosis in its diffuse hyperplastic perilobar form is confined to the periphery of the kidneys. Its appearance is usually hypointense to the cortex and isointense to the medulla in MRI nonenhanced T1weighted images, and hyperintense on T2-weighted images with similar appearance of the cortex. Contrast enhanced MRI or CT make the lesions the most conspicuous (Fig. 2) (Ref. 9).
The recent development of diffusion-weighted MRI in paediatric abdominal tumours (Ref. 71) has shown an inverse relationship between the cellularity of extra cranial tumours and the ADC of these tumours (Ref. 72). The use of ADC measurements to enable differentiation between benign and malignant tumours shows controversial results, potentially explained by a difference in the drawing of the region of interest that should not include any necrotic or cystic area as these areas render ineffective ADC measurement ( Refs 71,72). So far, in WTs, diffusion-weighted MRI has been able to stratify WT histological subtypes with significantly lower values observed in high-risk blastemaltype WTs compared with intermediate risk stromal, regressive and mixed-type. No significant difference in ADC was found between blastemal-type WTs and intermediate risk epithelial-type (Ref. 13). This may be particularly important for identifying the proportion of blastema that has responded to chemotherapy, and the proportion of residual chemotherapy-resistant blastema, as mentioned in the introduction. Ongoing studies are assessing the prognostic significance of these measurements.
Treatment for bilateral disease
The preoperative chemotherapy regimen favoured when primary surgery was not performed was a course of Vincristine and Actinomycin D, with or without Doxorubicin for a mean duration of about 3 months before the first surgery ( Table 2). The timing of performing Nephron NSS or radical nephrectomy reached a consensus on the need to operate before the 12th week of preoperative chemotherapy, first because of the risk of anaplastic transformation (Ref. 73), then because continuing chemotherapy longer will not facilitate conservative resection (Ref. 65) and because nonresponding tumours on radiological assessment may be differentiated tumours (like stromal type) that will not shrink more under further chemotherapy. NSS was performed in 344/517 (66%) patients, combining radical nephrectomy on one side and NSS on the other side (n = 192), bilateral NSS (n = 127), unilateral NSS and biopsy on the other side (n = 11) or unilateral NSS alone (n = 14) ( Table 3). Twenty-two additional NSS were performed by the Durban surgical team but with no detail on the side of the surgery (Ref. 57). For central tumours involving the renal hilus, a longitudinal partial nephrectomy was reported in five bilateral WT patients, three of them carrying a WT1 mutation, with good oncological and outcome results (Ref. 74).
The quality of resection could be evaluated by the number of surgical complications and the number of stage III. Surgical complications occurred in 40/517 (7.7%) patients leading to death in two Italian patients (one chylous ascites and one acute cerebral ischaemia) (Ref. 59) ( Table 3). These surgical fatal complications led the Italian group to advocate for a more centralised management of bilateral WT, also noticing that the highest rate of conservative procedures arose from a single expert institution (Ref. 59).
The final pathological analysis showed about 30% WTs were stage III in major series (Refs 58, 59, 64, 65) but without distinguishing radical nephrectomy from NSS. Reasons for stage III were not detailed, but one could argue positive margins as well as omission of lymph nodes sampling that seems more Table 2). Like for unilateral WT, histology remains a major risk factor for outcome of bilateral WT, even with adapted postoperative chemotherapy. A real difference in overall survival was also noticed between synchronous and metachronous disease however only one study separated the samples and sample size was small ( Table 2) (Ref. 61). Among the studies involving only synchronous disease, the relapse rate ranged from 13 to 29% (Table 2), with around half being only local relapse of whom half were treated by repeat NSS ( Refs 58,77). No details were given on the survival or recurrence rate depending on the presence or absence of associated anomalies or syndromes whether the bilateral disease was synchronous or metachronous.
End Stage Renal Disease (ESRD) after bilateral WT The major concern for bilateral WT patients after complete remission of the disease is the evolution of their renal function at long-term follow-up. ESRD was estimated at 0.6% of unilateral nonsyndromic WT but increased to 6.7% for patients with genito-urinary anomalies, 36% for patients with WAGR and 74% for DDS patients (Ref. 78). In cases of bilateral WT, ESRD was 11.5% at a mean of 11.5 years follow-up for nonsyndromic patients, 25% for patients with genito-urinary anomalies, 90% for patients with WAGR and 50% for DDS patients (Ref. 78). Hypertension is another concerning risk at long-term follow-up and has been estimated in a recent analysis of GPOH patients at 66.7% of patients undergoing total nephrectomy on one side versus 20% for patients undergoing bilateral NSS. In a recent single institution review of their bilateral WT operated on by NSS in 92.9% of cases, the authors showed a treated hypertension rate of 30.6% of the 36 living patients at a median follow-up of 3.7 years (Ref. 79). An additional seven patients presented nontreated persistent systolic or diastolic blood pressure readings between the 90th and 95th percentile for their age group increasing the rate of hypertension in the cohort to 50%. The renal function assessed by Schwartz formula showed 36.1% of patients having an estimated glomerular filtration rate of less than 90 ml/min/1.73 m 2 but none had <60 (Ref. 80).
Conclusions
Advances in understanding the molecular basis of WT hold much promise for improving the management of the rare but challenging scenario of bilateral disease. Surgical treatment strives to preserve renal function through NSS without compromising complete tumour excision. This is generally facilitated by pre-operative chemotherapy, which brings additional information from assessment of histological response.
Interpretation of the completeness of tumour excision may be confounded by the difficulties in distinguishing NR from fully malignant WT. Here, epigenetic changes may add to current knowledge about the key genetic drivers (WT1 and IGF2 disruption) early in renal development and those occurring as later events (MYCN, TP53 and CTNNB1 mutation). Recent research has highlighted new pathways associated with WT formation, including mutation of new genes involved in renal development and the miRNA processing pathway. The contribution of mutation in these genes to bilateral disease and, separately, to risk of renal failure, requires further assessment by epidemiological studies in combination with molecular analysis. It is likely that these questions will be answered in a relatively short time scale because of large-scale collaborations and ever decreasing costs of molecular analysis.
There remains a need for noninterventional methods to predict histological subtype so that decisions about intensification of pre-operative chemotherapy and timing of surgery can be planned to maximise the possibility of NSS. Recent advances in MRI diffusion measurements and in detecting circulating tumour DNA may aid in assessment here (Ref. 55). Understanding the full genetic spectrum of bilateral WT is important for treatment planning and follow up to optimise the overall survival of these children, many of whom are expected to have constitutional mutations in WT predisposition genes. These may contribute to their risk of further tumours and of end stage renal failure as well as increased tumour risk in their offspring. Optimum management of bilateral WT requires an experienced multi-disciplinary team with input from the point of diagnosis of all the above specialist areas to achieve the best outcome for each patient.
Cancer Research UK (C1188/A4614), Great Ormond Street Hospital Children's Charity and Children with Cancer. KPJ is part supported by the NIHR GOSH UCL Biomedical Research Centre. CB's contribution is funded by the Association Léon Bérard Enfant Cancéreux (ALBEC) charity. | 7,053.2 | 2017-07-18T00:00:00.000 | [
"Medicine",
"Biology"
] |
Exploring E Turkey : Rainfall Precursor Predicts 100 % Earthquake in a Consistent Manner in Just 2 Weeks
Rainfall event is the very specific, reliable unambiguous precursor for the earthquake event. Over the years scientists have hunted for some signal—a precursory sign, however faint—that would allow forecasters to pin-point exactly where and when the big ones will hit. After decades spent searching in vain, many seismologists now doubt whether such a signal even exists. But in a great surprise to everyone, from an ordinary lay man to eminent scientists, 100% earthquakes occur after rainfalls! Though I have the findings for the entire regions of the world, here E Turkey are the region for submission for the period Jan-November, 2012 to study the strong correlation and show the strong evidence to prove that the 100% earthquakes after rainfall in a consistence manner. Anyone can very easily verify the validity of the findings for any forthcoming earthquakes for any regions of E Turkey in just two weeks of period. Nature does not give two different results for the same phenomena, for two different observers. Though there exists a very strong relation between the rainfalls and the earthquakes, scientists and seismologists have not been able to detect and identify this rainfall precursory signal for hundreds of years that consistently occurs before earthquakes. The methodology of rainfall event before earthquakes, even works consistently for earthquake prediction purpose, especially in any regions of the world. Rainfall type precursor is the best approach to predict specific earthquakes, which provide the potential for estimating the epicenter and magnitude of any moderate to strong earthquakes. Earthquakes are more likely when there is rain than it is not. The magnitudes of a resulting individual earthquake depend on the severity of the weather changes. However, in a very few cases the time scales and magnitude do vary substantially as a consequence of local site geology and other factors.
Introduction
The purpose of this paper is to identify the best earthquake precursor that can be predict of any individual earthquake(s) location and magnitude accurately in shortterm, that work really quite well than chance-correlation between the specific regional rain fall and to the respecttive individual regional earthquake in E Turkey.Significantly the earthquake precursors not within the earth but can be witnessed at the earth atmosphere by manifesting weather anomalies.Apart from the expectation of the best laboratory level of accuracy, its significant result narrowly confined to identify and detect the valid and reliable scientific precursor for any impending individual earthquake for the entire regions of the world.It is quite possible to identify the epicenter of the any huge earthquake tragedies waiting to happen, including its magnitude based on the location and severity of the regional weather changes in just two weeks of time.The method-ology discussed here enhances the feasible solution for predicting the quakes reliably and scientifically.There is currently no reliable way to predict the days or months when an earthquake will occur in any specific location.Each and every form of regional weather changes is followed by respective regional earthquakes in a repeatable manner and the happenings of earthquakes in a given region have a recurrent pattern.The onset of any regional weather changes set a time scale of advancing the regional earthquakes in just 10 to 15 days.Earthquake is a short term process-only few weeks to few months and also happens quite in an orderly pattern.things, in a restricted usage, implies that some anomalous phenomenon that always occurs before an earthquake in a consistent manner [1].Weather anomalies are known as the significant reliable unambiguous precursor to predict any individual earthquake with the location, magnitude and time (period).Both the rainfall event and the subsequent earthquake events occur in the same general repeatable patterns, because, they are all governed by, same set of physical laws.
Typically there is a very strong direct relationship between the severity of the weather event and the subsequent magnitude of a respective earthquake.All weather anomalies may not followed by earthquakes but 100% earthquakes are always happening after weather anomaly and more specifically the rainfall events.
List of Unsuccessful Precursors
Scientists have had more success predicting aftershocks, additional quakes following an initial earthquake but they have been far less successful, however, in finding ways to predict when earthquakes will occur.On the basis of foreshocks, Chinese scientists in 1975 predicted that an earthquake would occur near Haicheng in northern China.Officials' evacuated people from buildings in the area, and just hours later an earthquake struck.Despite that dramatic success, however, seismologists have since come to the conclusion that foreshocks are not a reliable way to predict earthquakes.Many large earthquakes strike with no foreshocks or other warning, and small tremors often occur that are not followed by an earthquake.Global efforts to predict earthquakes were started about a century ago and peaked during 1970s.The first scientifically well documented earthquake prediction was made on the basis of temporal and spatial variation of ts/tp relation in Blue mountain lake, New York on the 3rd August, 1973 There are more than 35 lists of precursory activities includes; radon and Helium emanation; electromagnetic emissions; water level and temperature changes; ground uplift and tilt; changes in ionospheric parameters and so on.But none have been shown to be successful beyond what might be expected by chance.
Happenings of Weather Anomalies
Although the tropical disturbances exist all the time in each year in different Ocean basins, but only a few of them may develop into tropical cyclones.Weather is all around us and the entire atmosphere sits on the Earth's surface, held in place.Temperature also varies from place to place due to unequal cooling and heating.All form of weather changes are temperature related events in which, water plays a major role in weather.The weather is simply the state of the atmosphere; the gaseous layer that subject to influence from a host of terrestrial and extra-terrestrial forces.There are several extreme forms of weather anomalies like the formation and strengthening of powerful tropical cyclone; days of heavy rain, massive flooding and mudslides; melting snow, heavy snowfall, fog and intense cold wave; high winds and thick plume of dust storm; blighting extreme heat waves and forest fire and massive waves hit the shores.
Happenings of Seismic Events
It is estimated that around 500,000 earthquakes occur each year, detectable with current instrumentation.About 100,000 of these can be felt.Minor earthquakes occur nearly constantly around the world in places like California and Alaska in the US, as well as in Mexico, Guatemala, Chile, Peru, Indonesia, Iran, Pakistan, the Azores in Portugal, Turkey, New Zealand, Greece, Italy, and Japan, but earthquakes can occur almost anywhere, including New York City, London, and Australia.At the Earth's surface, earthquakes manifest themselves by shaking and sometimes displacement of the ground.When the epicenter of a large earthquake is located offshore, the seabed may be displaced sufficiently to cause a tsunami.Earthquakes can also trigger landslides, and occasionally volcanic activity.Formation of a cyclone process is a "complicated process" and the formation of an earthquake process is a "complex process".The preparatory phase an earthquake, manifest weather anomalies in the atmosphere and the earthquake events, manifest themselves by shaking the ground.
Flawed Understanding of Cyclone Forming
The formation of tropical cyclone is the topic of extensive ongoing research and because of the flawed understanding of the cyclone formation, the seasonal predictions were often marred by significant errors.With the precise nature of the interaction between the atmosphere and the Ocean is not fully understood, what produces a hurricane or a typhoon?Despite years of effort by many meteorologists, the question has not yet been completely answered.A simple buoyant convective hypothesis will not adequately explain the observed fact.It is scientifically true that the incoming solar radiation is the same on the same date irrespective of the year, but the actual conditions of the surface of the tropical Oceans are by no means the same on the same date of each year.Also, the frequency and intensity of hurricanes varies significantly from year to year, and scientists haven't yet figured out all the reasons for the variability.Globally, about 80 tropical cyclones occur annually.The most active area is the western Pacific Ocean, which contains a wide expanse of warm ocean water.To form a tropical cyclone in most situations, water temperatures of at least 26.5˚C [80˚F] or greater are needed down to depth of at least 50 m (150 feet); water of this temperature cause the overlying atmosphere to be unstable enough to sustain convection and thunderstorms Even though all essential factors satisfied, the cyclone formations are confined to the Island concentrated Ocean basins alone and not everywhere in the western Pacific Ocean and the Island free South Atlantic basins or over any major rivers like Nile, Amazon and lakes like Baikal.So there seems to be no scientific relationship exists between the cyclone formation with the solar energy as prime source and the position of the earth with respect to the Sun.Month May (nearest to the Sun) is the least active month, while September (farthest to the Sun) is the most active month.
Cyclones could be formed without the solar energy!Further, the thermal energy factor responsible for cyclone intensification is not in the earth atmosphere but the frictional heat originating under the Ocean bed.This is the reason why the problem of tropical-cyclone intensification continues to challenge both weather forecasters and researchers.For instance, the regions with group of islands in the Oceans are warmer, that indicates the stant seismic activities at work than at the Island free regions Also, the regional geological coordinates for both the tropical Cyclones and the resulting regional quakes are very closely matched.It is from the observations that any form of atrocious weather changes happened in any region, that strongly illustrates the powerful seismic forces at work underground and the process for an earthquake is over in that specific region.
Land Surface Temperature
In 1980s Russian scientist found some short lived thermal anomalies from satellite image before an earthquake in central Asia .Since then many scientist begun to study this thermal anomaly with satellite data in china, Japan, India, Iran and Algerian earthquakes [2,3].Although most of the precursors have an important role in the earthquake prediction process, the thermal anomaly precursor is one of the precursors which have gained more attention and support from the scientific community across the world (Panda et al. 2007).Thermal anomaly is an unusual increase in Land Surface Temperature (LST) that occurs around 1 -24 days prior to an earthquake with increases in temperature of the order of 3˚C -12˚C or more and disappears few days after the event.The proposed method for anomaly detection was also applied on regions irrelevant to earthquakes for which no anomaly was detected, indicating that the anomalous behaviours can be related to impending earthquakes.Though, there may be various physical explanations for thermal anomalies appearing before an impending earthquake, frictional heat will be most appropriate.The tremendous amount of frictional heat energy generated during the preparatory phase of an earthquake could only responsible for the Land Surface Temperature (LST) anomalies.Further, it has been observed that the thermal anomaly is ground related phenomena, not an atmospheric one [4].
Scientific Reasons That Rainfall Connects Earthquakes
During the preparatory phase of an earthquake, two blocks of the Earth's crust slide past one another generating massive amounts of frictional heat.In fact, Kanamori & Brodsky (2001) describe earthquakes as thermal events more than seismic events because most of the energy release during an earthquake goes into heat rather than seismic waves.The actual temperature rise depends on the thickness of the fault zone, which is not known, but for a zone whose thickness is only few centimetres, the temperature could have risen to above 5000˚C.However, seismologists have never directly observed ruptures occurring in Earth's interior.Instead, they rely on the information gleaned from the few available types of data, the most important of which is the record of seismic waves [5,6].This tremendous quantity of heat energy is continuously fed into the atmosphere, setting into motion and creating weather.This massive amounts of frictional heat generated within the earth, reaches on to the Ocean surface by means of convection, a large amounts of steamy water rise off the ocean, forming an area of low pressure (tropical depression).The tropical cyclone draws energy from its thermal reservoir-the warm water at the surface of the Ocean.In this way, the frictional heat responsible for the direct and primary cause for all form of weather changes includes the formation and strengthening of powerful tropical cyclone; cluster of tornadoes; days of heavy rain, massive flooding and mudslides; melting snow, heavy snowfall, fog and intense cold wave; high winds and thick plume of dust storm; record breaking heat waves and forest fire; massive waves hit the shores.
Minimum Two Weeks Period for the Clamping Stress Overcome by the Shear Stress
The instigating factor in earthquake triggering is pressureor stress, transmitted through rock to a fault.If the clamping stress, oriented perpendicular to the fault surface and which acts to hold the fault in place, is overcome by the shear stress, which forces the two sides of a fault to slide parallel against each other, then the fault will rupture.This would be the reason for the minimum two weeks of time delay, between the happenings of the
Correlation between Rainfall and Earthquakes
A scientist, Jerome Mamias, from scrip institution of Oceanography publish a paper In 1989 suggesting that earthquake do occur more often in hot weather.Since, the solid earth surface is in direct contact with atmosphere and oceans and its evolving.The subsurface seismic shaking effects, affects the everyday surface atmospheric weather condition.Because the preparatory phase of the earthquakes itself generates observable anomalies in the atmospheric weather conditions.The overall atmospheric disturbances are seems to be controlled by the seismic events, without which there would be practically no atmospheric weather anomalies.The Ocean water (above 4˚C) is less dense as its temperature increases.This allows surface cold water to sink to the bottom.Basic Physics suggest that a warmer atmosphere can hold more water vapour, for example and should therefore develop more storms [7].Besides the reports of thermal anomaly related seismic activity, data gaps caused by cloud cover [8], indicates that there was cloud formation during the period of thermal anomalies as reported: "A difference in the time of appearance of the thermal IR anomalies has been observed in day time and night time data at the Bam, Bhuj and other studied earthquakes.This is probably due to the typical meteorological phenomena or the appearance of clouds [9].Understanding the law of heat transfer, that heat moves to the cooler region until both reach equilibrium."
Unification of Weather Anomalies and Earthquakes
So far, both the weather and seismic events are observed as continuous and unpredictable and also viewed as not interconnected with one another.The failure of earthquake prediction is not only because of negative observa-tion of precursor phenomena but also due to the failure in connecting the seismic events with the atmospheric events.Weather precursors to earthquakes are the one-to-one correlation.Rainfall is always associated and accompanied with any one of the weather conditions like heat waves, wildfire, strong winds and snow fall.Among the list of weather precursors, the location of rainfall event alone, enough to forecast the epicenter of any forthcoming individual earthquake precisely.Because, during the preparatory phase of a specific seismic regional events are always affect the respective atmospheric weather of a specific regions (Tables 1-3 summarized for the entire regions of South Pacific, Iran and Italy respectively).The happenings of earthquake after weather anomalies are like thunder after lightning.Without identifying the correct earthquake precursory signal that consistently occurs before earthquakes, it would be impossible to predict any individual earthquake of small or big one.It is impossible to predict when and where the next cyclone will form, but based on the location of the cyclone formation it quite possible to say when and where the next earthquake will happen.
Observations
Irrespective of ocean depth (regions like Philippines and Fiji) earthquake events are happening after the rainfall events in a consisting manner in two weeks of time is remarkable to mention.Year after year, many faults in different regions of the world tend to produce repeated earthquakes but varying magnitude, depth, rupture length and rupture mechanism.For the past five years thousands of events are warned, observed, tabulated and analyzed in a rigorous manner for the entire regions of the world (see Tables 1-5).
Methodology
It is quite possible to make the prediction of any individual earthquake to the maximum possible confidence level.Rainfall events are earthquake precursors as a valid only prediction method for every individual earthquake for the entire regions of the world.I have verified more than 1000 of events in the past five years and also continuously sending earthquake warnings based on the weather precursor to more than 2500 quakes events in the same duration to a list of experts to the concerned region.I warned, observed and recorded the earthquake of magnitude M4+ against the rainfall 50 mm above for the entire regions of the world.By applying this prediction method, without wasting away any precious time, scientists can ensure and prevent any horrific loss of life by any potential earthquake and tsunami!Only at the instant of noticing the location and amount of rainfall on a weather map using the web site [10,11] and with the help of the observation table given below, except precise time, it is quite possible to identify the epicentre of the location and magnitude of any forthcoming earthquake(s) in E Turkey.
With the study of interdisciplinary experts based on these findings, the accuracy could be promoted certainly.This promising earthquake prediction method is quite easy, quick and also low cost.
Location
The geological coordinates of the origin of the weather changes (geological process within the earth) are vitally very important (because a storm may move thousands of kilometers away from its origin) for fixing the geological coordinates for any forthcoming earthquakes.Sometimes the regional weather changes over a same region have been affected by the different geological process of dif-ferent adjacent regions.
Magnitude
The magnitudes of a resulting individual earthquake depend on the severity of the weather changes and also depends local site geology and other factors.Up to Magnitude 5 for any individual event of rain fall 50 to 100 mm.M5-6 for a wide spread rainfall (say 50 to 100 mm and above).
M6-7 for a very wide spread continuous heavy rain (say 50 to 100 mm and above), plus blighting extreme heat waves and forest fire.
Time
Normally, the time interval between the regional weather changes and the resulting respective individual earthquake(s) are usually around 10 to 15 days.However, for a heaviest snow fall, the resulting earthquake takes nearly 2 -3 months of time.Only at the laboratory level, it is quite possible to predict the precise time of any forthcoming earthquake.Like seasonal weather changes, earthquakes are also having a cyclic pattern to occur year after year in the same region except its magnitude, which will vary in accordance with the extremity of the regional weather changes.
Conclusions
In my continuous efforts, I got the results conformed to reality.Though there are several other methods and models used to predicting both the monsoon and earthquakes, but all are turned significantly unsuccessful.
It is quite impossible to say when and where the next cyclone will form and at what intensity, but based on the location of low depression formation, it is quite possible to say when and where will the next earthquake will occur and also its magnitude based on the severity of the weather event.
Reliably there is some physical connection between the weather anomalies and the subsequent earthquake event.It can be observed that the weather changes are not having any surface origin but often very closely related to geological process within the earth, the preparatory phase of an earthquakes and both responding the same dynamic earth forces.It has been observed that the seasonal weather changes are due to the orbital motion of the earth and, in the same way the constant trigger of large-scale effects of plate tectonics could be with the same effects of orbital motio of the earth.Without in-n Copyright © 2013 SciRes.IJG cluding the weather anomalies prediction of earthquakes are impossible and without considering effects of crustal movements and its effect on atmosphere, it would be impossible to predict monsoon successfully.
Acknowledgements
This paper would not have been possible without the data source available from the websites USGS and WMO.I am very particularly grateful to Dr. Michael L Blanpied, Associate Coordinator, USGS Earthquake Hazards Program for his all time pleasing and suggesting expert level of very valuable scientific guidance through his numerous correspondence.Then I would like to express my very great appreciation to Dr. Jeremy Zechar, scientist, CSEP, USA, a man of helping tendency, for his sincere efforts to turn my hypothesis into testable statements.
I am very much thankful to T. Tamilselven, Managing Director, Super Quality Services for his magnanimous financial assistance and permission to publish this manuscript.
My special thank goes to Prof. Dr. K. V. Gopalakrishnan, (Deceased) IIT, Chennai, India; Dr. P. Srinivasulu, Deputy Director (Retd), SERC, CSIR, Chennai, India and Mr. S. Nagarajan, Salem, Tamilnadu, India, serves as a great source of inspiration and support for more than 25 years.Finally my heartfelt thanks to the SCIRP grant office, unknown reviwers of this manuscript and my family members and friends for their every valuable help and assistance. | 4,957.4 | 2013-06-20T00:00:00.000 | [
"Geology"
] |
A Reaction Microscope for AMO Science at Shanghai Soft X-ray Free-Electron Laser Facility
: We report on the design and capabilities of a reaction microscope (REMI) end-station at the Shanghai Soft X-ray Free-Electron Laser Facility (SXFEL). This apparatus allows high-resolution and 4 π solid-angle coincidence detection of ions and electrons. The components of REMI, including a supersonic gas injection system, spectrometer, detectors and data acquisition system, are described in detail. By measuring the time of flight and the impact positions of ions and electrons on the corresponding detectors, three-dimensional momentum vectors can be reconstructed to study specific reaction processes. Momentum resolutions of ions and electrons with 0.11 a.u. are achieved, which have been measured from a single ionization experiment of oxygen molecules in an infrared (IR), femtosecond laser field, under vacuum at 1.2 × 10 − 10 torr, in a reaction chamber. As a demonstration, a Coulomb explosion experiment of oxygen molecules in the IR field is presented. These results demonstrate the performance of this setup, which provides a basic tool for the study of atomic and molecular reactions at SXFEL.
Introduction
A reaction microscope (REMI), or cold-target recoil-ion momentum spectroscopy (COLTRIMS), is/uses a device designed to measure the momenta of charged reaction fragments in coincidence [1][2][3][4].In 1987, the first system of spectrometers to detect recoil ions using static gas as a target, at room temperature, was built by J. Ullrich et al. [5].A uniform magnetic field generated by a pair of Helmholtz coils, having been introduced a few years later, measures electrons with a 4π solid angle [6,7].Since then, kinematically complete experiments have become possible and the continuous development of supersonic gas jet technology [8][9][10] and detector technology, based on a micro-channel plate and delayline anode, has greatly improved momentum resolution [11][12][13][14].After development of more than three decades, this technology has become mature and is presently widespread.By measuring the energies and emission angles of charged products with high precision, detailed information can be obtained for deeper investigation of atomic and molecular reactions with ions, electrons or photons.
In the past 10 years, REMI/COLTRIMS has been employed for a tremendous amount of studies in synchrotron radiation.On the other hand, with the rapid innovation of laser technology, especially chirped pulse amplification technology [15], ultra-short laser pulses at high intensities have provided extraordinary experimental conditions for the study of atomic, molecular and optical (AMO) physics.Through the combination of reaction microscopes and laser devices [16][17][18], many physical phenomena of the reactions of gas targets with photons have been explored, such as multi-photon ionization, above-threshold ionization, tunneling ionization, high harmonic generation (HHG) and so on.As a new generation of coherent light source, X-ray free-electron lasers (FEL) have the characteristics of high power, a wide and continuously tunable wavelength, short pulses and good coherence, serving a number of applications in various areas of science [19,20].In AMO science, the most far-reaching applications are manifested in the topics of nonlinearity induced by single-or many-photon absorption, and the ultrafast molecular dynamics and structural imaging of a single molecule in the regime of UV-to-hard-X-ray photons [21][22][23][24][25]. Fewphoton induced atomic and molecular dynamics [26][27][28][29][30] and isomerization [31,32] have been studied in detail.Some exotic physical phenomena, such as the "hollow atoms" [33] and the "molecular black hole" [34] have been observed recently.The former is formed by quickly ejecting a few inner-shell electrons, resulting in the phenomenon of "transparency" to X-rays [35].In the latter case, the X-ray pulse first strips the number of electrons from a heavy atom, due to its huge cross section, and then a highly positive resulting charge traps electrons from neighboring atoms within the molecule.X-ray lasers can also ionize a number of electrons from one heavy atom within a large biomolecule, which leads to the rapid redistribution of electrical charge in a tiny time window and provides rich information for analyzing biomolecules using X-ray lasers.Further, some basic physical processes of atoms and molecules have been reached using X-ray lasers with the control of wavelength, phase and intensity [36,37].Moreover, the circularly polarized light generated in free-electron lasers is used to study the circular dichroism of atoms, linear molecules and chiral molecules [38][39][40].More recently, experimental studies were extended to the photoelectron diffraction imaging in a molecular breakup frame using an X-ray FEL [41].
The construction of the Shanghai Soft X-ray Free-Electron Laser Facility (SXFEL) began in 2014.Two beamlines, including the seeded line and SASE line, together with five end stations, including cell imaging, AMO, ultrafast physics and surface chemistry, are expected to be put into user operation in 2022 [42].Its focused free-electron laser beam with high pulse intensity (>10 14 W/cm 3 ) and high single-photon energy (100-600 eV) brings a new opportunity for the research of AMO physics.The reaction microscope introduced, here, is installed at the south branch of the seeded FEL line currently and all the endstations are movable in principle, thus, experiments at the SASE beamline are also possible.Multiparticle coincidence detection can be realized by REMI for both ions and electrons, featuring a high collection rate, wide energy detection range and high resolution.These advantages make the instrument ideal for coincident experiments on atoms and molecules.Limited by the low repetition of SXFEL, at present, REMI may be proper for measurements of molecules with heavy atoms.Here we introduce is principles, design, structure and test results, using an infrared femtosecond laser.
Apparatus Setup 2.1. Overview
The setup we present here is mainly composed of a vacuum system, a supersonic gas jet system, a spectrometer system and the data acquisition system.Atoms or molecules are collimated and cooled to form a supersonic target and react with the focused laser.Then, the ions and electrons are accelerated and guided onto opposing detectors by electric and magnetic fields.The three-dimensional momentum vector of each particle is accurately reconstructed from the particle's arrival position (X,Y) and TOF at the detector.
To get a good resolution, a cold supersonic gas beam is essential to decrease the thermal momentum spread.The gas from the high-pressure gas bottle is forced into the low pressure vacuum chamber.Through two skimmers and two pairs of slits perpendicular to each other, a thin gas target is formed in the intersection chamber (science chamber).A multistage vacuum system is used to maintain a typical vacuum of 1.2 × 10 −10 torr in the science chamber.The schematic picture is shown in Figure 1.A uniform electric field from the spectrometer, along with the uniform magnetic field generated by a pair of Helmholtz coils (orange) are applied to guide the recoil ions and electrons to the bilateral detectors.The details of key components of this apparatus are introduced in the following section.
Supersonic Gas Jet
At room temperature, the thermal motion of atoms and molecules in a gas target would bring a great error to the measurement of momentum.Therefore, the formation of a welllocalized and internally cold gas target is one of the key factors to ensure high-resolution measurement.In addition, it is desirable to ionize less than one molecule per laser pulse in experiments, in order to make accurate ion-electron coincidence measurements, so, a thin target is required, considering the high power of SXFEL.
The composition of the supersonic gas jet system is depicted in Figure 2. A 50 µm nozzle is installed on a three-dimensional adjustable manipulator.The high-pressure (1-5 bar) gas froma conventional gas cylinder passes into the low-pressure vacuum chamber through the nozzle.The gas in the low-pressure region expands quasistatically and adiabatically so that the enthalpy of the gas is converted into directional kinetic energy.A conical skimmer of 0.2 mm inner diameter is placed at 15-mm downstream of the nozzle to extract the centerline gas beam from the so-called zone of silence.The length of zone of silence during the experiments is estimated to be 30 mm by equation [43] where p 0 is the initial gas pressure and p b is the background pressure in the first stage.Subsequently, a skimmer of 0.4 mm in diameter and mounted parallel to the first skimmer selects particles with a small momentum spread in the x and z planes.After that, two pairs of adjustable slits are arranged to tune the size of the gas beam in the direction perpendicular to the jet's propagation direction.An internally cold supersonic gas target is then obtained and the remaining gas is collected in the dumping chamber, which is an independent residual gas extraction system (not shown here).The distance between the nozzle and reaction center is about 1045 mm.As is shown in Table 1, the whole supersonic gas jet system has six vacuum stages.The low vacuum pressurein the source chamber is gradually transitioned to ultra-high vacuum conditions when reaching the intersection chamber via several differential pumping stages.At the intersection region, the maximum diameter of the gas jet(uncollimated) in the x and z directions is around 4.0 mm, and the size adjustment accuracy is 0.02 mm, which is determined by the adjustment accuracy of the slits.The density of gas target is estimated to be about 10 8 /cm 3 .From the speed ratio, S, of 38.7 in the experiment of the single ionization of He, which is the mean jet velocity divided by the thermal spread in velocities, it is estimated that the gas jet is eventually cooled to 0.5 K [44].Ultra-high vacuum conditions and a thin gas-jet target make the coincidence measurement of ions and electrons in the strong field of free electron laser feasible.Schematic diagram of supersonic gas-jet system structure.The green arrow represents the propagation direction of the gas jet passing through six chambers from the nozzle to the dumping chamber.After passing through two skimmers and two pairs of slits, the jet collides with the laser (pink) in the center zone of the interaction chamber (purple dot) and is collected in the dumping chamber.Table 1.Typical pressures of different chambers before and after gas injection.Numbers one through six represent the source, first differential, second differential, third differential, intersection and dumping chambers, respectively.The pressures are given in torr.
Spectrometer
Both the reaction between the supersonic gas jet and the laser and the collection of the reaction particles occur in the intersection chamber.The spectrometer, providing a uniform electric field to accelerate ions and electrons, is schematically shown in Figure 3.It consists of 23 pieces of 1-mm-thick stainless steel-ring electrodes of 120 mm inner diameter and 200 mm outer diameter.The rings are separated by an 11-mm distance between each other by insulating ceramics and electrically connected by a cascade of 30-kΩ resistors.Thus, the voltages on each ring electrode are uniformly distributed simply by applying two voltages to the ends of the spectrometer.The length of ion-flight region is 100 mm, while that of electron is 184 mm.The shorter length of ion-acceleration region makes the ion detector more suitable for particles with higher energies, such as recoil ions from the Coulomb explosion of molecules.Two meshes are mounted at both ends of spectrometer to eliminate the influence of the high voltage from the MCPs and to ensure the uniformity of the electric field.The stainless steel grids (wire diameter: 25 µm; mesh size: 292 µm) provide a transmittance of 85%.
For the same momentum, the electron and ion velocities differ by a large mass ratio.Thus, for increasing electron energies, the detection efficiency decreases.Increasing the electric field would reduce the electron-momentum resolution.In order to achieve a 4π solid-angle measurement for high-energetic electrons, a pair of Helmholtz coils is necessary to generate a uniform magnetic field along the spectrometer axis, constraining the trajectories of the electrons.In this way, electrons will fly in spiral tracks before reaching the detector.Helmholtz coils are a pair of parallel coils with the same radius, number of turns and resistance.The distance between the two coils and the radius of the coils are both 87.5 cm.According to the test, the coils can stably generate a maximum magnetic field of 17 Gs with no air cooling or water cooling.The effect of the electric and magnetic fields on particle trajectories can be seen in Figure 3.For electrons with an energy of 15 eV and oxygen ions with an energy of 0.2 eV, charged particles towards different directions are all collected by the detector with an electric field of 4 V/cm and a magnetic field of 6 Gs.Then, the three-dimensional momenta of ions and electrons can be reconstructed from the TOF, achieving the detection position of each event.
Data Acquisition System
To detect the fragment ions and electrons in coincidence, a fast data acquisition system is required.Two MCP detectors equipped with delay-line anodes are employed to measure the arrival time and impact position of the charged fragments [45].For ion detection, to have a high collection efficiency for Coulomb explosion studies, a pair of larger MCPs (120 mm diameter) is employed with a square delay-line anode (DLD 120, Roentdek).For electron detection, as a result of the small time-of-flight differences, a hexagonal anode is of significant importance in reducing the dead time and increasing detection efficiency [14].The effective detection area is limited by the diameters of the MCPs and anode about 80 mm (Hex75, Roentdek).It should be noted that a mesh with the same parameter described before is placed in front of each detector to generate a post-acceleration voltage so as to increase the MCP detection efficiency.
The delay-line readout relies on a fast and precise timing measurement.The dataacquisition system combines the data collected from each detector into complete events, which are stored for offline analysis.The positive signals from the MCPs are inverted and then amplified by fast-timing amplifiers (ORTEC FTA820) along with signals from the delay-line anodes.The amplified signals (gain 200) are inputs to the constant-fraction discriminator (ORTEC QUAD935) with a minimum pulse-pair dead-time of 15 ns to discriminate the signal and convert it into a signal in NIM format, which can minimize the influence of changes in the size and shape of the analog signal on signal processing.
A time-to-digital conversion module (Roentdek TDC8HPi) containing two cards is used to convert the input NIM signal into a digital signal.Each card contains eight, highprecision acquisition channels and each channel can perform multi-hit response with a high time resolution of 35 ps RMS with 25 ps LSB (Least Significant Bit).Taking the trigger signal as the time-zero point, all time signals belonging to the same event are stored in the data packet as a whole.The data stored in event-by-event mode can reconstruct complete information of the particle momentum in the offline analysis.The analysis code is embedded in the Go4 (GSI Object Oriented On-line Off-line system) analysis environment (GSI, 2018) based on CERN's ROOT (CERN, 2018).Besides, in the testing phase, we also use the software package CoboldPC (Computer Based Online-offline List-mode Data analyzer for PC) supported by RoentDek to realize the data acquisition and data analysis.The timing resolution achieved is about 1 ns and the position resolution of these detectors is typically about 0.1 mm, which can be improved a bit via careful calibration.
Result and Discussion
The test experiments have been performed with an infrared femtosecond laser, which had a wavelength of 800 nm, a pulse width of 35 fs, a focal length of 25 cm and a repetition rate of 1 kHz.Through the single ionization experiment of O 2 , the momentum resolution of the ion and electron detectors and the detection range of electron momenta are given.The result of the Coulomb explosion experiment of O 2 demonstrates the performance for multi-particle coincidence detection and the large ion-momentum detection range.Through data analysis, several oxygen Coulomb explosion channels are clearly found and detailed information on their explosion pattern is obtained.
Single Ionization Experiment of Oxygen Molecules
The 2.3 × 10 14 W/cm 2 -intensity laser interacted with the oxygen molecules, and photoelectrons and O + 2 ions were generated in the ionization process.During the experiment, the extraction electric field was set to 1.5 V/cm and the coil current was set to 4.0 A, which produced a magnetic field around 7 Gauss.
Figure 4a shows that most of the ions are O + 2 from the gas jet.It can be seen in Figure 4b that electrons and O + 2 ions had a clear correlation in TOF, which represents the momentum conservation between those two particles in the z direction.After selecting the events within the coincidence line and calculating the corresponding momenta, a one-dimensional spectrum was generated by summing the electron momentum and the ion momentum in Figure 4c, and a Gaussian-like distribution was obtained.By fitting the spectrum, as is shown with the red curve, a good agreement with the experimental data was achieved.The full-width-at-half-maximum (FWHM) of the momentum sum spectrum was 0.11 a.u., and this represents the convoluted momentum resolution of the two detectors.Thus 0.11 a.u. was the upper limit of momentum resolution for both electrons and ions in this test.With the position resolution of 0.1 mm, the resolution in the x and y directions could be thus estimated to be 0.13 a.u.
An electron-TOF-radius correlation is presented in Figure 4d, where the radius is the distance between electron impact position and the origin position on the detection plane.The period of the electron cyclotron motion was 49 ns and the corresponding magnetic field was 7.2 Gs.Taking into account the detection area of the electron detector with a diameter of 80 mm, the electron-momentum detection range could be calculated to be within 1.13 a.u.In subsequent tests, it was measured that the device could provide a stable magnetic field of 17 Gs, which corresponds to a momentum detection range of 2.81 a.u. and an energy detection range of 108 eV for electrons.
Coulomb Explosion Experiment of Oxygen Moleculse
In the oxygen Coulomb explosion experiment, a femtosecond laser with 1.0 × 10 15 W/cm 2 -intensity and a polarization direction along the y axis interacted with the oxygen molecules.Due to the larger momentum from the Coulomb explosion, the accelerating electric field was set at 15 V/cm to ensure that all ions could be detected by the detector.
The ion fragments produced from both supersonic jet and the background gas were detected and the TOF spectra are shown in Figure 5, which shows that the composition of background gas was mainly water and hydrogen molecules.In contrast to the peaks of H 2 O + , H + 2 and H + originating from the background, the O + 2 peak is obviously sharper due to the different thermal properties between the warm residual gas and the cold jet.The O 2+ peak is wider than O + 2 peak, which indicates that a Coulomb explosion resulting in a larger kinetic energy occurred.At the TOF position, corresponding to a charge/mass ratio of 16 a.m.u./e, a narrow peak and a wide peak, which correspond to the O 2+ 2 ions from the ionization processes and the O + ions from the Coulomb explosion processes, respectively, are overlapped.After the ion matching and data selection, more information on specific channel could be further analyzed.The released momenta were converted to their released kinetic energies.Figure 7 presents the peak distributions of kinetic energy releases for the four most distinct channels of (1,1), (1,2), (2,2) and (2,3), the peak positions of which were 11.3 eV, 20.5 eV, 35.1 eV and 49.6 eV, respectively.Compared with the previous results listed in Table 2, the measured KERs were generally smaller than those from electron and ion impact experiments, due to the stretched molecular bond length in the laser field, which is also consistent with other laser-induced fragmentation experiments [46,47].On the other hand, Wu et al. found that the KERs become larger with increasing laser intensity [47].
The larger KER values indicated a higher laser intensity in our experiment.For example, the measured KER of 11.3 eV from the channel (1,1) was relatively larger compared with most KER peaks obtained in the 8-fs laser experiment [48] and smaller compared with the results from the electron impact experiment using Doppler-free spectroscopy [49].For the Coulomb explosion process of O 2+ 2 , extensive studies have been performed, both theoretically and experimentally [46,54].Explosions of the parent dications are generally considered to occur near the equilibrium internuclear distance of the neutral molecule [50].Assuming that the oxygen atomic ions are in the ground state (O + ( 4 S) + O + ( 4 S)), it can be inferred unambiguously by the KER and the potential energy curves of O 2+ 2 [49] that the electronic state of O 2+ 2 is B 3 Π g , which is an excited state.The electronic states of other parent ions cannot be identified directly, due to the non-vertical ionizing transition process, which leads to an energy defect.For multi-charged molecular ions O x+ 2 , the repulsion of the charges drove the dissociation process, so the classic Coulomb explosion model was used to estimate the internuclear distances, R, where the explosion occurred [55].They were calculated to be 1.40 Å, 1.64 Å and 1.74 Å from the corresponding KERs for channels (1,2), (2,2) and (2,3).This result showed that the O-O bonds, prior to the Coulomb explosion, were elongated and the internuclear distances before the explosion increased with increasing charge state, which is consistent with previous results [47,56].
Conclusions
In this work, a new multi-particle reaction microscope, constructed for studies of AMO science at the SXFEL, has been explored The supersonic gas jet, the design of its spectrometer and the data acquisition system containing the detectors have been described in detail.A cold gas target with a density of 10 8 atoms/cm 2 has been shown capable of reacting with light at the center of the science chamber, where the vacuum is maintained at better than 1.2 × 10 −10 torr.To test the performance of the device, a single ionization experiment and Coulomb explosion experiment of O 2 have been performed with a strong laser field at 10 14 to 10 15 W/cm 2 intensities.The momentum resolution is shown better than 0.11 a.u.for both ion and electron detection, and the experimental results from the Coulomb explosion processes are consistent with previous studies.It has been shown that the instrument can perform multi-particle coincidence measurements at a wide range of energies and detect the momenta of ions and electrons at full-solid angles.From the kinetic energy releases of the explosion channels, the electronic state of O 2+ 2 and the internuclear distances of multi-charged ions O x+ 2 (x = 3-5) before the explosion have been deduced.The high resolution and good operation of the instrument provide us with a solid foundation for future atomic and molecular experiments at the Shanghai SXFEL facility.
Future Plan
In the near future, we plan to further improve the vacuum condition.Due to the strong light intensity of soft X-rays, a higher-vacuum environment is favorable for coincidence measurements.A pumping system with non-evaporate getter pumps is being prepared and is expected to increase the current vacuum by an order of magnitude.Additionally, we plan to design a new spectrometer that can switch between REMI mode and VMI mode in order to adapt to different experimental conditions.In addition, a fast analog-to-digital converter (fADC) system for data acquisition has been designed to improve the detection efficiency at a higher counting rate and retain the original data information completely, including the pulse heights and shapes of detector signals.
Figure 1 .
Figure 1.Overview of the REMI at SXFEL.A mutually perpendicular gas jet line (green arrow) and FEL line (purple arrow) meet in the intersection chamber, which contains the spectrometer and detectors.A pair of coils (orange) is used to constrain the electron trajectory.
Figure2.Schematic diagram of supersonic gas-jet system structure.The green arrow represents the propagation direction of the gas jet passing through six chambers from the nozzle to the dumping chamber.After passing through two skimmers and two pairs of slits, the jet collides with the laser (pink) in the center zone of the interaction chamber (purple dot) and is collected in the dumping chamber.
Figure 3 .
Figure 3. Schematics of REMI including the geometric dimensions (unit: mm) of the REMI.Trajectories of ions and electrons with different emission angles are shown as colored lines.
Figure 4 .
Figure 4. (a) TOF spectrum of a photo-ion for single ionization of oxygen; (b) TOF correlation spectrum of electron and O + 2 ion ; (c) sum of momentum of electron and O + 2 ion; (d) radius as a function of TOF for electrons.
Figure
Figure6apresents the TOF correlation between the first ion and the second ion and five bright curves correspond to five different Coulomb explosion channels coming from two-to-five-fold charged oxygen ions.It can be seen from Figure6bthat most of the reaction products were within the range of the detector, and several distinctly separated rings appeared as a fingerprint of the Coulomb explosion, occurring in the polarization direction mostly.By calculating the momentum of all kinds of ions and sifting through the momentum conservation conditions, five Coulomb explosion channels were obtained including (1,1), (1,2), (1,3), (2,2) and (2,3).
Figure 7 .
Figure 7. Kinetic energy releases of recoil ions in various channels.
Table 2 .
Kinetic energy release for the coulomb explosion of O x+ 2 (x = 3-5) in the present experiment, together with previously reported results. | 5,800.2 | 2022-02-10T00:00:00.000 | [
"Physics"
] |
Malaria in pregnancy alters l-arginine bioavailability and placental vascular development.
Reducing adverse birth outcomes due to malaria in pregnancy (MIP) is a global health priority. However, there are few safe and effective interventions. l-Arginine is an essential amino acid in pregnancy and an immediate precursor in the biosynthesis of nitric oxide (NO), but there are limited data on the impact of MIP on NO biogenesis. We hypothesized that hypoarginemia contributes to the pathophysiology of MIP and that l-arginine supplementation would improve birth outcomes. In a prospective study of pregnant Malawian women, we show that MIP was associated with lower concentrations of l-arginine and higher concentrations of endogenous inhibitors of NO biosynthesis, asymmetric and symmetric dimethylarginine, which were associated with adverse birth outcomes. In a model of experimental MIP, l-arginine supplementation in dams improved birth outcomes (decreased stillbirth and increased birth weight) compared with controls. The mechanism of action was via normalized angiogenic pathways and enhanced placental vascular development, as visualized by placental microcomputerized tomography imaging. These data define a role for dysregulation of NO biosynthetic pathways in the pathogenesis of MIP and support the evaluation of interventions to enhance l-arginine bioavailability as strategies to improve birth outcomes.
INTRODUCTION
An estimated 125 million women become pregnant in malaria-endemic regions every year, with more than 85 million at risk of Plasmodium falciparum malaria (1)(2)(3). Pregnant women, particularly first-time mothers, are more likely to be infected with falciparum malaria and to experience complications including maternal anemia, pregnancy loss, and low birth weight (LBW) resulting from small-for-gestational age (SGA) outcomes and/or preterm birth (PTB) (4)(5)(6)(7). Malaria in pregnancy (MIP) leads to the sequestration of malaria-infected red blood cells in the intervillous space of the placenta and the recruitment of mononuclear cells, generating a localized immune response at the maternal-fetal interface (8,9). MIP-induced immune responses in the placenta can disrupt normal angiogenic processes, resulting in placental insufficiency and the inability of the placenta to support rapid fetal growth in the third trimester, ultimately leading to SGA, PTB, and LBW (10).
Despite the negative impact of MIP on global maternal-child health, there are currently limited intervention strategies to improve maternal and neonatal outcomes. Evidence from pre-eclampsia and other causes of adverse pregnancy outcomes suggests that interventions to promote placental angiogenesis may improve birth outcomes (11)(12)(13)(14)(15)(16). l-Arginine is an essential amino acid in pregnancy and an immediate precursor in the biosynthesis of nitric oxide (NO) via a family of nitric oxide synthase (NOS) enzymes (17)(18)(19)(20). NO plays a central role in endothelial growth and function as a critical regulator of the vascular endothelial growth factor (VEGF) family of proteins, including placental growth factor (PGF), the angiopoietins (ANG-1 and ANG-2), and their respective soluble receptors (SFLT-1 and STIE-2) (21). The VEGF family of proteins is essential for proper placental vascularization, vessel growth, and remodeling throughout pregnancy (12,22,23). NO production increases pro-angiogenic VEGF-A and PGF in human trophoblast cultures, whereas inhibition of NO synthesis results in elevated SFLT-1 and hypertensive responses in pregnant mice (21,24,25). NO also reduces the expression of endothelial adhesion receptors and proinflammatory cytokines, which contribute to increased monocyte accumulation and MIP pathogenesis (26,27).
There is considerable evidence that reduced bioavailable NO contributes to the pathophysiology of severe malaria (28)(29)(30). Malariainduced hemolysis depletes l-arginine and NO, contributing to hypoarginemia and impaired NO synthesis (28). NO bioavailability is further impaired by the generation of endogenous inhibitors of NO biogenesis, asymmetric dimethylarginine (ADMA) and symmetric dimethylarginine (SDMA). ADMA is a competitive inhibitor of NOS, whereas SDMA enhances inflammation and oxidative stress and, at high concentrations, impairs arginine transport into cells (31,32). A reduced ratio of l-arginine to ADMA (a measure of l-arginine bioavailability) has been reported in children and adults with severe malaria (33,34). NO may also be depleted by NO scavenging by cellfree hemoglobin as a result of malaria-induced hemolysis (33). In severe malaria, reduced NO bioavailability contributes to endothelial dysfunction and can be reversed in both human infection and experimental models by parenteral l-arginine infusion (35,36). Pregnancy can also contribute to hypoarginemia because arginine is continuously metabolized to meet the high NO demands required to support placental vascular growth and remodeling (37). Moreover, diets deficient in l-arginine are common in low-resource and malaria-endemic regions and may further deplete bioavailable l-arginine during pregnancy (38,39). We hypothesized that maternal circulating concentrations of l-arginine, ADMA, and SDMA would be altered in women (Table 3). An increase in l-arginine or the l-arginine/ADMA ratio was positively associated with MUAC after adjustment for maternal age and gestational age at enrollment (P = 0.005 and P = 0.003, respectively; Table 3), suggesting that higher l-arginine concentrations are associated with improved nutritional status. There was a strong negative relationship between ADMA and SDMA and maternal hemoglobin after adjustment for maternal age and gestational age at enrollment (P < 0.0001 and P = 0.001, respectively; Table 3).
Increased ADMA at enrollment is associated with adverse birth outcomes We investigated the association between the l-arginine pathway and adverse birth outcomes. A total of 167 (43.5%) women had an adverse birth outcome consisting of PTB or SGA. Adverse birth outcomes (PTB or SGA as a composite outcome) were associated with increased ADMA at enrollment compared to normal birth outcome [mean (SD), term/AGA, 0.43 M (0.07); adverse birth outcome, 0.46 M (0.09); P = 0.007 (Student's t test)]. Using log-binomial regression, ADMA was associated with an increased relative risk of having an SGA infant [adjusted relative risk, 21.2 (95% CI, 2.27 to 197.9); P = 0.007] after adjustment for maternal age, gestational age at enrollment, BMI, socioeconomic status, smear-positive malaria at enrollment, and treatment group (Table 1).
MIP is associated with lower concentrations of l-arginine and higher concentrations of ADMA across pregnancy
To evaluate the kinetics of the l-arginine pathway across pregnancy, we quantified longitudinal concentrations of l-arginine, ADMA, and SDMA in the plasma of 94 of the 384 women included in this study, who had between two and five samples collected before delivery (mean of 3.3 visits; n = 603 samples tested). We observed an increase in plasma concentrations of ADMA (Fig. 1A) and SDMA (Fig. 1B) over the course of pregnancy (P = 0.01 and P < 0.0001, respectively, linear regression of biomarker concentrations by gestational age). There was no change in concentrations of l-arginine (Fig. 1C) or l-arginine/ADMA (Fig. 1D) during pregnancy (P > 0.05 for both outcomes).
Because increased ADMA at enrollment was associated with increased relative risk of SGA, we explored this relationship further, comparing ADMA concentrations over the course of pregnancy (Fig. 3). We used linear mixed-effects modeling to evaluate the relationship between the ADMA concentrations and the SGA outcome, ‡Submicroscopic malaria is defined as polymerase chain reaction (PCR)-positive and microscopy-negative. Participants who were positive by microscopy were excluded (n = 91, where n = 15 were positive by microscopy but negative by PCR), as were those who were missing microscopy results (n = 5) or missing PCR results (n = 4). by guest on April 8, 2019 http://stm.sciencemag.org/ Downloaded from adjusting for gestational age, maternal age, BMI, malaria status, socioeconomic status, and the interaction between treatment arm and gestational age. Those who subsequently delivered SGA newborns had significantly higher concentrations of ADMA during pregnancy than those who later had AGA births [ 2 (2) = 8.76, P < 0.02]. The groups converged over time, as demonstrated by the addition of the interaction term between SGA and gestational age [ 2 (1) = 4.62, P < 0.04; table S1]. Thus, increased ADMA early in pregnancy (reflecting reduced NO biosynthesis) is associated with SGA, but the effect di-minishes over the course of gestation, and those with lower baseline concentrations of ADMA show increases later in pregnancy.
Dietary l-arginine supplementation improves fetal weight and fetal viability in an experimental model of MIP On the basis of the findings of altered pathways of l-arginine and NO biosynthesis in humans, we next explored the mechanism and interventions in an experimental MIP (EMIP) mouse model. EMIP recapitulates several features of human MIP, including placental Table 2. Association between the nitric oxide biosynthetic pathway and malaria at enrollment. Malaria at enrollment is determined in maternal blood at enrollment, data are presented as means (SD) and analyzed using Student's t test, and log-binomial regression was used to obtain relative risk and corresponding 95% confidence intervals adjusting for maternal age and gestational age at enrollment. *Submicroscopic malaria is defined as PCR-positive and microscopy-negative. Women were excluded if they were missing microscopy results (n = 5) or PCR results (n = 4) or were microscopy-positive (n = 91). Table 3. Regression analysis examining the nitric oxide biosynthetic pathway and maternal nutritional status. Linear regression analysis was used, adjusting for maternal age and gestational age at enrollment. (10,41). We used the EMIP model to examine the impact of dietary l-arginine on birth outcomes. l-Arginine supplementation to dams did not influence maternal peripheral parasite densities (G19) or litter size (table S2). l-Arginine supplementation did not alter birth weight or fetal viability in offspring from uninfected, control litters ( Fig. 4, A and B). However, in malaria-infected dams, l-arginine supplementation increased fetal weight (P < 0.05; Fig. 4A
EMIP decreases circulating l-arginine
We performed mass spectrometry on serum collected at G19 to quantify circulating concentrations of l-arginine, ADMA, and SDMA in malaria-infected and uninfected pregnant dams. At G19 (day 6 of infection), concentrations of l-arginine were significantly reduced in malaria-infected dams (P < 0.001; Fig. 5A). Malaria-infected dams receiving l-arginine supplementation showed reduced serum ADMA (P < 0.01; Fig. 5B) and SDMA (P < 0.05; Fig. 5C), although there was no significant difference in l-arginine/ADMA ratio (P > 0.05; Fig. 5D) compared with the control group.
l-Arginine supplementation in EMIP alters inflammatory and angiogenic mediators in the placenta
We hypothesized that l-arginine supplementation increases fetal weight and viability by reducing malaria-induced inflammation in the placenta and by promoting the placental vascular development and remodeling required for healthy pregnancy outcomes. Therefore, we examined the expression of inflammatory and angiogenic factors in placental tissue from viable pups collected at G19 (Fig. 6). EMIP resulted in increased placental expression of the proinflammatory C5a receptor (C5ar; P < 0.001; Fig. 6B), Icam-1 (P < 0.001; Fig. 6C), and pro-angiogenic Ang-2 (P < 0.01; Fig. 6F). l-Arginine supplementation did not alter gene expression in placental tissue from uninfected dams compared to control uninfected dams. In malaria-infected dams supplemented with l-arginine, we observed reduced gene ex pression of inflammation-related proteins C5 (P < 0.001; Fig. 6A) and Icam-1 (P < 0.01; Fig. 6C). l-Arginine supplementation in malaria-infected dams also resulted in changes to angiogenic mediators, with an upregulation of Tie-2 (P < 0.01; Fig. 6D) and Ang-1 (P < 0.05; Fig. 6E) and a down-regulation of Ang-2 (P < 0.05; Fig. 6F). In addition, there was reduced expression of the pro-angiogenic factor Vegf-a (P < 0.01; Fig. 6G) and its negative regulator Flt-1 (P < 0.05; Fig. 6H) in placental tissue from malaria-infected dams receiving l-arginine supplementation compared to malaria-infected control dams. Overall, l-arginine supplementation during EMIP resulted in a more balanced angiogenic response expected to favor vessel remodeling in the placenta. To examine whether changes observed in placental tissue expression of angiogenic factors associated with l-arginine supplementation were related to functional changes in placental vascular development, we performed micro-computed tomography (micro-CT) imaging of placentas collected before the onset of the LBW and stillbirth phenotypes (10). In light of previously reported malaria-induced changes in placental vascular development in association with enhanced C5a-C5aR signaling (10), we hypothesized that l-arginine supplementation, similar to C5aR blockade, would increase placental vascularization and improve birth outcomes. In uninfected litters supplemented with l-arginine, we did not observe differences in placental vascularization compared to uninfected control litters (Fig. 7). In contrast, malaria-infected dams receiving l-arginine had an increased total number of placental vessel segments compared with l-arginine-treated uninfected controls (P = 0.02; Fig. 7, A and B). Placentas from l-arginine-treated malaria-infected dams showed higher numbers of vessel segments in vessels with a diameter of <50 m compared with placentas from vehicle control-treated malaria-infected litters (P < 0.001; Fig. 7C).
l-Arginine supplementation increases fetal weight and viability in the context of an l-arginine-deficient diet Pregnant women in malaria-endemic areas are particularly vulnerable to hypoarginemia due to diets that are relatively deficient in l-arginine because staple foodstuffs (maize, plantains, yams, and cassava) are low in dietary l-arginine (42). Therefore, we modeled this scenario by placing dams on an l-arginine-deficient diet and hypothesized that this would increase the impact of l-arginine supplementation on birth outcomes in EMIP. Compared with controls receiving regular chow, offspring of uninfected dams on the l-arginine-deficient chow had lower birth weight (P < 0.01; Fig. 8A), and supplementation with l-arginine reversed the LBW phenotype (P > 0.05; Fig. 8A). Litters born to malaria-infected dams on the deficient chow that were receiving l-arginine supplementation had increased birth weight (P < 0.01; Fig. 8A) and fetal viability (P < 0.05; Fig. 8B) compared with infected control litters on the deficient chow.
DISCUSSION
MIP is a leading global cause of maternal morbidity and adverse pregnancy outcomes. The World Health Organization recommends the use of intermittent presumptive treatment and insecticide-treated nets for the prevention of MIP; however, escalating drug and insecticide resistance threaten this approach (1,40). We also lack effective and safe interventions to prevent or reduce malaria-associated placental pathology that directly contributes to poor birth outcomes, especially in early pregnancy. Here, we investigated the l-arginine-NO biosynthetic pathway in the pathogenesis of MIP and provide several lines of evidence supporting this axis as a potential therapeutic target. First, in a prospective study of pregnant women in Malawi, we identified MIP-related decreases in circulating concentrations of l-arginine and increases in inhibitors of NO biosynthesis, ADMA and SDMA, and their association with poor birth outcomes. In an experimental model of MIP, we corroborated the human data showing that alterations in NO biogenesis were associated with adverse birth outcomes. We then used this preclinical model to explore the mechanism and interventions and show that l-arginine dietary supplementation improved fetal weight and markedly reduced stillbirth. The effect of supplementation on fetal weight was enhanced when dams were placed on an l-arginine-deficient diet, simulating diets prevalent in low-resource settings. The mechanism of l-arginine action involved reduced expression of placental inflammatory factors, normalized expression NO regulates essential mediators of placental vasculogenesis and angiogenesis, including the VEGF-A and the angiopoietin-TIE-2 pathways, and is critical to implantation, trophoblast invasion, and placental and embryo development (17,21,43,44). NO increases the expression of ANG-1 in endothelial cells, and NO production is necessary for VEGF-A-mediated angiogenesis (43,45). Pathological pregnancy outcomes, including pre-eclampsia, fetal growth restriction, and resulting SGA, have been linked to l-arginine deficiency, reduced NO bioavailability, and oxidative stress (17,46,47). In this prospective study of pregnant Malawian women, we demonstrated that MIP affects NO biogenesis by increasing concentrations of endogenous inhibitors, ADMA and SDMA, and decreasing l-arginine, resulting in decreased l-arginine bioavailability (a reduced l-arginine/ ADMA ratio) and conditions that enhance inflammation while impairing l-arginine bioavailability and intracellular influx (30-32, 37, 46). The impact of malaria on the l-arginine pathway was most evident in PCR-detectable infections at enrollment (16 to 28 weeks of pregnancy) and affected more than half of the women enrolled in this study. These changes occurred relatively early in gestation and could contribute to sustained changes in NO bioavailability over pregnancy. Consistent with this hypothesis, increased ADMA between weeks 16 and 28 of pregnancy was associated with impaired fetal growth, and this change was evident across pregnancy. Our results support a mechanistic role for altered l-arginine-NO biosynthesis and related placental insufficiency in malaria-induced SGA outcomes. However, other pathways may also contribute, including those that regulate the nutrient transport across the placenta (48).
Collectively, our results suggest that targeting NO biosynthesis in MIP may be an effective intervention to improve birth outcome. In support of this hypothesis, dietary l-arginine supplementation in the EMIP model normalized angiogenic and inflammatory pathways and enhanced placental vascular development. We observed reduced concentrations of circulating l-arginine in both treated and untreated malaria-infected dams. Although l-arginine supplementation did not increase l-arginine in plasma, it was associated with reduced ADMA and SDMA concentrations compared to malariainfected untreated dams. Plasma samples were collected via cardiac puncture at G19, when dams are ill because of malaria infection and drink less water, and therefore may ingest less l-arginine. Because l-arginine supplementation reduced the circulating inhibitors of NO biosynthesis, ADMA and SDMA, NO bioavailability may have increased even in the absence of increased l-arginine concentrations. Our findings are supported by previous studies reporting reduced concentrations of ADMA in association with l-arginine supplementation (49,50). Although the mechanism by which l-arginine reduces ADMA and SDMA is unknown, we speculate that l-arginine supplementation may decrease oxidative stress, the condition under which these endogenous inhibitors are generated (49,51).
Previous mechanistic studies in preclinical models have shown that MIP alters placental vascular development and results in increased placental arterial vascular resistance and adverse birth outcomes, including LBW offspring and stillbirth (10). Collectively, those findings support the hypothesis that MIP dysregulates placental angiogenesis and vascular remodeling, resulting in placental insufficiency and poor birth outcomes. Here, we confirm and extend those observations and implicate MIP-induced changes in l-arginine-NO biosynthesis as a putative mediator of the altered angiogenesis observed. Of translational relevance, these changes can be corrected, at least in part, by l-arginine supplementation of malaria-infected dams. l-Arginine treatment was associated with reduced placental expression of factors that destabilize blood vessels (C5a, Ang-2, and Vegf-a), as well as inflammatory cell adhesion molecules (Icam-1). Increased concentrations of these inflammatory factors and mediators (10,16,52,53). Expression of Ang-2, Tie-2, and Vegf-a is increased under hypoxic conditions, which may also occur during MIP (54,55). We posit that the enhanced Tie-2 expression we observed in l-arginine-supplemented dams promotes microvascular stability in the context of malariainduced inflammation and vascular injury (56). We observed increased Vegf-a expression in the malaria-infected nonsupplemented dams, which was reduced with l-arginine supplementation. Together, these results are consistent with the hypothesis that l-arginine supplementation improves birth outcomes by reducing the expression of proinflammatory factors and by normalizing angiogenic processes and promoting placental function and fetal growth.
To link the observed l-arginine-related changes in inflammatory and angiogenic factors to a functional vascular correlate, we used micro-CT to visualize the impact of dietary supplementation on placental vascular structure and development. Consistent with previous studies, malaria infection was associated with altered vascular branching in the smaller vessels (10). Abnormal placental vascular development has previously been linked to poor birth outcomes, including fetal growth restriction and pre-eclampsia (57,58). Here, l-arginine supplementation in malaria-infected dams was associated with an increase in the total number of vessel segments, especially in smalldiameter vessels (<50 m). These small terminal capillaries are the primary sites of vascular remodeling later in pregnancy (58) and therefore represent a biologically relevant site of action for the l-arginine-NO pathway. Collectively, the results suggest that l-arginine supplemen-tation contributes to increased fetal weight and viability via expansion of the vascular network of the placenta, allowing for increased blood volume and surface area for nutrient exchange. In a previous preclinical study, MIP was associated with increased arterial resistance and poor birth outcomes, which were reversed by disruption of C5a signaling, and we report similar results here with l-arginine supplementation (10). However, l-arginine dietary supplementation represents a more feasible, safe, inexpensive, and acceptable intervention strategy for pregnancy compared to biologics for C5 blockade (59).
Altered angiogenesis may represent a common pathway of injury resulting in adverse birth outcomes associated with multiple pathological conditions in pregnancy, including pre-eclampsia, and l-arginine supplementation during pregnancy may improve birth outcomes in high-risk women (16,17,46). Several lines of evidence support this hypothesis. In many malaria-endemic regions, malaria-induced reductions in l-arginine may be further compounded by the lack of dietary l-arginine intake (60). Most regions with high rates of poor birth outcomes also have high rates of malnutrition due, in part, to low daily protein intake and, therefore, low l-arginine intake (39,61). Low dietary intake of l-arginine has been linked to an increased risk of PTB in Tanzanian women (61). Moreover, a previous randomized trial used medical food to supplement l-arginine in the diet (62) and reported reduced incidence of pre-eclampsia in a high-risk cohort of women receiving l-arginine supplementation. Here, the beneficial impact of l-arginine supplementation was most marked in animals on an l-arginine-deficient diet, suggesting that l-arginine supplementation may be most efficacious in women in low-resource settings who are Although the mouse model can provide important mechanistic insights into the pathophysiology of MIP, it also has limitations. The model replicates important components of P. falciparum malaria infection in pregnancy, including the induction of an inflammatory response in the placenta, shared placental vascular development and placental pathology, and associated adverse birth outcomes including intrauterine growth restriction and decreased fetal viability. However, there are also differences, including higher parasitemia in the mouse model, which is not observed in multigravid clinical cohorts, and the lack of VAR2CSA-mediated blinding of parasitized erythrocytes in the placenta. Notably, the mouse model used in this study most closely models infection in nonimmune primigravid women, where higher parasite burdens and the greatest risk of adverse birth outcomes are observed. Moreover, although Plasmodium berghei adhesion in the placenta is not mediated by the same receptors as P. falciparum, binding and accumulation of parasitized erythrocytes in the placenta are observed (41).
In summary, we provide evidence supporting the role of l-arginine-NO biosynthesis in the pathophysiology of MIP. In a prospective study of women with MIP, alterations in this pathway were associated with adverse birth outcomes. We dem onstrate that similar changes occur in a preclinical model of MIP and use this model to demonstrate that strategies to enhance l-arginine bioavailability improve birth outcomes, at least in part by reducing placental inflammation, regulating angiogenesis, and enhancing placental vascular development. We propose that interventions aimed at promoting regulated angiogenesis in the placenta may improve birth outcomes and reduce the global burden of MIP. A and B) Representative micro-computed tomography images of fetoplacental arterial vasculature at gestational day 18 in placentas from malaria-infected vehicle control-treated (A) and l-arginine-treated (B) mice color-coded by vessel diameter. (C) Cumulative distribution of vessel diameters in placentas from uninfected vehicle control-treated (n = 7), uninfected l-arginine-treated (n = 7), malariainfected vehicle control-treated (n = 8), and malaria infected l-arginine-treated (n = 7) litters. Cumulative vessel segments are depicted as median and SEM of vessels larger than the threshold diameter (0.035 mm) with results of two-way ANOVA and Dunn's multiple comparison post hoc test; ***P < 0.001.
Clinical cohort study design and ethics
The objective of the clinical study was to quantify plasma concentrations of l-arginine, ADMA, and SDMA in a cohort of pregnant women in association with malaria infection. Samples were collected as part of a multisite, open-label, two-arm, randomized superiority trial in southern Malawi (Pan African Clinical Trials Registry PACTR20110300280319 and ISRCTN Registry ISRCTN69800930), which took place between 2011 and 2013, as previously described (40). Briefly, eligibility criteria included HIV-negative women with an estimated gestational age between 16 and 28 weeks of gestation by ultrasound, last menstrual period (LMP), or both; hemoglobin >7 g/dl at baseline; a willingness to deliver in hospital; and not having received a dose of SP in pregnancy. Women were randomized to receive one of the following over the second and third trimester of pregnancy: (i) three or four doses of SP (IPTp-SP) or (ii) screening with malaria rapid diagnostic tests (RDT) (First Response Malaria pLDH/ HRP-2 Combo Test, Premier Medical Corporation Ltd.) and treatment of RDT-positive women with a standard 3-day course of DP (ISTp-DP; 40 mg/320 mg of tablets; Eurartesim, Sigma-Tau). We randomly selected 384 primigravi dae for the assessment of l-arginine, SDMA, and ADMA provided they met the following inclusion criteria: live birth with known birth weight and singleton delivery. Of the 384 women included, 379 had an enrollment sample tested and 94 had multiple samples tested over pregnancy for longitudinal assessment of l-arginine, SDMA, and ADMA. Written informed consent was obtained for all study participants. This study was reviewed and approved by the Liverpool School of Tropical Medicine, the Malawian National Health Science Research Committee, and the University Health Network Research Ethics Committee.
Sample size calculation for the clinical cohort study
Our primary endpoint for the human cohort study was the association between the arginine pathway and adverse birth outcomes in primigravidae. Using pilot data from the enrollment visit, we estimated a sample size of 323 women, assuming a mean difference in ADMA of 8 ng/ml and an SD of 19, with 20% of women expected to have an adverse birth outcome ( = 0.80, = 0.05). In case the data were not normally distributed, we adjusted our sample size upward by 15% to generate a final minimum sample size of 372 women.
Assessment of l-arginine, ADMA, and SDMA EDTA plasma samples were tested for l-arginine, ADMA, or SDMA using high-pressure liquid chromatography electrospray tandem mass spectrometry, as described below. The coefficients of variation for arginine testing were 5.2% for l-arginine, 2.0% for SDMA, and 1.4% for ADMA. Concentrations of l-arginine, ADMA, and SDMA were quantified as nanograms per milliliter, and the ratios are expressed as l-arginine/ADMA, l-arginine/SDMA, and ADMA/SDMA (63,64). All samples were analyzed blinded to the malaria infection status of the participants.
Statistical analysis of the clinical cohort
For the human study, relative risk was calculated using a log-binomial model, including all variables with P < 0.20 by bivariate analysis. In addition, treatment arm, maternal age, and malaria status at enrollment (by microscopy) were included in the model. To compare the association between markers of NO biosynthesis and nutritional status (maternal BMI, MUAC, and hemoglobin), we used linear regression, adjusting for maternal age and gestational age at enrollment. For longitudinal analysis, we used linear mixed-effects modeling with the lme4 (65) package in R (66) to evaluate the relationship between longitudinal ADMA concentrations and the SGA outcome. We first constructed a null model with six fixed effects: the linear effect of gestational age, maternal age, enrollment BMI, enrollment malaria status, socioeconomic status, and the interaction between gestational age and treatment arm. This interaction term adjusted for the possibility that the rate of change of ADMA was affected by either treatment. Using likelihood ratio tests, we then assessed whether adding SGA as a fixed effect significantly improved the model fit, followed by adding the interaction between SGA and gestational age (table S1).
For random effects, all models included a by-participant intercept and a by-participant slope for the effect of gestational age. Biomarker concentrations were transformed using the natural logarithm to stabilize their variance. No deviation from homoscedasticity or normality was apparent on the residual plots. Similarly, but without adjusting for other covariates, linear mixed-effects (LME) models were used to assess the relationship between malaria status at enrollment (by microscopy and PCR) and gestational changes in ADMA, SDMA, and l-arginine concentrations.
EMIP study design and animal use protocols
The objectives of the studies using the EMIP model were to examine the impact of l-arginine supplementation on in utero development (viability and weight) in malaria-infected dams, as well as the impact of l-arginine supplementation on placental vascular development.
The EMIP model used in this study is a validated murine model of MIP, which replicates key pathogenic factors of human MIP (41). Female wild-type BALB/c mice between 6 and 8 weeks of age were mated with male wild-type BALB/c mice (8 to 9 weeks of age, obtained from the Jackson Laboratory). Naturally mated pregnant mice were infected on G13 with 10 6 P. berghei ANKA (PbA)-infected erythrocytes in RPMI 1640 (Gibco) via injection into the lateral tail vein. Control pregnant females were injected on G13 with RPMI 1640 alone. Thin blood smears were taken daily and stained with Giemsa stain (Protocol Hema3 Stain Set, Sigma-Aldrich) to monitor parasitemia. Investigators were not blinded to the experimental group during treatment because the investigators had to prepare the inoculum and l-arginine-supplemented water. However, investigators were blinded during sample processing and outcome assessment, including tissue collection (G19, assessment of weight and viability), processing of samples [placental tissue for reverse transcription PCR (RT-PCR) and serum for mass spectrometry], and assessment of vascular development by micro-CT. All experimental protocols were approved by the University Health Network Animal Care Committee and performed in accordance with current institutional regulations.
Dietary l-arginine supplementation
On the day of pairing, mice were randomly assigned to one of the following treatment groups: (i) vehicle control (regular drinking water) or (ii) 1.2% l-arginine in drinking water (l-arginine monohydrochloride A6969, Sigma-Aldrich). Mice received l-arginine-supplemented drinking water (or vehicle control) beginning before pregnancy and a minimum of 13 days before malaria infection (depending on what day they became pregnant after pairing). A dose of 1.2% was selected because it represents about twice the daily intake of l-arginine in regular chow (50 mg/day, assuming a daily intake of 3 to 5 g of chow with 1% l-arginine), based on the assumption that mice drink 5 to 6 ml of water per day (60 mg/day intake via supplemented water). There was no difference in the daily intake of water between dams receiving the vehicle control and l-arginine-supplemented water at a dose of 1.2% l-arginine. All mice received treatment via ad libitum access to bottled drinking water throughout pregnancy. All supplementation treatments were given in autoclaved water and water bottles.
l-Arginine-deficient chow Dams that received l-arginine-deficient chow were placed on a diet of exclusively deficient chow (Harlan Laboratories) beginning at G9 (confirmation of pregnancy) until tissue collection. Dams were kept on their regular chow (Harlan Teklad) diet until this time (G9) to minimize disruptions to their environment (change in diet) during pairing and early pregnancy. Mice were assigned to the treatment groups, as defined above.
Tissue collection
The EMIP model followed the protocol outlined above. Dams were sacrificed at G19 using carbon dioxide inhalation, yolk sacs were dissected from uteri, fetuses were removed and weighed, and placentas were snap-frozen and stored at −80°C until analysis. Fetal viability was determined by assessing pedal withdrawal reflex. Nonviable fetuses (lacking the pedal withdrawal reflex) were considered stillbirths. All fetuses were weighed at this time. RNA extraction was performed on snap-frozen fetal placenta tissue collected at G19. Serum from mice was collected from cardiac punch and stored at −80°C until analysis.
Placenta transcript analysis
Only placentas collected from viable fetuses were used in the transcript analysis. Tissue was homogenized in TRIzol (0.5 ml/100 mg tissue; Invitrogen) according to the manufacturer's protocol, and RNA was extracted. Extracted RNA (2 g per sample) was then treated with DNase (deoxyribonuclease) I (Ambion) and reverse-transcribed to complementary DNA (cDNA) with SuperScript III (Invitrogen) in the presence of oligo(dT) 18 High-pressure liquid chromatography-electrospray tandem mass spectrometry Concentrations of l-arginine, ADMA, and SDMA were assayed by mass spectrometry, as previously described (67). Briefly, the chromatographic conditions included a 125 × 3 mm Nucleosil 100-5 silica column with a 4 × 2 mm silica filter insert. Mobile phase A consisted of 1 liter of water mixed with 0.25 ml of trifluoroacetic acid and 10 ml of propionic acid. Mobile phase B consisted of 1 liter of acetonitrile mixed with 0.25 ml of trifluoroacetic acid and 10 ml of propionic acid. Isocratic elution with one part mobile phase A and nine parts mobile phase B was delivered at a flow rate of 0.5 ml/min at a temperature of 30°C. Samples were prepared with 60 l of serum and 20 l of the respective internal standard. Samples (10 l) were injected automatically, and the electrospray ion source run time duration was 3 to 6.5 min under the following conditions: 32 (arbitrary units); auxiliary gas, 20 (arbitrary units); needle voltage, +4.5 kV; capillary temperature, 300°C.
Placental micro-CT scans
Detailed methods for preparing the fetoplacental vasculature for micro-CT imaging have been described previously (68). Briefly, uteri were extracted from dams at G18 and anesthetized via hypothermia [immersion in ice-cold phosphate-buffered saline (PBS)]. Each individual fetus was then extracted from the uterus while maintaining the vascular connection to the placenta. The embryo was briefly resuscitated via immersion in warm PBS to resume blood circulation. Embryos that could not be resuscitated were not perfused and were removed from the study. A catheter was then inserted into the umbilical artery, and the fetus was perfused with saline [with heparin (100 U/ml)], followed by radiopaque silicone rubber contrast agent (Microfil, Flow Technology). After perfusion, specimens were postfixed with 10% formalin and imaged using micro-CT. Specimens were scanned at 7.1 m resolution for 1 hour using a Bruker SkyScan 1172 high-resolution micro-CT scanner. A total of 996 views were acquired via 180° rotation with an x-ray source at 54 kVp (kilovolt peak) and 185 A. Three-dimensional micro-CT data were reconstructed using SkyScan NRecon software. The structure of the vasculature was identified automatically using a segmentation algorithm, as previously described in detail (69). The leaves of the vascular tree were pruned to 0.035 mm (threshold diameter) to improve data consistency. Analysis was performed on wild-type [unexposed (n = 7) and malariaexposed (n = 8)] offspring of control (nonsupplemented) dams and unexposed (n = 7) and malaria-exposed (n = 7) offspring of l-argininesupplemented dams. Each group contained a minimum of three dams per group and one to three specimens per litter.
Statistical analyses of EMIP-based studies
Statistical analysis was performed using Stata v14 (StataCorp), R v3.2.1 (R Core Team, 2015, R Foundation for Statistical Computing), and GraphPad Prism v6 (GraphPad Software Inc.). Student's t test, oneway analysis of variance (ANOVA) (nonparametric Kruskal-Wallis, P < 0.05), post test (Tukey test), independent samples t test, 2 test, and relative risk were used to examine the statistical significance of differences between experimental groups. Analysis of the cumulative distribution of vessel diameters for each placenta was fit with a natural spline with eight degrees of freedom. A two-way ANOVA was conducted to determine whether there was an effect of treatment group on the spline parameters. There was a significant interaction between spline coefficient and group (P < 0.001), and therefore, a post hoc analysis was performed to compare pairs of treatment groups. Post tests on all groups were conducted using Dunn's multiple comparison test (P < 0.05). | 7,940.8 | 2018-03-07T00:00:00.000 | [
"Biology",
"Environmental Science",
"Medicine"
] |
Modeling of information on the impact of mining exploitation on bridge objects in BIM
The article discusses the advantages of BIM (Building Information Modeling) technology in the management of bridge infrastructure on mining areas. The article shows the problems with information flow in the case of bridge objects located on mining areas and the advantages of proper information management, e.g. the possibility of automatic monitoring of structures, improvement of safety, optimization of maintenance activities, cost reduction of damage removal and preventive actions, improvement of atmosphere for mining exploitation, improvement of the relationship between the manager of the bridge and the mine. Traditional model of managing bridge objects on mining areas has many disadvantages, which are discussed in this article. These disadvantages include among others: duplication of information about the object, lack of correlation in investments due to lack of information flow between bridge manager and mine, limited assessment possibilities of damage propagation on technical condition and construction resistance to mining influences.
Introduction
Traditional model of managing bridge infrastructure on mining areas has many disadvantages, which are discussed below in a few next points. Basic problem is the lack of proper connection of many information related to the bridge objects and the lack of information flow between entities interested in maintaining the bridge properly (e.g. road manager, local mine, commune/city office, technical infrastructure administrators).
In Poland and in many other countries methods of modeling information about building objects, know under common password BIM (Building Information Modeling) are widely popular [e.g. 1÷5]. The application of BIM or the implementation of some of the assumptions are mandatory (i.e. required by law) e.g. in Great Britain, USA, Norway, Denmark, Singapore. In Poland several BIM pilot projects in the field of design, construction and maintenance of bridges are implemented and are being implemented.
In the next few years, BIM will be gradually introduced in all countries associated in the European Union, in accordance with the requirements of the European Directive 2014/24/EU announced in February 2014 on competitive tendering. In the first place, BIM will include public investments, and thus bridge objects. Regardless of law regulations imposed, BIM is a very efficient method of information modeling, allowing optimizing management activities throughout the entire life cycle of bridges. BIM technology can be implemented for mining damages, using available methods, algorithms and software. The article discusses the advantages of BIM technology in relation to bridge objects located on mining areas. It is shown how the correct modeling of information about the object helps to reduce mine costs related to the removal of mine damages and protection of objects against mining influences and reduces the costs of administration relate to the maintenance of the objects. Other advantages of having a common advanced BIM model of the bridge are also presented, such as the possibility of remote monitoring, improvement of safety, coordination of mining operations and managing of the bridge objects in the field of renovation and modernization, improvement of the relationships among the bridge manager, coal mine and local commune/city office.
Basic problems related to bridges on mining areas
Administrators of roads generally do not have full information about the impact of mining deformations on bridge objects over time, e.g. deformation indicators (horizontal strains , etc.) for every months or the continuous function of settlements w, but only the end values of indicators at the end of a certain period of time (e.g. the Three Years' Mine Exploitation Plan). The inability controlling the growth of deformation causes distrust to mines and makes difficult the proper supervision of the object, e.g. during the appearance of mining influences on the surface of the area.
Mines have a little knowledge about the construction of bridges and investment plans of Road Administration. In areas of mining influences, certain information should be available for mines; such data could be helpful in planning coal mining exploitation in a given mining wall over time, in activities related to removal of mining damages and mining prevention. Due to the improper information flow, the mines many times prepared the documentation of removal mining damages and adaptation of the structures to taking over future mining influences for objects planned to replacement by the Administrator due to insufficient load capacity or insufficient gauge. In many cases, the object from an economic point of view could be protected for a shorter period of time with only significant damages removed, if, for example, the rebuilding is planned by the Administrator in three years.
The static and dynamic resistance of the bridge objects to mining influences depend on its technical conditions and kinematic capabilities. Mines make the assessment of the resistance of objects to mining influences once every three years as part of so-called Mine Exploitation Plan. During that time, several mining walls may be exploited in the area of the bridge, and damages may occur in the object -that reduces resistance to mining influences, e.g. significant cracks in the zone of joining the pillars and spans in the frame constructions [6,7]. The static and dynamic resistance of bridge structures to mining influences is not constant, but changes in time. The assessment of the technical condition is, in the case of mining areas, also the assessment of damage propagation and the assessment of the impact of damages on the "work" of the structure. Using BIM, damages can be easily taken into account in the bridge model and updated if necessary.
Using tools and technologies that are easily combined with BIM, such as e.g. AR (Augmented Reality), we can overlap the images from the last and current bridge review (tablet, projection glasses) during visit on the object, which allows quickly and reliably control critical sections directly on the real construction (e.g. [4,5]).
The article shows the problems with information flow in the case of bridge objects located on mining areas and indicates the benefits of advanced information models (BIM), e.g. the possibility of automatic monitoring of structures, improvement of safety, optimization of maintenance activities (e.g. repairs) by time and scope (cost reduction). Bridge objects management can be understood as combination of all technical, administrative and management activities of the life cycle after the completion of the construction of the object.
BIM as information model
At present, flat 2D drawings dominate in relation to bridge infrastructure in mining areas (e.g. [6 ÷ 13]) and popular are mathematical formulas given by professor Rosikoń [14]. More possibilities are given by use of 3D spatial model, extended by additional parameters, so that the rotations and displacements of spans and bridgeheads can be modeled as rigid solids [15]. Spatial models of the objects and deformations allow directly on include information from monitoring systems into the bridge models and using measurements from various sources and methods. Efficient tools are provided by BIM technology [1 ÷ 5]. By supplementing the 3D BIM model with additional parameters, we can quickly obtain the 4D model (work schedule) and 5D (investment cost). In this way it is easy to analyze and compare cost-effectiveness and organisation of several repair solutions (currently technical-economic comparisons are rarely carried out). BIM 6D enables quick realization of accurate energy analyzes for buildings, but bridges do not required such analyzes. BIM 7D is used by managers in the process of managing the building objects at the stage of its exploitation. BIM tools allow on describing in a certain period of time/future, the investment plans of the bridge objects manager and planes activities of the mine in the scope of damage repair and prevention, in such case the correlation of investments is easier.
BIM technology is a process, during which a model common for all industries is created. This model combines additional information related to the individual elements of bridge (e.g. expansion joints, bridge bearings). In such process, a number of additional parameters may be included such as, for example, results of geodetic measurements, data from automate measurement (monitoring) systems and other technical information needed by each of the participants (e.g. investment plans of the bridge administration, activities planned by the local mine). All information that are included in this model can be used by all participants, such as the bridge administration, local commune or city, local mine, technical infrastructure administrators running after (under) the bridge (e.g. waterworks, gas pipeline, power cables) or an institution responsible for water management in the case of watercourses. When talking about the process, we should keep in mind the organization of flow of information in the amount needed by each participant.
The traditional model of managing bridge infrastructure on mining areas is inefficient, it is necessary to contact each participant with each other, in order to obtain information ( Fig. 1.). Moreover each participant may have different information, often out of date. These disadvantages cause distrusts between the bridge administration, the mines and the local self-government units. These disadvantages make also difficult planning of exploitation of coal deposits in the area of the bridge, negatively affect on safety and comfort of use, and increase the cost of maintaining objects and the removal of damages.
BIM is a technology based on a common model to which all participants of the process can access in real time (Fig. 2.). The model works properly if all participants update the data which they have provided to the model. Participants can expand the model with additional data at any time. Interoperability of participant's technology platforms is important here, i.e. providing data in a form that all participants can use. It is necessary to standardize the procedures and details of the model, e.g. marking and description of layers, because all participants of the process have access to the same model. Ready solutions in the field of cooperation (interoperability) are provided by, for example, the British family of BS 1192 standards, defining the framework of BIM processes.
Duplication of information about the bridge object
Every three years, in the scope of Mine Exploitation Plan, each of the mines performs documentation containing the assessment of static and dynamic resistance to mining influences of all bridge structures located in its mining area, which are located in the reach of the planned mining exploitation. The contractor for such elaboration is selected in the open tender. Practically every time, the bridge geometry and the damages of structure are inventoried. In the case of BIM technology, the model of the bridge with current structural damages would be available for downloading, so the workload for such assessment would drop by more than 50% (the major part of the documentation are the same information obtained all over again by another company). In addition, it would be possible to analyze the propagation of damage over time (e.g. indication of active cracks, etc.), so the quality of the documentation would be significantly improved.
Documentation of mining damage removal and protection against the projected mining influences is performed if the object is not able to take over the predicted mining impacts or there are any significant damages of the structure, reducing its load capacity and/or resistance to mining influences. A significant part of such project is doubled with the above-mentioned resistance assessment. A designer of renovation/reconstruction must therefore obtain a lot of the same information. BIM technology would provide ready digital bridge model for such the designer. After the completion of the project, the designer should provide data for updating the BIM model on a common platform (i.e. the model available to all participants of the process). Visit on the bridge object can be used in this case to validate the BIM model. The digital bridge model can be quickly implemented into new analyzes, what significantly reduces the necessary time and cost of analytical calculations and reduces the risk of errors.
Incorrect correlations in investments
The mine orders making a design documentations regarding on elimination of mining damages and preventive actions and provides reconciliation of this documentations by bridge administrator/owner. Unfortunately, in many cases, bridge administrators do not inform local mines about activities in the future regarding to the object, e.g. replacement or significant reconstruction of the bridge object in the near future. In BIM, data on investment plans of managers/administrators of bridges or watercourses as well as planned activities by mines in the field of road and rail infrastructure could be described on the timeline as an additional parameter. Some examples of the lack of correlations in investments are provided below.
Three-span bridge frame has been protected against mining influences for the entire estimated mining exploitation, i.e. for about 20 years. The mine incurred significant costs. Three years after renovation, the bridge's administrator decided to demolish the object and build a new one due to insufficient load-bearing capacity of the structure and insufficient width in relation to the parameters of the modernized road [6,7].
Mine repaired the railway viaduct in December, and in June of the following year, the corroded spans of this viaduct were replaced with new ones, and the abutments and pillars partially demolished and seriously rebuilt as part of the revitalization this PKP line [16].
Mine wanted to liquidate floodplains formed on the river as a result of the area subsidence and commissioned making documentation which allow on lowering the river bed to a specified level. It was necessary, among others, to protect the foundations against undercutting by river, as well as pillars and abutments against the loss of stability in the case of four bridges [10]. It was necessary to correct the shoreline. It took about two years to make this documentation. Documentation ultimately was not sent for implementation, because on the request of the watercourse administrator, a project to strengthen the river bed on the 500 m section at the connection of the river and the local stream was initiated, so that to prevent flooding and silt up bottom -this watercourse administrator project was part of a larger task where a large EU subsidy was obtained (the change in the "EU" project threatened to return the subsidy for the entire task) [10].
Technical conditions and resistance to mining influences
Static and dynamic resistance of bridge structures to mining influences is not constant, but changes over time. Mining exploitation has a significant impact, e.g. by stretching/squeezing expansion joints, introducing significant displacements into bearings [9,12,13,14,15,16,17]. The mining turning and tilting of abutments may lead to cracking on the abutments and spans [5,6,9,11,12,14,15]. The change in the curvature of the area has a negative effect on the construction of abutments -significant cracks may be created that destroy the integrity of the solid of bridgehead [8,9,11,12,15]. The uneven support subsidence, associated with the curvature, generates additional internal forces in continuous multi-span structures [6,7]. In hyper static structures (e.g. multi-span continuous beams, frames), appears scratches which can develop into significant cracks [6,7,8,9,12].
Resistance of bridge structures to mining influences also changes as a result of normal using and impact of environmental (atmospheric) factors, e.g. bearings made of steel plates may be rusty and blocked [9,13].
Damage controls in traditional bridge maintenance systems are often ineffective, because such controls are usually performed too rarely and are not related to the occurrence of mining influences on the surface of the area. Bridge calculation models for assessment of resistance to mining influences are rarely verified for damages. In BIM technology, damage can be included into the model. The great advantage of BIM technology is the ability to use (in order to create, validate and update the model) many modern techniques that allow remote, automatic acquisition of information about the object, damages, displacements, e.g. by using electronic sensors, drones, satellites, cameras. Especially significant possibilities, in the case of bridge objects located on a deformed area, are given by laser scanning. Laser scanning is a technology that allows on the visualization of surrounding space by using point's cloud. Model of the object is created from the point's cloud. Laser scanning can be realized from ground (from permanent or mobile positions with a device mounted on a vehicle) or as an airborne from planes or drones.
BIM enables the use of series of advanced digital technologies to assessment of technical conditions. Augmented reality (AR) is particularly interesting in the case of mining damages [e.g. 5]. During bridge inspection operator can display damages recorded on the object during the previous inspection and compare damages on tablet (in the case of larger financial expenditures, the operator can be equipped with 3D projection glasses). By comparing images it is easier to compare damages and reliably to assess the propagation of damages -reduced are the time required for a good review and the cost of human work. In case of several bridge objects in a given mining area, bridges to frequent reviews (e.g. in case of significant mining influences or objects sensitive to mining influences), this operation (i.e. AR) may be cheaper than sending several people to describe the damages, moreover the review report is performed much faster (with less work).
Acquiring information from monitoring systems
The basic disadvantage of traditional measurements is their low frequency, generally 2-4 measurement cycles are carried out during the year. Rarely perform measurements do not give full information about mining influences in the area of the object -stretching and compression in many situations follow each other and partly abolish, extremes of deformation and the greatest threats often appear between measurement cycles and are not documented in any way. The results of traditional geodetic are not available in digital form, so automatically implementation of measurements in computer systems is difficult. Changes in measured quantities are manually calculated and referred to the range of bearing work at a certain point of time considered to be initial, then the displacement are calculated. The next step is to determine the displacements which will be caused by another, next mining exploitation and analyzes whether the bridge bearing will transfer these influences.
The solution to the above problems is remote, frequent measurements carry out with a certain frequency (e.g. every hour). There are several pilot remote monitoring systems installed in Poland on bridge structures on mining areas [16,17]. Bridge administration has access to measurements perform by these systems, but the lack of appropriate digital models of bridges, force manual data processing.
The starting point during projecting of monitoring system should be real object. The parameter values measure by the system should be related to the values determined earlier on the basis of the numerical model solution and on the basis of mining prediction. The measurement, without comparison of the measured value, is just a simple observation. The next stage is defining the object's resistance to mining influences, e.g. analysis of: the layout and ranges of bearings, the widths of expansion joints, the width of the bearing benches. BIM technology enables the inclusion of measurement data into the model and the use this data also automatically. After defining boundary conditions for an emergency situation, the system can automatically generate warnings, which facilitate supervision of the object and directly translate into improved safety.
Summary and final conclusions
The article discusses problems related to the maintenance of bridge objects on mining areas. It is pointed out that in the field of BIM technology are currently ready solutions and tools for obtaining, modeling and managing information. Advantages of BIM technology uses to optimize the process of maintaining and managing bridges are described: • reduction of mine costs relates to easy access to a good digital bridge model, currently identical information are obtained by different authors independently (e.g. geometric inventarisation of object, identification of bearings and material data), • increase the quality of documents on the resistance of bridge structures to mining influences and documents related to the removal of damage and bridge protection, • actual information about the mining influences that occurred, are occurring now and are predicted in future, related to the relevant time periods, • information about the mining influence which bridge received and can receive in future, • the possibility of adjusting the scope of protection solutions to mining influences and the scope of repairing works related to the removal of mining damage to the plans of the bridge administrator, reduction costs of mines, • adaptation of investment plans (e.g. repairs, maintenance, reconstruction) to current and predicted mining exploitation and associated with it costs reduction which are bearded by the administrator of the bridge, • avoidance of accident situations, general improvement of safety, • increased trust between the participants of the process thanks to the transparency of each actions, • improvement of the climate for mining exploitation in region of the object, theoretically it will be easier for mine to obtain agreement for the exploitation of larger number of coal walls, get permission for larger limit values of deformation indicators, • the ability to control the damages development and including the damages into the digital bridge model, • improving the quality of information about the bridge objects, its technical condition and current kinematic possibilities (e.g. ranges of bearings and expansion joints), availability and easy interpretation of the results of geodetic measurements or data from automatic monitoring systems, thanks to the connection of the measurement with the information about the object and boundary conditions of the displacements. | 4,905.6 | 2018-04-01T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
Sum Rate Optimization of IRS-Aided Uplink Muliantenna NOMA with Practical Reflection
Recently, intelligent reflecting surfaces (IRSs) have drawn huge attention as a promising solution for 6G networks to enhance diverse performance metrics in a cost-effective way. For massive connectivity toward a higher spectral efficiency, we address an intelligent reflecting surface (IRS) to an uplink nonorthogonal multiple access (NOMA) network supported by a multiantenna receiver. We maximize the sum rate of the IRS-aided NOMA network by optimizing the IRS reflection pattern under unit modulus and practical reflection. For a moderate-sized IRS, we obtain an upper bound on the optimal sum rate by solving a determinant maximization (max-det) problem after rank relaxation, which also leads to a feasible solution through Gaussian randomization. For a large number of IRS elements, we apply the iterative algorithms relying on the gradient, such as Broyden–Fletcher–Goldfarb–Shanno (BFGS) and limited-memory BFGS algorithms for which the gradient of the sum rate is derived in a computationally efficient form. The results show that the max-det approach provides a near-optimal performance under unit modulus reflection, while the gradient-based iterative algorithms exhibit merits in performance and complexity for a large-sized IRS with practical reflection.
Introduction
Intelligent reflecting surfaces (IRSs) have drawn enormous attention from the academy and industry as a cost-effective building block for 6G wireless communication networks demanding high spectral and energy efficiency [1][2][3]. An IRS constructed with a large number of passive reflection elements can reconfigure a wireless propagation channel to be favorable for information or energy transfer by controlling their reflecting patterns. In doing so, the IRS avoids a large power consumption with passive elements and achieves a full-duplex gain without complicated signal processing such as interference cancellation and demodulation. In this aspect, various IRS-assisted wireless communications have been explored for their own purposes, from basic multiuser or/and multiantenna communication systems [4][5][6][7][8][9] to more complicated system configurations, as surveyed in Ref. [10].
For multiuser communications, nonorthogonal multiple access (NOMA) craving for higher spectral efficiency and massive connectivity has been considered as a promising candidate for 6G networks [11][12][13][14][15][16]. Hence, many recent IRS studies have been devoted to IRS-assisted NOMA for further improvement in spectral efficiency, energy efficiency, and reliability [17][18][19][20][21][22][23][24][25][26][27]. One of the major concerns of IRS-aided NOMA networks resides in how to reflect a superimposed NOMA signal appropriately to meet the design goal of a given network. In the downlink, joint optimization of IRS reflection and power allocation was studied with a single-antenna base station (BS) to minimize the transmit power of a two-user NOMA signal [18] and maximize the sum rate of a multiuser NOMA signal [19]. For a multiantenna BS in the downlink, transmit beamforming and IRS reflection were optimized without or with power allocation to maximize the sum rate [20], minimize the transmit power [21][22][23], and maximize the minimum rate [24]. On the other hand, in the uplink, IRS-aided NOMA was studied to maximize the sum rate achieved with a singleantenna receiver by optimizing the IRS reflection vector [27]. The sum rate maximization problem was also extended to IRS-aided NOMA accompanied by wireless power transfer with a single-antenna receiver [28,29] and a multiantenna receiver [30].
The recent studies on IRS-aided NOMA networks have assumed unit modulus reflection for IRS since the phase control is more readily implementable than the amplitude control in implementing passive IRS elements. It should also be noted that IRS optimization under unit modulus reflection has resorted to semidefinite relaxation (SDR) solving a semidefinite program (SDP) [31] for the downlink with a given transmit beamforming and for the uplink with a single-antenna BS, where the SDP deals with linear functions of semidefinite matrices [20,23,24,[27][28][29]. However, the SDR approach is less favorable for an IRS requiring a large number of passive elements to compensate for double fading due to its complexity increasing polynomially with the number of elements. To reduce the complexity of the SDR, an iterative algorithm based on a second-order surrogate function similar to the gradient descent was proposed in Ref. [5] for the simple objective function given by the trace of a matrix.
For the uplink NOMA with a multiantenna BS [30], the sum rate is given in a more complicated log-determinant expression similar to a multiple-input multiple-output (MIMO) capacity [7]. In this case, the SDP, having linear functions of semidefinite matrices in the objective and constraints, is no longer applicable. A few studies have dealt with such a complicated objective function in IRS reflection optimization [7,30], which was optimized through a suboptimal sequential optimization method, optimizing one IRS element while fixing the other elements without knowing its performance gap to the optimal one. In addition, practical IRS reflection models with phase-dependent amplitudes observed in practical circuits [32] have not been studied for IRS-aided NOMA networks yet. In this regard, we consider a sum rate maximization problem of an IRS-aided uplink NOMA network with a multiantenna BS and tackle the problem under a generalized IRS reflection model encompassing unit modulus and practical reflection [32]. The main contributions are summarized as follows: • We formulate the sum rate optimization problem by incorporating the optimal receive beamforming for a given IRS phase vector in the objective function, which is characterized by the log-determinant of a matrix as in the MIMO capacity expression [7,30]. As a result, only the IRS phase vector needs to be optimized in the uplink, while the transmit beamforming and IRS phase vectors have been optimized alternately in the downlink [5]; • For a moderate-sized IRS with unit modulus reflection, we propose an extended SDR approach that converts the sum rate maximization problem into a determinant maximization (max-det) problem [33]. The max-det solution not only provides an upper bound on the optimal sum rate of the IRS-aided uplink multiantenna NOMA, but also leads to a rank-one feasible solution resulting in a near-optimal performance. This approach can also be employed to obtain an upper bound on the IRS-aided MIMO capacity under unit modulus reflection; • For a large-sized IRS under generalized reflection, we reformulate the problem as an unconstrained nonlinear optimization problem that can be solved by using gradient-based iterative algorithms [34]. In particular, we use the more sophisticated Broyden-Fletcher-Goldfarb-Shanno (BFGS) and limited-memory BFGS (L-BFGS) algorithms [34,35], while the gradient-descent approach is used in [5]. For an efficient implementation of such iterative algorithms, we derive the gradient of the complicated objective function in a computationally efficient form under the generalized reflection; • We analyze the computational complexity of the iterative algorithms when the derived gradient is used to update the search point. The results show that the iterative algorithms reduce the complexity of the extended SDR with max-det optimization significantly. In addition, the iterative algorithms provide a performance gain over the conventional sequential optimization method [7] at a reduced computational time.
Notation: The sets of n × m complex-valued and real-valued matrices are denoted by C n×m and R n×m , respectively, with C n = C n×1 and R n = R n×1 , while the set of n × n positive semidefinite Hermitian matrices is denoted by S n + . The transpose, Hermitian, and trace are denoted by (·) T , (·) H , and tr(·), respectively. We use diag(a) for the diagonal matrix with a diagonal vector a, [a] n for the nth entry of a vector a, and, [A] n,m for the (n, m)th entry of a matrix A, and CN (µ, Σ) for complex Gaussian distribution with mean vector µ and covariance matrix Σ.
System Model
We consider the uplink of a single-cell network described in Figure 1. The network consists of a BS equipped with M antennas, K devices equipped with a single antenna, and an IRS comprising N reflection elements. The channels from device k to the BS and to the IRS are denoted by v k ∈ C M and f k ∈ C N , respectively, for k ∈ K {1, 2, · · · , K}. The channel from the IRS to the BS is denoted by G ∈ C M×N . The IRS reflection vector is denoted by θ = [θ 1 , θ 2 , · · · , θ N ] T ∈ C N , where θ n = e jφ n for n ∈ N {1, 2, · · · , N} under unit modulus reflection. To address the amplitude distortion of practical IRS control circuits depending on the phase, we express the IRS reflection in a generalized form as [32] where with α ≥ 0, β min ≥ 0, and φ 0 ≥ 0. The values for parameters α, β min , and φ in (2) are determined by the specific circuit implementation, where (2) with α = 0 represents the unit modulus reflection with β n (φ n ) = 1. For the uplink transmission, we allow K devices to transmit their symbols simultaneously, where the number K of devices is larger than the number M of receiving antennas for NOMA. However, the following results are also applicable to space division multiple access with K ≤ M. The signal received at the BS is then written as where s k and p k are the symbol and transmit power of device k ∈ K, respectively, and z ∼ CN (0, σ 2 I M ) is the noise vector added at the BS. We can express the received signal (3) in a concise form as where H k is the equivalent channel from device k to the BS, which is given by Without loss of generality, we assume that the devices are indexed in the successive interference cancellation (SIC) order. The BS detects the device symbols from s 1 to s K sequentially by applying receive beamforming w k ∈ C M to the received signal after SIC in detecting s k , specifically by applying the receive beamforming w k to the received signal after {s 1 , s 2 , · · · , s k−1 } being detected and canceled, which is given bŷ whereŷ 1 = y. We obtaiñ from which s k can be detected. Thus, the signal-to-interference-and-noise ratio (SINR) in detecting s k fromỹ k is given by The optimal receive beamforming that maximizes the SINR is given by the minimum mean square error (MMSE) beamforming, expressed as [36] for k ∈ K and B K+1 = σ 2 I M . The maximum SINR of device k achieved with the optimal beamforming w o k is given by which leads to the achievable rate as follows: using the matrix determinant lemma, det(B + uu H ) = det(B) det(1 + u H B −1 u) for an invertible matrix B [15,36], and , which leads to (12). From (12), the sum rate of all devices is given by Refs. [15,36] where to the SIC order. Note that device rates in (8) depend on the SIC order since the SIC order affects B k for 2 ≤ k ≤ K. However, the sum rate (13) determined by B 1 and B K+1 does not depend on the SIC order. Finally, the sum rate is expressed as a function of transmit power p = [p 1 , p 2 , · · · , p K ] T of the devices and IRS phase shifts whereθ is a function of φ.
Problem Formulation
This paper aims to maximize the sum rate of the IRS-aided uplink NOMA by optimizing the transmit power p = [p 1 , p 2 , · · · , p K ] T of the devices and the phase shifts Since the sum rate (14) is a non-decreasing function of p k irrespective of φ, the optimal power of problem (15) is given by that results in the sum rate as with and Finally, the sum rate optimization problem (15) becomes It should be noted that problem (19) under unit modulus reflection is equivalent to a subproblem of the IRS-aided MIMO capacity optimization problem [7], which was solved by the customized sequential optimization method. In the following, we will provide alternative methods providing either a better performance under unit modulus reflection or a faster computation under generalized reflection than the conventional method [7].
Determinant Maximization for a Moderate-Sized IRS
This subsection tackles the problem (19) for a moderate N by extending the SDR approach. For this purpose, let us rewrite the signal matrix (18) as where Ξ = diag([ξ 1 , ξ 2 , · · · , ξ K ] T ), H = [H 1 , H 2 , · · · , H K ] ∈ C M×K(N+1) , and ⊗ denotes the Kronecker product. The signal matrix is a Hermitian semidefinite matrix and is linear withX whenX =θθ H . We defineX =θθ H ∈ S N+1 + , of which the diagonal entries satisfỹ X n,n = |θ n | 2 ≤ 1. In this case, we can transform (19) s.t.X n,n ≤ 1, n = 1, 2, · · · , N + 1, LetX o andR o =R(X o ) denote the optimal solution of (21) and the corresponding optimal sum rate, respectively. Since findingX o is intractable due to the rank constraint, we resort to an approximate solution by finding a rank-relaxed solution first and then estimating a rank-one solution from the rank-relaxed one as follows.
In the first step, by relaxing the rank constraint (21c), we can approximate (21) to the max-det problem defined in [33], which is expressed in the standard from as follows: Since (22) is known to be a convex optimization problem with linear matrix inequalities in (22b) and (22c), it can be solved with an existing convex optimization solver, which leads to the optimal solutionX andR =R(X ). Here, the optimal valueR of the rank-relaxed problem (22) can serve as an upper bound on the optimal valueR o of the rank-constrained problem (21), since the constraint set of (22) includes that of (21).
In the second step, from the solutionX of (22), we obtain a rank-one solution close toX o =θ oθ H o though the Gaussian randomization procedure [24,31]. Since any rank-one solutionX of (21) is decomposed asX =xx H , the Gaussian randomization procedure generates L zero-mean complex Gaussian samples as {x l ∼ CN (0,X )} L l=1 for a rank-one solution so thatx lx We then obtain a feasible candidateX l subject to [X l ] n,n ≤ 1 for the solution of (21), which is equivalent To generateθ l , we obtain the phases as [φ l ] n = ∠[x l ] n for n ∈ N. The phase vectors {φ l } L l=1 generated by the phases of complex numbers are feasible for problem (19) and {X l =θ lθ H l } L l=1 are also feasible for problem (21) since [X l ] n,n ≤ 1 and rank(X l ) = 1. Among all the feasible candidates {φ l } L l=1 (or equivalently {X l } L l=1 ), the Gaussian randomization procedure finds the best candidate as l = arg max 1≤l≤L R(φ l ) = arg max 1≤l≤LR (X l ) for its output. The sum rateR(X l ) with the best candidateX l is a non-decreasing function of the number L of random samples so thatR(X l ) is likely to move closer to the optimal sum rateR(X o ) as L increases. Later, we will empirically demonstrate that Gaussian randomization with a sufficient L provides a good approximate solution close to the optimal valueR(X o ) of our problem, as in the other SDR applications [31]. (22) consists of X with l = (N + 1)(N + 2)/2 complex variables, whilst Y ∈ S M + and F ∈ S N+1 + are in the constraints. Thus, the problem can be solved by an interior-point algorithm with O((M 2 + (N + 1) 2 )l 2 ) operations per search point and a worstcase complexity of O( √ N + 1) iterations [33]. In short, the complexity solving (22) is given by O(N 6.5 ), which becomes unacceptably large as N increases.
Gradient-Based Iterative Algorithms for a Large-Sized IRS
This subsection provides a suboptimal approach to solving problems (19) in a faster way for a large N by transforming the problem into an equivalent unconstrained nonlinear optimization problem. For this purpose, we remove the phase constraints in (19) as which has the same optimal value with (19) due to the periodicity of R(φ) for all the entries of φ with period 2π. For ease of exposition, we convert (23) to a minimization problem as min φ∈R N f (φ) (24) by defining f (φ) = −R(φ). Due to the nonconvexity of R(φ) and its complicated form, it is almost impossible to obtain the optimal solution of (24) even for unit modulus reflection. Instead of the sequential optimization optimizing one IRS element at a time [7,30], we solve the problem through an iterative algorithm minimizing a local approximation (or a surrogate function) at each iteration to update its IRS phases φ t for t = 0, 1, · · · simultaneously [34]. To accommodate a large N, we adopt the algorithms based on second-order Taylor series approximations but relying on the gradient ∇ , · · · , ∂ f ∂φ N ] T in their implementation. The algorithms include the gradient descent (GD) [5,34] and quasi Newton methods such as BFGS and L-BFGS [34,35], which are briefly summarized in the following.
The algorithms are based on second-order Taylor approximations that can be expressed in a generic form as where ∆φ t = φ − φ t , g t = ∇ f (φ)| φ=φ t , and A t ∈ C N×N is chosen by an algorithm. The solution is updated by minimizing f A (φ, φ t ) as The GD with the update rule for δ t > 0 at complexity O(N) is obtained with a choice of A t = 1 δ t I M , where the step size δ t is determined by the Armijo rule [5]. The BFGS and L-BFGS update the search point as where Q t is an estimate of the inverse Hessian A −1 t with A t = ∇ 2 f (φ)| φ=φ t to reduce the complexity of the Newton method computing the Hessian and its inverse. The BFGS estimates Q t+1 with Q t , u t = φ t+1 − φ t , and r t = g t+1 − g t at complexity O(N 2 ) [34] whilst the L-BFGS having m B memories estimates Q t+1 with {u i , r i } t i=t−m B +1 at complexity O(m B N) for [35].
Simulation Results
The performance of the IRS-aided uplink NOMA is evaluated when the maximum transmit power is set to P max k = 23 dBm for k ∈ K and the noise power is set to σ 2 = −100 dBm. Practical IRS reflection is modeled with α = 1.6, β min = 0.2, and φ 0 = 0.43π as in Ref. [32]. We set the tolerance of the algorithms to 10 −5 up to maximum 500 iterations. The simulation setup is illustrated in Figure 2, where the (x, y, z) coordinates are given in meter. The BS and IRS are located at (0, 0, 10) and (50, 50, 10), respectively, whilst the devices are uniformly distributed in the shaded rectangular region bounded by (100, −20, 0), (100, 20, 0), (250, −20, 0), and (250, 20, 0). The channels are modelled as where the subscripts LoS and NLoS represent the line-of-sight (LoS) and non-LoS (NLoS) components, respectively, and ω x,y and κ x,y denote the path loss and Rician factor between nodes x and y for x, y ∈ K ∪ {R, B} with R for the IRS and B for the BS. The path loss is given by ω x,y = 10 −3 d −ν x,y x,y at distance d x,y with path loss exponent ν x,y , where ν R,B = 2.2, ν k,R = 2.8, and ν k,B = 4. The Rician factor is set to κ R,B = κ k,R = 2 and κ k,B = 0. We model the LoS components with a uniform linear array for the BS and an N v × N h uniform planar array constructed by N v = 8 and N h = N/8 for the IRS as [7] where a B (ϕ) ∈ C M and a R (ϕ, ϑ) ∈ C N are the array response at the BS and IRS, respectively, defined in [7] with the azimuth (elevation) angle-of-arrival ϕ A x,y (ϑ A x,y ) and angleof-departure ϕ D x,y (ϑ D x,y ) from node x to node y. Specifically, for the antenna and IRS with half-wavelength element spacing, we have for ϕ ∈ [0, 2π) and ϑ ∈ [−π/2, π/2). The NLoS components are modeled to be uncorrelated complex Gaussian.
The average sum rate of the network is shown as the number N of IRS elements increases in Figure 3 when M = 2 and K = 4. We provide the results with unit modulus reflection and practical reflection drawn with solid and dashed lines, respectively. Bound represents the upper boundR =R(X ) obtained with the solutionX of (22), and Maxdet denotes the performance of a feasible solution φ l derived fromX through Gaussian randomization with L = 50. Bound and Max-det are shown up to N = 128 due to the formidable computational time in solving (22) for a large N. BFGS, L-BFGS with m B = 10, and GD implemented with the derived gradient are compared with ConvSeq denoting the sequential optimization in Ref. [7]. The results with random IRS phases denoted by Random are also added to serve as a lower bound. Clearly, the average sum rate increases steeply with N by optimizing the IRS reflection. In cases of unit modulus reflection, Max-det provides the best performance close to the optimal one estimated by Bound for a moderate N. The iterative algorithms provide almost the same performance under unit modulus reflection, but BFGS, L-BFGS, and GD provide a slight gain over ConvSeq increasing with N under practical reflection. However, the gradient-based iterative algorithms are observed to reduce the computational time, as shown in Figure 4, which provides the average evaluation time per sample in obtaining the results of unit modulus reflection in Figure 3 with Intel(R) Xeon(R) Gold 6226R CPU @ 2.90Hz. Clearly, the iterative algorithms exhibit a significant reduction in computational time over Max-det at the cost of a performance loss. The computational time of GD is comparable to that of ConvSeq since both GD and ConvSeq require a large number of iterations until their convergence. L-BFGS provides the best computational time, with less complexity in updating the solution than BFGS and with a smaller number of iterations than GD by finding a better search point through inverse Hessian estimation. Thus, L-BFGS with the derived gradient would be a choice of practical merit for a large N. Figure 5a and N = 256 in Figure 5b. The average sum rate increases as the number K of devices increases. In addition, the gradient-based methods and ConvSeq provide a similar performance except for practical reflection with N = 256. When N = 64 in Figure 5a, the performance of the gradient-based algorithms becomes close to the optimal one as K increases for unit modulus reflection. When N = 256 in Figure 5b, the gradient-based algorithms outperform ConvSeq by about 0.3 dB for practical reflection. Again, L-BFGS provides a good performance among the gradient-based algorithms. Hence, we compare the performance of L-BFGS and ConvSeq for different numbers M of antennas with N = 256 in Figure 6; we set M = 2 in Figure 6a and M = 4 in Figure 6b. The sum rate becomes almost doubled by doubling the number of antennas. Again, L-BFGS provides a similar or slightly improved performance compared with ConvSeq, which is obtained at a computational time about 11% and 18% of ConvSeq with M = 2 and 4, respectively, for most K values in the figures.
Discussion : It is noteworthy that the nonlinear optimization problem in (24) with respect to the IRS phase vector φ is a non-convex optimization problem that exhibits multiple local minima in general. Hence, the iterative algorithms considered herein do not guarantee a convergence to the global optimal point, but to one of the local minima. The gradient-based algorithms updating N variables simultaneously for the next search point tend to find a similar local minimum at a different convergence rate. However, ConvSeq, updating one variable at a time, tends to find a worse point in particular for practical reflection since it resorts to limited information for the next search point. To improve the performance, we may run an iterative algorithm with different initial points, resulting in different local minima so that a better solution can be found. However, it is observed that the gain is trivial for this problem. From this, devising a new algorithm filling the gap to the optimal performance with a complexity between those of Max-det and gradient-based algorithms would be an interesting topic for further study.
Concluding Remarks
We have considered a sum rate maximization problem for the IRS-aided uplink multiantenna NOMA under a generalized reflection model including unit and phase-dependent amplitudes. We have solved the problem through extended SDR to obtain an upper bound on the sum rate and a near-optimal solution for a moderate-sized IRS. We have applied the gradient-based iterative algorithms for a large-sized IRS by providing the gradient in an explicit form under generalized reflection. The results show that, among the gradient-based algorithms, L-BFGS implemented with the derived gradient provides a more competitive solution than the conventional method in both computational time and performance. | 6,042.4 | 2022-02-01T00:00:00.000 | [
"Computer Science"
] |
On the Operator Method for Solving Linear Integro-Differential Equations with Fractional Conformable Derivatives
: The methods for constructing solutions to integro-differential equations of the Volterra type are considered. The equations are related to fractional conformable derivatives. Explicit solutions of homogeneous and inhomogeneous equations are constructed, and a Cauchy-type problem is studied. It should be noted that the considered method is based on the construction of normalized systems of functions with respect to a differential operator of fractional order.
Introduction
There are many different ways of defining fractional operators, unlike in classical calculus, where there is only one way to define the derivative operation. The most common derivatives are Riemann-Liouville and Caputo derivatives, which were successfully used in the modeling of complex dynamical processes in physics, biology, engineering and many other fields [1][2][3][4][5].
It should be noted that questions related to the theorems of existence and uniqueness of solutions of Cauchy-type and Dirichlet-type problems for linear and nonlinear differential equations of fractional order have been developed in sufficient detail, whereas explicit solutions are only known for certain types of linear differential equations of fractional order.
One of the most widely used methods for constructing solutions to differential equations of fractional order is the method of integral transformations. A detailed description of this method can be found in [2,4,5] and other works. An effective method for constructing explicit solutions and solving the Cauchy problem for differential equations of fractional order is based on the Mikusinski operational calculus. In the papers of Yu. Luchko et al. [10][11][12][13][14][15], this method was applied to solve linear differential equations of fractional order with constant coefficients and with derivatives of Riemann-Liouville and Caputo type and the general fractional derivative. This method was later used for a general equation with a Hilfer-type operator [16]. In the paper [17] A. Pskhu formulated and solved the initial problem for linear ordinary differential equations of fractional order with Riemann-Liouville derivatives. He reduced the problem to an integral equation and constructed an explicit solution in terms of the Wright function. We also note that in [18,19] the Cauchy problem for differential equations of fractional order has been studied using the Adomian decomposition method.
In this paper, we consider an operator method for constructing solutions to fractional differential equations. This method is based on the construction of normalized systems with respect to operators of fractional differentiation. The method of normalized systems was introduced in [20] and used to construct exact solutions to the Helmholtz equation and the polyharmonic equation. The method of normalized systems was used to solve the Cauchy problem for ordinary differential equations with constant coefficients [21], as well as to construct solutions to differential equations associated with Dunkl operators [22,23]. Later, in [24][25][26][27], this method was applied to the construction of an explicit form of solution of fractional differential equations.
Let us first consider the definition of fractional-order integro-differentiation operators that will be used in this paper.
Let 0 < a < b < ∞. For function f (x) ∈ C 1 [a, b], we define the operator In case α ∈ (0, 1), this operator corresponds to an integral operator of the form Let 0 < α, β and n a T α = a T α · a T α · ... · a T α n , n = 1, 2, .... In [8], the following integrodifferential operators were considered: Cβ In case α = 1, operator β a J α coincides with the integration operator of the β order in the Riemann-Liouville sense, whereas β a D α and Cβ a D α coincide with the differentiation operators of the β order in the Riemann-Liouville and Caputo sense [2].
It should be noted that the methods for solving fractional differential equations with derivatives β a D α and Cβ a D α have been studied by many authors, in particular, in works [28][29][30][31][32].
In [28] the theorem on the existence of a unique solution to the Cauchy problem is proved by the method of successive approximations. In [29], using the generalized integral Laplace transform for the case 0 < β ≤ 1, explicit solutions to the following Cauchy problems were constructed Similar results were obtained in [30][31][32]. The use of differential equations of fractional order with derivatives β a D α and Cβ a D α in the modeling of biological processes (fractional analogue of the Bergman model), electrical circuits, motion of electrons under the action of the electric field (fractional analogue of the Drude model), as well as in the analysis of applied dynamic models (Rabinovich-Fabrikant attractor), is described in [33][34][35][36][37].
Further, in the work of A.A. Kilbas and M. Saigo [38], on the basis of the formula for the composition of operators of integration of fractional order with a three-parameter Mittag-Leffler function E β,m,l (z), an algorithm for solving an integral equation of the type In this paper (Section 3), this result is generalized for integral equations with the operator β a J α . In this case, the solution to the integral equation is constructed by a constructive method, i.e., by the method of normalized systems, and it is proved that the solution to the integral equation is represented in terms of Mittag-Leffler-type functions E β,m,l (z). The solution to the integral equation is constructed in a closed form when the right-hand side of the equation is a quasi-polynomial. In the particular case of parameters of the considered integral operator, the results obtained in this work agree with the results obtained in [38].
In Section 4 of this work, the method of normalized systems is used to construct solutions to iterated differential equations of fractional order. In contrast with our work [25], in this case, fractional-order differential equations with degeneration are considered. The construction of solutions to such equations has not been studied by other authors. It should be noted that in constructing solutions to these equations, a new class of special functions E p+1 β,m,l (z), representing a more general form of three-parameter Mittag-Lefflertype functions E β,m,l (z), arises.
In the fifth and sixth sections of the work, application of the method of normalized systems to the construction of an explicit solution of one class of fractional-order differential equations with operators β a D α and Cβ a D α is considered. Homogeneous and inhomogeneous equations are studied. The considered equations and, therefore, the results obtained, generalize the results obtained in [30][31][32], as well as the results obtained in the work of A.A. Kilbas and M. Saigo [39].
At the end of the section, an example of solving an equation for electrical circuit simulation is given.
Further, we present some well-known information about the method of normalized systems.
Let L 1 and L 2 be linear operators, acting from the functional space X to X, L k X ⊂ X, k = 1, 2. Let functions from X be defined in a domain Ω ⊂ R n . Let us give the definition of normalized systems [20].
having the base f 0 (x), if, on this domain, the following equality holds: If L 2 = E is a unit operator, then a system of functions f − normalized with respect to (L 1 , I) is called f − normalized with respect to L 1 , i.e., If f (x) = 0, then the system of functions { f i (x)} is just called normalized. The main properties of the systems of functions f -normalized with respect to the operators (L 1 , L 2 ) on Ω have been described in [20]. Let us consider the main property of the f -normalized systems.
The following proposition allows us to construct an f −normalized system with respect to a pair of operators (L 1 , L 2 ). Proposition 2 ([27]). If for L 1 there exists a right inverse operator L −1 1 , i.e., L 1 · L −1 1 = E, where E is a unit operator and L 1 f 0 (x) = f (x), then a system of functions f i (x) = L −1 1 · L 2 i f 0 (x), i ≥ 1 is f −normalized with respect to a pair of operators (L 1 , L 2 ) on Ω.
Properties of Integro-Differential Operators
Let us consider some properties of operators Proof. By the definition of the operator The lemma is proved.
Let n − 1 < β < n, n = 1, 2, .... Then, by the definition of the operator β a D α and taking into account (5), we get If in the latter equality the parameter s takes one of the values Similarly, if s takes one of the values Then, for these values of s j , we get The lemma is proved.
The following assertion was proved in [8].
is valid.
Construction of a Solution to an Integral Equation
Let α, β > 0, m = 1, 2, .... Let us consider in the domain x > a the following integral equation It should be noted that for the case of the Riemann-Liouville operator, i.e., for α = 1, integral Equation (9) was studied in [38]. In this work, in the case when α = 1, based on the properties of a special Mittag-Leffler type function an algorithm for constructing a solution to Equation (9) was proposed for the cases when f (x)is a polynomial or a quasi-polynomial. The properties of the function E β,m,l (z) were also studied in [39][40][41][42]. In our case, to construct a solution to Equation (9), we use the method of normalized systems. For this purpose, we introduce the notations where L 1 is the unit operator. Then, Equation (9) can be rewritten in the form (4).
It is known (see, for example, [8]) that the operator β a J α is bounded from the space C[a, b] to the space C[a, b], and therefore, for each k = 1, 2, ..., an inclusion It is obvious that Hence, the system of functions ϕ k (x) from (11) is f -normalized with respect to the pair of operators The following assertion is valid.
and ϕ k (x) be defined by equality (11). Then, the function is a solution to Equation (9) from class C[a, b].
Proof. Let f (x) ∈ C[a, b]. Then, formally applying operators L 1 and L 2 to the series (11), we have Hence, function ϕ(x) from (12) formally satisfies Equation (9). It remains to study the convergence of series (11). For this, let us estimate functions ϕ k (x).
For k = 1 we get For k = 2, we get In the general case, using the method of mathematical induction, one can prove that the inequality This implies an absolute and uniform convergence of series (12) and the inclusion ϕ(x) ∈ C[a, b]. The theorem is proved. Now, let us construct explicit solutions of Equation (9) for particular cases of function f (x).
Then, the solution to Equation (9) is the function Proof. Under the conditions of this theorem, system (11) can be written as Find the explicit form of ϕ k (x). For k = 1, we get For k = 2, we get In the general case, for an arbitrary k ≥ 1, we get Hence, for the solution of Equation (9), we obtain representation (13). The theorem is proved.
where f k is a real number. Then, the solution to Equation (9) is written as where E β,γ (z) is a Mittag-Leffler type function [2].
Construction of Solutions for Homogeneous Fractional Differential Equations
Let where (15) can be rewritten in the form (4), and to construct a solution to this equation we have to construct a 0-normalized system with respect to operators B α,β γ , λ . In this case, we will use the method proposed in [25].
Definition 2 ([25]).
Operator D µ is called generalized-homogeneous of the µ order with respect to the variable t, if where 0 < a ≤ µ is a real number, C µ,a is a constant.
Let s ∈ R and D µ be a generalized-homogeneous operator of order µ. Let us suppose that operator D µ can be applied to the monomial t µk+s . Based on equality (16), we introduce the following coefficients Let us assume that C(µ, s, i) = 0, i ≥ 1.
As in case (17), consider the coefficients By virtue of equality (7) for k ≥ 1, we get Hence, By analogy with (18), we construct the functions Further, as then, introducing notations m = 1 + γ β , = s j +γ β for coefficients 1 C(α(β+γ),s j ,i) we get: If we now change the index k to k + 1, we finally obtain the equality Hence, function y j (x) in (20) satisfies the representation Thus, the following assertion is valid. We can similarly transform the functions y j,p (x) from (21). We get where function E p+1 β,m, (z) is defined by the equality , n ≥ 1.
Solutions to differential equations with the operator RL B γ α,β = (x − a) −αγ β a D α are constructed in a similar way.
The following assertion is valid.
Construction of Solutions to Inhomogeneous Differential Equations of Fractional Order
In this section, we consider a method for constructing a solution to inhomogeneous differential equations of fractional order with operators β a D α and Let us introduce the notations Then Equation (23) can be rewritten in the form (4).
First, we construct a solution to the homogeneous equation. To do this, we will construct 0-normalized systems with respect to the pair of operators Cβ a D α , λ(x − a) αγ · ν a J α . From Proposition 2, it follows that for this purpose, it is necessary to find all solutions of the equation Let f 0,j (x) = (x − a) αj , j = 0, 1, ..., n − 1. Consider a system of functions Let us find an explicit form of the system of functions f i,j (x).
The following assertion is valid. where Proof. By virtue of equality (8), we get Hence, for function f 1,j (x), we obtain Further, let equality (25) hold for a natural number r. Then, for r + 1, we get Thus, equality (25) also holds for the case r + 1. Obviously, for the given values of parameters α, β, γ, ν, for any i ≥ 1, the inequality C β,ν (β + ν + γ, j, i) = 0 is satisfied. The lemma is proved.
Then for all values j = 0, 1, ..., n − 1 the system of functions (25) is 0-normalized with respect to the pair of operators Cβ a D α , λ(x − a) αγ · ν a J α in the domain x > a.
Proof. Consider the function
Since function (27) is an integral function, it is obvious that and Cβ a D α y j (x) = λy j (x) ∈ C[a, b], for j = 0, 1, ..., n − 1. Therefore, functions y j (x) from (28) are solutions to the homogeneous Equation (23). The proof of the linear independence of solutions (28) will be shown below in Theorem 11. The theorem is proved.
Further, we will consider a method for constructing a solution to the inhomogeneous equation. Let f (x) ∈ C[a, b]. Then, by the proposition of Lemma 3, the function f 0 (x) = β a J α f (x) satisfies the equality Consider the system ]. Then the system of function (29) is f (x)-normalized with respect to the pair of operators Proof. Let f (x) ∈ C[a, b], then Further, we use the notation M = Hence, for any i ≥ 1, the estimate is valid: Let us calculate the value of the function g i (x) = β a J α · (x − a) αγ · ν a J α i (x − a) αβ . Due to equality (25), we get (30) are valid. Moreover, where a J α f i−1 (x) also belongs to class C[a, b] and the equality is satisfied: It is obvious that . Thus, in the class of functions X = C[a, b], the equalities (29) is f − normalized with respect to the pair of operators The lemma is proved.
a J α f (x) and function f i (x), i ≥ 1 are defined by equality (29). Then, the function is a particular solution of Equation (23) As the latter series converges uniformly in the domain a ≤ t ≤ b, the sum of this series, and hence the function y f (t), belong to class C[a, b]. The theorem is proved.
Let us investigate the representation of function (31) for some special cases of function f (x).
Proof. In this case, for f i (x) from (29), we get The lemma is proved.
This lemma implies the following assertion.
Then the particular solution of Equation (23) is written as Remark 4. In case α = 1 representation (32) of a particular solution of Equation (23) coincides with the result of [27].
Further, let us investigate the following Cauchy-type problem where d k are real numbers. First, let us consider the homogeneous problem (33), (34).
From Theorem 10 the following theorem can be derived.
Proof. For functions y 0 (t), y 1 (t), ..., y n−1 (t) we introduce an analogue of Wronskian: W α (x) = det m a T α y j (x) n−1 m,j=1 , a ≤ x ≤ b. As in the case of the theorem for linear differential equations of order n, the following statement can be proved.
In the simple case when γ = 0 from the last formula and from the statement of Theorem 10 the following assertion follows.
Corollary 6. Let f (t) be a smooth function. Then, the solution to the Cauchy problem Cβ a D α y(x) = λ ν a J α y(x) + f (x), a < x, is the function In particular, for n = 1, ν = 0 we get This formula for a = 0 and 0 < α ≤ 1 was obtained in [30].
In conclusion, we will consider an example of applying the results obtained to the equation in the theory of electrical circuits. Example 1. Let 0 < β ≤ 1, α, γ ≥ 0. Consider the following Cauchy problem Cβ a D α V (t) + ρ(t − a) γα · ν a J α V(t) = A, t > a, V(0) = V 0 . | 4,354 | 2021-08-02T00:00:00.000 | [
"Mathematics"
] |
Doped and undoped graphene platforms: the influence of structural properties on the detection of polyphenols
There is a huge interest in doped graphene and how doping can tune the material properties for the specific application. It was recently demonstrated that the effect of doping can have different influence on the electrochemical detection of electroactive probes, depending on the analysed probe, on the structural characteristics of the graphene materials and on the type and amount of heteroatom used for the doping. In this work we wanted to investigate the effect of doping on graphene materials used as platform for the detection of catechin, a standard probe which is commonly used for the measurement of polyphenols in food and beverages. To this aim we compared undoped graphene with boron-doped graphene and nitrogen doped graphene platforms for the electrochemical detection of standard catechin oxidation. Finally, the material providing the best electrochemical performance was employed for the analysis of real samples. We found that the undoped graphene, possessing lower amount of oxygen functionalities, higher density of defects and larger electroactive surface area provided the best electroanalytical performance for the determination of catechin in commercial beer samples. Our findings are important for the development of novel graphene platforms for the electrochemical assessment of food quality.
Heteroatom doped graphene has been lately considered as an ultimate candidate for numerous applications due to the possibility to tailor the material characteristics and to improve the physicochemical, optical, structural and electronic properties [1][2][3][4][5][6][7][8] . It has been recently demonstrated that heteroatom doping can endow graphene materials with improved electrochemical properties [9][10][11] . The effect of doping on the electroanalytical performance of graphene platforms has been investigated for various dopant types and concentrations, and it has been shown that both p-type and n-type graphene can provide an improved electrochemical response depending on the different application [12][13][14][15] . In fact, it was found that doping with heteroatoms with different electronegativity can favour the thermodynamic interaction between the graphene platform and the analysed probe, thus providing an enhanced electroanalytical signal 12,15 . Parallel to doped graphene, a comparison with undoped material should always be performed when studying the effect of doping on the behaviour of a graphene electrochemical platform. Specifically, the material characteristics such as amount of oxygen functionalities, presence of defects and value of surface area should be carefully evaluated in order to establish whether the increased response is due to the former properties or to the kind and amount of dopant. To date, a very limited number of studies provide a comprehensive investigation on these aspects. Hence, there is urgent need for more systematic studies in which all material and analyte features are taken into account.
In this work we investigate the effect of heteroatom doping on the detection of catechin, a polyphenol generally used as an index of food and beverage quality. A part from traditional techniques based on tedious and expensive chromatographic analysis [16][17][18] , catechin has been also detected by electrochemistry, using carbon platforms such as single-walled and multi-walled carbon nanotubes 19,20 . To the best of our knowledge there are no studies in the literature reporting the electrochemical detection of catechin on doped-graphene materials. In this study, we employ two graphene platforms doped with heteroatoms showing different electronegativity namely boron doped 1 Scientific RepoRts | 6:20673 | DOI: 10.1038/srep20673 graphene (p-type doping) and nitrogen doped graphene (n-type doping), and we compared their electrochemical performance with that of a thermally reduced undoped graphene for the detection of catechin. We chose for the comparison an undoped material with specific structural characteristics such as low concentration of oxygen functionalities (given by a high C/O ratio from XPS analysis), large amount of structural defects (corresponding to low D/G ratio obtained by Raman spectroscopy) and large electroactive surface area. We wanted to address the question whether the presence of dopant could still provide an enhanced electrochemical performance as compared to the chosen undoped graphene.
We found that, for the examined case, the best electroanalytical response was provided by the undoped graphene which was the material possessing the highest C/O ratio and the largest D/G ratio and electroactive surface area as compared to both heteroatom doped graphene materials. This opens new possibilities in the choice of the best suited graphene platform for electrochemical applications.
Experimental
Materials and Apparatus. Glassy carbon (GC) electrodes, (diameter = 3 mm), Ag/AgCl reference electrode and platinum counter electrode were obtained from CH Instruments (Austin, TX, USA). Boron -doped diamond electrode with a doping level of 1000 ppm of B and an H terminated surface was purchased from Windsor Scientific.
A μ Autolab type III electrochemical analyzer (Eco Chemie, The Netherlands) connected to a personal computer and controlled by General Purpose Electrochemical Systems, GPES Version 4.9 software (Eco Chemie) was used to perform differential pulse voltammetry measurements.
Preparation of Thermally Reduced Graphene. Thermally reduced graphene was prepared first by synthesising graphite oxide through the Staudenmaier method 21 before thermal exfoliation/reduction at 1050 °C was performed. Graphite oxide was obtained by adding 27 mL of nitric acid (98%) and 87.5 mL of sulphuric acid (98%) into a flask containing a magnetic stir bar. The mixture was cooled to 0 °C before the addition of 5 g of graphite. To ensure homogeneous dispersion and to avoid agglomeration, the mixture was stirred vigorously. Subsequently, 55 g of potassium chlorate was added slowly to the mixture at the maintained temperature of 0 °C. Once potassium chlorate was completely dissolved, the cap of the flask was loosened to allow any produced gas to escape. The mixture was stirred continuously for 72 hours at room temperature for a complete reaction. After which, the mixture was decanted and poured into 3 L of distilled water. The formed graphite oxide was then dispersed in hydrochloric acid (5%) and repeated centrifugation and re-dispersion into distilled water was performed with silver nitrate and barium nitrate until there was a negative reaction to chloride and sulphate ions. Finally, the obtained slurry was dried at 50 °C for 48 hours in a vacuum oven.
Thermally reduced graphene was prepared by introducing 0.2 g of graphite oxide into a quartz capsule connected to a magnetic manipulator, which was inside a vacuum tight tube furnace with controlled atmosphere. The magnetic manipulator created a temperature gradient of over 1000 °C min −1 . The sample was flushed with nitrogen repeatedly before it was inserted by the magnetic manipulator into the preheated furnace and held for 3 minutes. The nitrogen flow rate was maintained at 1000 °C min −1 to remove any exfoliation by-products from the procedure.
Preparation of boron-doped graphene (BDG).
Boron-doped graphene (BDG) was prepared from graphite oxide synthesized through Staudenmaier method. Graphite oxide was thermally exfoliated in the presence of a boron precursor, namely boron trifluoride diethyl etherate (BF 3 Et 2 O). Exfoliation was performed in a bubbler filled with the liquid boron precursor at 20 °C and 1000 mbar. Nitrogen carrier gas with a flow rate of 100 mL/ min was used and dilution was performed with 1 L/min nitrogen and hydrogen/nitrogen mixture (0.5 L/min N 2 and 0.5 L/min H 2 ). The reactor was continuously flushed with nitrogen and the flow of boron precursor was stabilised for 5 minutes before it was introduced into the hot region of the reactor. Exfoliation was then performed for 12 minutes at 1000 °C.
Preparation of nitrogen-doped graphene (NDG).
Nitrogen-doped graphene was prepared from graphite oxide synthesized through Hummers method 22 before exfoliation was performed under ammonia atmosphere.
Graphite oxide was prepared by adding 2.5 g of sodium nitrate, 5 g of graphite and 115 mL of sulphuric acid (98%) into a flask under continuous stirring. The mixture was cooled in an ice bath before the addition of 15 g of potassium permanganate. Vigorous stirring was maintained for 2 hours to obtain a homogenous solution. After that, the mixture was cooled down and then reheated to 35 °C for 30 minutes. Subsequently, the mixture was diluted with 250 mL of deionised water and it was further heated to 70 °C. The temperature of the mixture was maintained for 15 minutes before it was further diluted with 1000 mL of deionised water. The removal of unreacted manganese dioxide and potassium permanganate was carried out by adding hydrogen peroxide (3%) into the mixture and decanting. Repeated centrifugation and redispersion into distilled water was performed with barium nitrate until a negative reaction to sulphate ions was observed. Graphite oxide slurry was then dried at 60 °C for 48 hours in a vacuum oven.
Nitrogen-doped graphene was prepared by exfoliation of produced graphite oxide in ammonia atmosphere. A quartz glass capsule was filled with 100 mg of graphite oxide before it was connected to a magnetic manipulator and placed in a horizontal quartz glass reactor. The reactor was flushed continuously with nitrogen before it was introduced into the hot region. Then the nitrogen flow was changed to ammonia. The temperature of the mixture was maintained for 12 minutes at 600 °C and the ammonia flow rate of 300 mL/min was used to remove any exfoliation by-products. Fig. 1. shows a schematic of the preparation of undoped and doped graphene materials.
Electrochemical measurements. The synthesised thermally reduced graphene (TRG), boron-doped graphene (BDG) and nitrogen-doped graphene (NDG) were ultrasonicated for few minutes before each use. After ultrasonication, 1 μ L of the material was deposited onto the surface of a glassy carbon (GC) working electrode. The solvent was left to evaporate at room temperature in order to obtain a randomly distributed film of the desired material on the electrode surface. After each measurement the surface of GC was cleaned by polishing with 0.05 μ m alumina powder on a polishing cloth.
Electrochemical experiments were performed in a 4 mL voltammetric cell at room temperature (25 °C) using a three electrode configuration.
Differential pulsed voltammetry parameters used for the experiment were applied as follows: 3 s equilibration time, 50 ms modulation time, 0.5 s interval time, 25 mV modulation amplitude, and 4 mV step. The raw data obtained were treated by a baseline correction with a peak width of 0.01, using GPES software. Measurements were performed in a 4 mL solution containing various concentrations of standard analyte in 100 mM phosphate buffer solution (PBS) at pH 7.3. Catechin hydrate in increasing concentrations from 1.2 μ M to 12.0 μ M was used for the measurements, similarly to previous findings 23 .
The analysis of commercial lager beer sample was performed by using a dilution factor of 1:10. Standard addition method was used for the analysis of real samples.
Results and Discussion
In this study we wanted to compare the electrochemical performance of undoped and doped graphene platforms namely thermally reduced graphene (TRG), boron doped graphene (BDG), and nitrogen doped graphene (NDG) for the detection of catechin, an important polyphenol which is correlated to food quality. Unmodified glassy carbon (GC) electrode and boron doped diamond (BDD) electrode were also used as reference materials. We wanted to investigate if the presence of heteroatoms with different electronegativity could have an influence on the electrochemical response provided by the graphene platform, as it could be expected from previous works [12][13][14][15]24,25 , or if the material characteristics would play a major role.
In order to gain more insight into the material properties, characterization was performed by XPS, Raman spectroscopy and prompt gamma-activation analysis 25 and the results were collated in Table 1. The C/O ratio provided by XPS analysis gives an indication on the amount of oxygen functionalities which are present on the material surface, being the higher C/O ratio indicative of a lower amount of oxygen containing groups. From the obtained results we can conclude that a larger amount of oxygen functionalities is present on NDG surface, followed by BDG and finally TRG (for the detailed XPS spectra please refer to Figure S1 in Supporting Information). Raman characterization provides information on the structural disorders on the material surface. The D band at around 1350 cm −1 is correlated to the presence of defects due to sp 3 hybridized carbon whilst the G band at around cm −1 1560 indicates the sp 2 hybridized carbon. The ratio between the intensities of D and G band provide information on the degree of disorders in the carbon structure of the material. As depicted in Table 1, a larger amount of defects is present on TRG surface while BDG is the material containing less structural disorders. In addition, SEM characterization confirmed the successful thermal exfoliation of all graphene materials, showing a typical exfoliated structure (see Figure S3, Supporting Information).
The oxidation of catechin occurs sequentially at the catechol and resorcinol group respectively 27 . The first oxidation is a reversible process taking place at the catechol 3′ , 4′ -dihydroxyl electron-donating groups, while the second oxidation is an irreversible process occurring at the hydroxyl group of resorcinol group (see Fig. 2).
The oxidation process is pH dependant, with a shift towards lower oxidation potentials when the pH of the solution is increased from 3.5 to 8.0 28 . For this reason the measurement was performed in the higher pH range, in order for the oxidation to occur at lower potentials, which in turn contributes to a better selectivity for real sample analysis. Figure 3 shows a preliminary study comparing the oxidation peaks of catechin on the five different materials for 12.0 μ M concentration, and Table 2 shows the collated data from Fig. 3. With reference to Fig. 3 and Table 2 all graphene materials, either doped or undoped, show an improved electrochemical response in terms of peak intensity when compared to GC bare electrode, while the oxidation potential is similar for all materials including GC. On the other hand, BDD shows a poorer response in terms of both peak intensity and peak potential. In fact, the oxidation of catechin on BDD happens at a much higher potential as compared to the rest of materials, as also depicted in Fig. 4.
The observed trend could be attributed to the structure of BDD which contains sp 3 hybridized carbon and therefore lacks of the sp 2 network that would be necessary to form π -π stacking interactions with the aromatic polyphenol used as probe. Such interactions, which are very likely to occur on both doped and undoped graphene materials, are able to promote an accelerated heterogeneous electron transfer 12 .
The response from the oxidation of catechin on bare GC, BDD, TRG, BDG and NDG was studied between 1.2-12.0 μ M and the voltammograms were displayed in Fig. 5. Calibration curves of peak current (nA) versus concentration (μ M) were plotted to study the sensitivity, selectivity and linearity of the response of each material towards the oxidation of catechin. The slope of calibration curve, the correlation coefficient (R 2 ) and the peak width at half height (W 1/2 ) are consolidated in Table 3. From the extracted data, the calibration sensitivity for the oxidation of catechin is the highest at 143.22 nA μ M −1 on TRG, followed by BDG at 88.283 nA μ M −1 , NDG at 81.282 nA μ M −1 , GC at 68.267 nA μ M −1 and finally BDD at 1.6946 nA μ M −1 .
Overall, both undoped and doped graphene materials showed enhanced sensitivity on the detection of catechin as compared to bare GC, whilst among the doped graphenes, BDG showed a better sensitivity than NDG. All materials presented good linearity of response with R 2 ≥ 0.9797 for all graphene platforms. The influence of different materials on peak width at half height (W 1/2 ) for 12.0 μ M catechin was also investigated to correlate the parameter to the selectivity of the materials in the presence of interferences. With the data collated, BDD has the highest W 1/2 while the other materials have similar W 1/2 with slight improvement for TRG and NDG as compared to GC.
As observed form the material characterization, TRG showed the lowest content of oxygen functionalities, the highest amount of structural disorders and the largest electroactive surface area. All these factors contributed to improve the material electroanalytical performance, thus resulting in enhanced sensitivity of the electrochemical signal. As for a comparison between the doped graphenes, BDG showed better calibration sensitivity as compared to NDG despite the lowest amount of defects and electroactive surface area presented by the former. Clearly, among the doped graphenes, the electrochemical response is mostly influenced by the kind of heteroatom rather than the properties of the materials. In fact, as recently demonstrated, the favourable thermodynamic interactions between the electron withdrawing boron and the electron donating oxygen groups of the analysed probe strongly influences the oxidation process 15 .
Given the best electrochemical performance of TRG in terms of sensitivity, selectivity and linearity of response, the material was chosen for the application to real sample analysis.
The results obtained for three commercial beer samples and represented as catechin equivalents are depicted in Table 4. The results reveal the dissimilar polyphenol content of the three beer samples due to their composition (relative ratio of malted barley and hops) and brewing process 29 . In addition, a good linearity (R 2 ≥ 0.9589) and repeatability of results (RSD ≤ 10.44%) were achieved.
Finally, in order to confirm the selectivity of the response towards catechin in real samples, a study on TRG was performed by measuring the concomitant current response of luteolin, another polyphenol present in beer. As it can be seen in Figure S4 (Supporting Information), a significant signal separation of about 120 mV was recorded between catechin and luteolin.
Conclusions
We investigated the influence of structural properties of doped and undoped graphene materials on their electrochemical performance for the assessment of catechin, a standard polyphenol commonly used as index of food quality. We observed that in general, graphene materials show an enhanced electroanalytical response when compared to bare glassy carbon and boron doped diamond electrodes. This is because of the larger electroactive surface area they possess, together with the sp 2 network which favours the interactions with the analyte by π -π stacking. As a result of that, an increased intensity of the peak current and a lower oxidation potential was observed on both undoped and doped graphene platforms. Overall, the undoped graphene namely thermally reduced graphene (TRG) provided the best analytical performance in terms of sensitivity, selectivity and linearity of response due to the intrinsic properties of the material such as lower content of oxygen functionalities, higher amount of structural disorders and larger electroactive surface area as compared to doped graphenes. We demonstrated that in the reported case, the outstanding material properties play a major role towards the oxidation of catechin rather than the nature of heteroatom used for the doping. In addition we found out that within the heteroatom doped materials, the best performance was provided by the boron doped graphene because of the favourable effect of boron in promoting the thermodynamic interactions between the analytical probe and the graphene platform. Finally, we demonstrated the suitability of TRG platform for the real sample analysis by determining the amount of polyphenols, expressed as catechin equivalents, in three commercial beer samples. These findings provide an insight into doped and undoped graphene suitability for food science application. Table 4. Catechin equivalents (CE) in beer samples measured by using TRG platform. CE value was extrapolated by using standard addition method for each beer sample. (CE = milligrams of catechin per 100 ml). | 4,536 | 2016-02-10T00:00:00.000 | [
"Materials Science"
] |
Direct detection of electroweak dark matter
TeV-scale dark matter is well motivated by notions of naturalness as the new physics threshold is expected to emerge in the TeV regime. We generalise the Standard Model by including an arbitrary SU(2) multiplet of dark matter particles in non-chiral representation. The pseudo-real representations can be viable DM candidates if one considers a higher-dimensional operator which induces mass-splitting, and avoids the tree-level inelastic scattering through Z-boson exchange. These effective operators give rise to sizable contributions from Higgs mediated dark matter interactions with quarks and gluons. A linear combination of the effective couplings named λ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\lambda $$\end{document} is identified as the critical parameter in determining the magnitude of the cross-section. When λ\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\lambda $$\end{document} is smaller than the critical value, the theory behaves similar to the known renormalisable model, and the scattering rate stays below the current experimental reach. Nevertheless, above the criticality, the contribution from the higher-dimensional operators significantly changes the phenomenology. The scattering amplitude of pseudo-real models will be coherently enhanced, so that it would be possible for next generation large-exposure experiments to fully probe these multiplets. We studied the parameter space of the theory, taking into account both indirect astrophysical and direct search constraints. It is inferred that multi-TeV mass scale remains a viable region, quite promising for forthcoming dark matter experiments.
Introduction
Astronomical measurements from sub-galactic to large cosmological scales require the existence of huge amount of obscure non-luminous matter in the universe which is not contained in the Standard Model (SM) of particles [1,2].Dark matter (DM) constitutes great majority of the total mass density of the cosmos [3], and takes a key role in the characteristics of large and small astrophysical structures [4].
Among various hypotheses concerning the nature of DM and its interactions, it is compelling to couple the dark sector to the SM in a such minimal way that no new gauge field is introduced.In this method, the positive features of the Standard model is preserved and no new forces are added to the physics.Since weak bosons are the only fundamental force carriers that can mediate interactions with dark matter, this construction can be realised simply through introducing a non-trivial electroweak multiplet.
We generalise the Glashow-Weinberg-Salam (GWS) Theory by an extra fermionic n-tuplet of SU( 2)×U(1) y symmetry group in the non-chiral representation, whose neutral component is referred to as electroweak dark matter (EWDM).The important feature of the fermionic version is that it limits the renormalisable interactions of the dark sector with the SM only through the electroweak gauge bosons.This property allows the limited number of free parameters of the model to be determined robustly; so that the theory can provide accurate phenomenological predictions for a wide range of DM experiments.It also avoids emergence of dangerous decay operator that make the system unstable.
Extensions of the SM involving an additional weak multiplet have been studied in a range of contexts such as supersymmetry [5], little Higgs model [6], inert Higgs models [7,8], neutrino mass generation [9], Kaluza-Klein theory [10], etc.These extensions mostly contain lower dimensional representations of SU(2) doublet and triplet.However, larger multiplets have been proposed in recent works such as inert Higgs doublet-septuplet [11], exotic Higgs quintuplet [12], quintets in neutrino mass mechanism [13,14], SO (10) and E 6 scalar and fermionic multiplets in grand unified theories stabilised by a remnant discrete symmetry [15][16][17]; and finally fermionic quintuplet and scalar septuplet in minimal dark matter model.The latter remains stable due to an accidental symmetry in the SM gauge groups and Lorentz representations leaving no renormalisable decay mode for DM [18,19].
Direct detection (DD) as one of the primary means in the search for dark matter, looks for nuclear recoil as the signal of an exotic particle collision with the detector.DM interactions with visible matter are so weak that there is a small probability to detect dark matter scattering off nucleus in the target volume.Nevertheless, if such a rare event is observed, DD experiment would reveal important properties of dark matter including its mass and coupling strength with the SM.
The past literature on direct detection mainly focused on the real representations of the electroweak dark matter, because the phenomenology was straight forward in comparison with the complex models.In addition, the effect of the mass-splitting and higher-dimensional couplings on the DD cross-section has been overlooked in the past.
In this work, an Effective Field Theory (EFT) approach is employed to describe the non-renormalisable interactions of dark sector with the SM at low energies.We include the lowest-dimensional effective operators that are allowed by the symmetries of the electroweak theory.These operators encapsulate the effects of heavy particles, non-perturbative contributions and ultraviolet completions at higher energies.The coefficients of these operators can be constrained by the available data from today's relic density, indirect searches, direct detection and collider experiments.
These effective terms in the Lagrangian break the initial U(1) D symmetry of the dark sector down to Z 2 .Introducing new off-diagonal components to the mass matrix, they split the pseudo-Dirac dark matter into two Majorana fermions.This mechanism eliminates the dangerous tree-level DM coupling to the nucleon and therefore recovers the pseudo-real representations of EWDM theory, which otherwise would have been ruled out by current constraints.
In the previous work [20], the thermal masses of all possible EWDM models were computed using freeze-out mechanism.We also studied the gamma-ray probes of the theory in a variety of astronomical sources including the Milky Way's black hole, inner Galaxy and dwarf satellites for continuum and line spectra.
In the current paper, we include the non-renormalisable interactions in the full theory to study the impact of the higher dimensional operators on the effective scattering of EWDM off nucleus.The coupling of SU (2) multiplet to Higgs boson induced by five-dimensional operators, gives rise to new scattering diagrams.We evaluate the Wilson coefficients corresponding to these processes, and analyse the behaviour of the spin independent (SI) cross-section with respect to the changes in the higher dimensional coupling constants.Finally, the phenomenological results are confronted with the latest experimental data as well as projected sensitivities of the future DD experiments.
This article is organised as follows: In the next section, we provide a generalised framework to assess how DM particles can couple to the SM electroweak gauge bosons, and will introduce the electroweak theory of dark matter.The chiral and pseudo-real representations of such DM theory are reviewed, and mass splitting between components of EWDM multiplet is analysed.It will be explained that, as a general rule, any pseudo-real model can avoid bounds from direct detection experiments, by including a dimension five operator that splits the neutral Dirac state into two Majorana components.
After a brief review of the low-energy effective interactions of electroweak dark matter with nuclei in section 3, we evaluate matrix elements of effective operators at parton level, and discuss the elastic spin-independent cross-section for the scattering process.In section 4, using Feynman diagram matching, we provide a detailed calculation of the Wilson coefficients for both quark and gluon interactions, and explain the scale dependence of these coefficients.Section 5 is devoted to numerical computation of the spin-independent cross-section for EWDM scattering off nuclei.The results will be compared with current experimental bounds and projected sensitivities.Next, we combine the constraints from direct detection with astronomical indirect probes to explore the parameter spaces for real and complex models.Finally, we conclude the study in section 6.
Details of the interaction Lagrangian of the dark sector and relevant Feynman rules are presented in the Appendix A. We provide in outline, the loop integrals required to compute the Wilson coefficients of the effective theory in the Appendix B.
EWDM Theory
In this section, we study the extension of the SM by adding an arbitrary fermionic n-tuplet charged under SU(2)×U(1) y .A fourth chiral generation is severely disfavoured by electroweak precision [21,22] and Higgs production experiments [23,24], so it is conceivable that dark matter belongs to a non-chiral representation of the electroweak sector.We will briefly review the pseudo-real representations of this theory, investigate the mass split between different components, and explain the phenomenological consequences.
Non-chiral Dark Matter
Non-chiral or vector fermion has the property that its right and left chirality components transform in the same way under gauge symmetry [25,26].As a result a gauge-invariant mass term χχ is allowed.Their masses are unbounded as the mass is not obtained through EWSB mechanism [27].Their coupling to the electroweak bosons Z and W ± are purely vector χγ µ χ hence having the left and right currents on an equal footing.
Vector fermions are not subject to bounds resulting from the electroweak precision data.A degenerate non-chiral multiplet gives no contribution to the value of S, T and U oblique parameters [28].
The multiplets are labelled by dimension of the representation and the hypercharge (n, y).Considering the normalisation convention q = y + t (3) for Gell-Mann-Nishijima relation, the hyper-charge is the offset of the electric charge from the range of the weak isospin values.So the electric charge of the i th element in the multiplet reads: Since dark matter is electrically neutral, then its weak isospin must have the same value as the hyper charge −t (3) 0 = y.Therefore a generic electroweak gauge n-tuplet takes the form: where χ q denotes the element with electric charge q.The 1 2 (n + 1) + y neutral component χ 0 is the actual dark matter candidate. 1 This means that for representations of odd dimension, y takes integer values; while even-dimensional multiplets have half-integer hypercharge.In addition, the total number of n-tuplets containing a DM candidate equals the dimension of the representation n = 2t + 1 where t is the highest isospin weight. 2 1 It can be seen that the element n+1-i, in the multiplet with hypercharge -y, has the opposite charge of -(n+1)/2+i-y.So, the opposite hypercharge multiplet can be considered as the reversed order multiplet with elements having opposite electric charges.As an example compare the dark matter quadruplet χ ++ , χ + , χ 0 , χ − T with hypercharge 1/2 with the one χ + , χ 0 , χ − , χ −− T with y = −1/2.This becomes a useful property when the conjugate multiplet is defined in (5) as the later is proportional to the conjugate of the former.
2 For example a DM quadruplet with highest weight t = 3/2 could refer to four possible multiplets χ +++ , χ ++ , χ + , χ 0 T , The Standard Model is believed to be a self-consistent description of the physics up to the Plank scale M pl .As a result the gauge couplings need to remain perturbative up to that cut-off scale.Adding extra SU(2) multiplets will accelerate running of these marginal couplings to the non-perturbative regime which might lead into appearance of Landau pole (LP) before the Plank scale.Landau pole is thought to be associated with some new physics mechanisms that violate accidental symmetries of the SM.So we demand LP to be above M pl which will lead into an upper bound on dimensionality of the fermionic EWDM multiplet to n ≤ 5.This result holds true for renormalisation group equations solved up to two-loop level [18,29].
Due to different theoretical properties and phenomenological consequences, non-chiral dark matter is usually classified into real and complex representations.We intend to keep focus of this discussion on the pseudo-real models. 3
Pseudo-real Representation
For DM in pseudo-real representation of SU( 2)×U(1) y , the hypercharge is non-vanishing y = 0.In this case, all components of the multiplet including the neutral DM are Dirac fermions.
The SM is extended by adding the dark sector Lagrangian: where Λ is the mass scale, λ 0 and λ c are coefficients, and T a and T a = σ a /2 are SU(2) generators in the n-dimensional and fundamental representations respectively.The covariant derivative reads D µ = ∂ µ +iyg y B µ + ig w W a µ T a .The conjugate representation is defined as X C ≡ CX c where X c denotes the multiplet with charge conjugated fields.C is an anti-symmetric off-diagonal matrix with alternating ±1 entries, normalised so that it equals −iσ 2 in 2 dimensions i.e.H C = −iσ 2 H * .More precisely: So for the generic multiplet 2 the conjugate multiplet reads: Note that the scalar product of the adjoint vectors in the the second line, will preserve electric charge only for hyper-charge y = 1 2 , (c.f.equation 8).This also means that such an operator only exists for even-dimensional multiplets.It can also be seen that other terms in the Lagrangian respect a U(1) D symmetry of the form X → e iθ X where θ is an arbitrary real parameter.The mentioned effective operator enables coupling of two conjugated DM multiplets X * with Higgs boson, and therefore explicitly breaks this symmetry down to Z 2 under which dark matter particles are odd X → −X. 4
Mass Splitting
The effective operator proportional to λ c is responsible for mass splitting between the neutral and charged components at tree-level.After electroweak symmetry breaking, H → 0, (h + ν)/ √ 2) T , the mass splitting can be cast as: where q indicates the tree-level mass of the component ψ q , and m In addition, self energy corrections via coupling to the SM gauge bosons, at one loop order, induce radiative mass splitting between charged and neutral particles which takes the form [19]: q is the loop-induced mass of ψ q field, and radiative mass splitting ∆ is given by ∆ = α w m w sin 2 (θ w /2) ≈ 166 MeV [36]. Figure 1: Left 1a: Mass splitting between the dark matter candidate χ 0 and the singly charged states χ + 1 (in blue) and χ + 2 (in red) as a function of the non-renormalisable coupling λc/Λ for the complex quadruplet.The figure shows masssplitting for two values of the other non-renormalisable coefficient λ0/Λ.The curves corresponding to λ0/Λ = 10 −6 GeV −1 are in solid lines, while those at the minimum (λ0/Λ)min = 10 −8 GeV −1 (24) are plotted in dashed-dotted style.The light (dark) blue shaded regions are excluded at minimum (10 −6 GeV −1 ) value of λ0/Λ as the theory cannot be considered as a dark mater model ∆ (1) + < 0 according to 14 (15).Right 1b: The right panel illustrates the mass splitting of these fields ∆ + (red) with respect to the effective coupling λ0/Λ for the same quadruplet.The solid lines show the result in case the other coupling has dλc/Λ = 1.1 × 10 −5 GeV −1 deviation from the mean value, and the dashed-dotted curves correspond to the average value of (λc/Λ)av = 1.2 × 10 −5 GeV −1 .In the light blue coloured region, the χ + 1 field becomes the lightest particle ∆ + < 0 as shown in (16), when λc deviates from the mean, beyond the range specified in (14).The grey shaded area is ruled out by DD experiments due to inelastic scattering of DM off target nucleus via exchange of Z boson at tree-level (24).Note that in both figures, the blue coloured regions still remain a valid beyond the Standard Model (BSM) proposal.
Interactions of DM with gauge fields are encoded in the kinetic term of the Lagrangian 3. The covariant derivative after EWSB takes the form Due to DM vector coupling to the neutral weak gauge field at tree-level, it can scatter coherently off the nuclei by Z-boson exchange [37].The resultant cross-section is so large that the pseudo-real electroweak dark matter would be excluded, based on current experimental data [? ].
However, complex dark matter scenario can be resurrected if DM-Z boson interaction at tree level could be avoided, by adding a new mechanism.This can be done through introducing an effective operator which violates the would-be U(1) D symmetry and allows for splitting of the Dirac neutral state, as will be explained.
After EWSB, the non-renormalisable operator proportional to λ 0 reduces to: Vacuum expectation value (VEV) of Higgs field makes an additional contribution to the mass matrix of the neutral components, and their mass term will change to: So, the the scalar product of the adjoint vectors 8 splits the masses of the neutral components through the δ 0 = n ν 2 λ 0 /8Λ term.As will be discussed |δ 0 | ≪ m, and DM mass eigenstates up to zeroth order in O(ℑ δ 0 /m) can be written as: with masses m 0 = m (t) 0 + 2ℜ δ 0 , and m 0 = m (t) 0 − 2ℜ δ 0 respectively.The mass eigenstates χ 0 and χ 0 are Majorana fermions into which pseudo-Dirac state ψ 0 is split.Without loss of generality, we take the imaginary field χ 0 to be the lightest DM candidate.Equation 8shows that λ 0 term also introduces an off-diagonal contribution proportional to δ q ≡ (−1) q ν 2 n 2 − 4q 2 (λ 0 /8Λ) causing mixing of the charged states ψ q and (ψ −q ) c . 5 Then using the relations ψ −q ψ −q = (ψ −q ) c (ψ −q ) c and (ψ q ) c ψ −q = (ψ −q ) c ψ q , the mass term for the charged particles with q > 0 can be cast as: After diagonalisation of the mass matrix, the following new eigenstates are obtained: χ q 1 = −e i λ0/2 s q ψ q + e −i λ0/2 c q (ψ −q ) c , χ q 2 = e i λ0/2 c q ψ q + e −i λ0/2 s q (ψ −q ) c .(12) where the hat in the exponent should be understood as argument λ 0 ≡ arg λ 0 .With corresponding masses: here m q is the mass of the charged field χ q , and without loss of assumption, we take m (1) q .We also define m q ≡ m (t) 0 + ∆ R q , and d q ≡ 2yc −1 w ∆ − (λ c /4Λ)ν 2 q.The radiative mass splitting takes the form ∆ R q ≡ m q −m 0 = q 2 ∆ [18].The splitting between the neutral and χ ± components is given by ∆ = α w m w sin 2 (θ w /2) ≈ 166 MeV [36].
The dark matter candidate χ 0 needs to be the lightest member of the multiplet m q > m 0 .This condition does not always hold true for the lighter charged field χ q 1 .Demanding m q to be less than DM mass i.e. 4|δ q | 2 + d 2 q < ∆ R q + 2ℜδ 0 2 , further imposes constraints on the value of the non-renormalisable coupling constants λ c and λ 0 .
It turns out that if λ c stays within the following range, then there will be no limit on the value of the neutral coupling λ 0 : with an average of (λ c /Λ) av = 4∆/c w ν 2 ≈ 1.2 × 10 −5 GeV −1 .This requirement further sets the limit for the scale of new physics that induces the charged-neutral splitting to Λ/λ c ∼ 10 5 GeV.
Figure 1a illustrates the mass splitting of the charged states χ + 1 and χ + 2 as a function of the non-renormalisable coupling λ c , for pseudo-real quartet.Note that the range of acceptable values of the charged coupling d( increases as the other coupling λ 0 gets stronger.It can be seen that the mass difference between the states ∆ (2) + increases as the coupling deviates from the mean value.It has a minimum of 4|δ + | 2 at the average λ av c , and the particles with the same charge q become degenerate m (1) (14), then there will be a lower bound on the value of the neutral coupling: The left panel 1b of the figure shows changes in the mass splitting of theses states with respect to the coupling λ 0 .It can be observed that difference between the masses of particles increases with coupling strength going up.The minimum allowed value of λ 0 grows as the other coupling λ c stays away from the average.
The highest weight particle χ n/2 is unique and has a mass of: The requirement for χ n/2 to be lighter than DM candidate χ 0 demands: Figure 2: Left 2a: Mass splitting of the highest weight state χ + with respect to λc/Λ coupling in the pseudo-real doublet model.The neutral coefficient is fixed to its minimum value of (λ0/Λ)min = 10 −8 GeV −1 and also λ0/Λ = 10 −5 GeV −1 for solid and dashed-dotted lines respectively.The light (dark) blue region is excluded at minimum (10 −5 GeV −1 ) due to the requirement (18) for DM χ 0 to be the lightest particle.Right 2b: Mass difference of the χ + field as a function of λ0/Λ coupling.The dashed-dotted line correspond to the average strength of (λc/Λ)av = 1.2 × 10 −5 GeV −1 , and solid line corresponds to the deviation dλc/Λ = 1.2 × 10 −5 GeV −1 .The grey patch is ruled out as a result of the direct detection constraint (24).The light blue area is excluded since χ + becomes heavier than DM candidate, according to (18).
This imposes an upper bound on the strength of λ c .If the charged coupling is less than there is no restriction on the value of the other coefficient λ 0 .In the event that λ c exceeds this limit, the neutral coupling λ 0 will be bound from below.
In all the complex scenarios except the doublet case, the bounds on the non-renormalisable couplings from the lighter charged state (15), (16) prove to be stronger than that of the highest weight (18).So in practice, the conditions above is only applicable to C2 model.
Figure 2a plots the mass splitting of the χ + state as a function of the charged coupling in the pseudo-real doublet.It can be seen that the maximum allowed value of λ c increases with strength of the other coupling λ 0 .
The right panel of this figure 2b illustrates the changes in the mass splitting of the highest weight field with respect to the neutral coupling λ 0 .The lower bound on λ 0 goes up as λ c gets further from its mean value.
EWDM-Nucleon Scattering
The scattering event rate in direct detection experiment is very sensitive to the actual form of coupling between dark matter particle and parton constituents of the hadron.At the beginning of this section, we provide a brief review on inelastic scattering process, and the restrictions imposed on the coupling strength.Then, we consider the full set of relativistic operators that contribute to the effective interactions of EWDM with quarks as well as gluons at the leading order.Finally, the matrix elements of these operators are evaluated at different scales, and the spin-independent scattering cross-section will be discussed.
Inelastic Scattering
Interactions of the neutral dark particles are obtained from expansion of the gauge-fermion kinetic term in 3: where g z ≡ g w /c w .The full Lagrangian of the dark sector is lengthy and is presented in Appendix A, together with the Feynman rules for couplings to EW gauges and scalar Higgs.
It can be seen that dark matter particle χ 0 has no interaction with Z-boson, so elastic scattering off the nuclei is forbidden at tree level.Elastic scattering still can occur via non-renormalisable DM-Higgs interactions (72) or through different loop-induced processes.The relevant cross-section is either suppressed by the mass scale of the theory or loop-suppressed, and therefore expected to be small.In the remainder of this paper, we perform an explicit evaluation of the Feynman diagrams, and exact calculation of the cross-section to qualitatively and quantitatively predict the direct search results for these processes.
However, there exists a coupling between χ 0 and χ 0 via Z-boson exchange which leaves a possibility for DM-nucleon inelastic scattering χ 0 N → χ 0 N that would be excluded by many orders of magnitude by direct search experiments. 6nelastic scattering of dark matter off nucleus is an endothermic process in which DM kinetic energy loss )/2 is converted into the mass difference of the outgoing odd particle ∆ 0 and recoil energy of the target nucleus E R : One needs to Impose momentum conservation 2m which is a difficult to measure quantity.Solving for the unknown parameter, outgoing DM velocity v ′ χ we obtain: where θ is the scattering angle.For an endothermic reaction, there exist a threshold energy K (th) χ below which no solution is available for the equation above [38,39], and thus scattering is not possible: So keeping the kinetic energy of the incoming particle less than the threshold χ , dark matter χ 0 cannot up-scatter to the excited state χ 0 .In the high mass regime, we finally arrive at the condition: It means if the mass splitting ∆ 0 is sufficiently large, then the inelastic nucleonic scattering will be kinematically forbidden.Therefore, by setting ∆ 0 > O(100) keV the tree-level coupling to nuclei is completely suppressed.
In addition, this restricts the value of the neutral pseudo-Dirac splitting coefficient to In other words, the scale of the new physics responsible for breaking U(1) D symmetry is set to Λ/λ 0 ∼ 10 8 GeV.
So, in the following discussion, we focus our study on elastic direct detection processes.
Effective Interactions
The effective Lagrangian describing EWDM -nucleon scattering at parton level is composed of two parts: L R which is constructed only from the renormalisable interactions of the UV theory, while L NR allows for non-renormalisable couplings of DM to the SM Higgs (72): The low-energy Lagrangian of the electroweak dark matter composed of renormalisable couplings, has already been studied in the literature.We provide a review in the appendix C to calibrate our results with previous publication.
The effective Lagrangian which includes non-renormalisable interactions, containing DM bilinear operators of up to dimension-4, in leading order, is given by:7 f Tq
where q and G a µν denote quark field and gluon field strength tensor.
As will be explained, the dominant loop momenta in the Feynman diagrams of the full theory are generally around the weak scale.So, we set the UV scale at µ UV ≈ m z , and therefore consider an effective theory with n f = 5 active quarks lighter than this characteristic mass m q < µ UV , that are d, u, s, c, and b.
Since the quark scalar operator breaks the chiral symmetry of QCD, it is suppressed by quark mass m q .Electroweak dark matter scatters off gluons at loop level, therefore the effective interactions are suppressed by strong structure constant α s ≡ g 2 s /4π.As will be discussed later, in order to make the quark and gluon Wilson coefficients C q and C g comparable, we multiply them by factors of m q and α s /π respectively. 8ll the operators in the Lagrangian above, are scalar type, and thus only generate Spin-Independent (SI) interactions.
Matrix Elements
Nucleon indeed obtains the bulk of its mass M N through spontaneous chiral symmetry breaking, even in the limit of vanishing quark masses.However, a small fraction is attributed to the quark σ-terms which cause explicit breaking of the chiral symmetry.The contribution of valance and sea quarks to the nucleon mass is parametrised by the quark mass fraction: where the matrix element qq ≡ N |qq|N , evaluates the scalar operator on the nucleon state |N .For light quarks, this quantity can be determined experimentally from σ terms of the nucleon-pion scattering (for up and down) and kaon-nucleon scattering (for strange).The pion-nucleon sigma term σ πN = (m u + m d ) ūu + dd /2, can be read off π − N scattering amplitude at Cheng-Dashen point [41] using dispersive analysis [42,43].
Alternatively, we can use chiral perturbation theory (χPT) where σ πN depends on a set of low energy constants which can be determined by fitting to the experimental π−N scattering data [44].Different extensions of this theory to the baryonic sector have been studied for this purpose, including heavy baryon χPT [45,46], infrared BχPT [47][48][49], and covariant BχPT with extended-on-mass-shell scheme [50].
Determination of strangeness content of the nucleon is theoretically more involved [51].One can input the value of σ πN to BχPT, and use the relationship between SU(3) flavour violation parameter and strangeness fraction to obtain f T s [52,53].
For heavy quarks, neither a theoretical framework nor any phenomenological experiment exist, and hence a need for lattice QCD non-perturbative simulations.Lattice calculations are performed using two different techniques [54].
In the indirect method, the quark matrix element is obtained from variation of the nucleon mass with respect to the quark mass [55,56] qq = ∂M N /∂m q through the Feynman-Hellmann theorem [57,58].
In an alternative way, one can obtain the scalar matrix element from the ratio of the three-point to two-point functions of the nucleon.The 3-point function arises from two types of diagrams.The connected part contains the propagator of the valence up and down quarks.On the other hand, in the disconnected diagram, the sea quarks form a vacuum blob.The σ πN receives contribution from both parts, but for the sigma term of other quarks, only the disconnected piece is present [59][60][61].
In this work, we use the quark mass fractions listed in table 1 which are computed using lattice QCD simulation by [40].It can be observed that heavier flavours have larger fraction of the nucleon mass.
Finally, worth noting is that scalar quark operator is independent of the scale to all orders (∂/∂µ) m q qq = 0.
QCD symmetric and gauge-invariant energy-momentum tensor (EMT) can be derived from the Noether current associated with space-time translation invariance [62]: This tensor gives rise to the same physical momentum and generators of the Lorentz symmetry as the canonical one does [63].It can be shown that Θ µν is identified with derivative of the dilatation current which corresponds to the scaling transformation [64].
Like any rank-two tensor, energy-momentum tensor can be decomposed into traceless and trace parts, in d dimensions as [65,66]: where the symmetric traceless parts [67]: are known as spin-2 twist-2 operators for quark and gluon respectively.Here, twist is defined as the difference between mass-dimension and spin.The QCD covariant derivative reads D µ = ∂ µ + ig s A µ , where A µ ≡ A a µ γ a /2 with γ a meant to be Gell-Mann matrices here. 9 Using equation of motion, the classical trace of EMT simplifies to Θ µ µ = q m q qq, which vanishes in the limit of zero quark mass.This, in fact, indicates that dilatation current is conserved and QCD is therefore scale invariant at classical level.
However, scale symmetry is broken due to running of the coupling constant which is an intrinsic quantum effect.Working in d = 4 − ǫ dimensions, the quantised Θ µ µ differs from the classical version by a divergent term [69] that should be renormalised −(ǫ/4)G µν G µν = (β/4α s )(G µν G µν ) R − γ m q m q qq.This is known as trace anomaly and is composed of contributions proportional to beta-function β, and mass anomalous dimension γ m [70].
Therefore, the renormalised trace of the full energy momentum can be expressed as [71]: where the beta function and quark mass anomalous dimension to leading order are given by: with N c = 3 being the number of colours.
Either by applying the viral theorem in a stationary state [72,73], or using Poincare invariance in fourmomentum eigenstates [74], one can show that matrix element of the trace of energy-momentum tensor generates the nucleon mass Θ µ µ = m N [75].Taking the expectation value of the operator expression (31) within the nucleon state, for n f = 3 flavours, and to the leading order O(α 0 s ), we get: It can be seen that the terms on the left hand side has order of O(m N ), hence factors of m q and α s /π for quark and gluon scalar operators in the effective Lagrangian (26). 9In general one can define spin-n twist-2 operators for quark and gluon by symmetrised traceless relations [68]: where A {µ B ν} ≡ (AµBν + Aν Bµ) /2 for arbitrary operators A and B. Note that spin-n twist-2 gluon operator equals a total derivative for odd n.
Moreover, as mentioned before, the quark scalar operator m q qq is renormalisation group invariant.The nucleon mass m N is a physical quantity and hence scale independent.As a consequence, only using the current choice of factors, the gluon operator (α s /π) G a µν G aµν can be scale-invariant to the leading order of O(α 0 s ).Therefore, the Gluon matrix element can be expressed as: where f Tg ≡ 1 − q=u,d,s f Tq .By differencing the trace anomaly expression (31), the matrix element of the heavy quark Q can be shown to induce scalar gluon interactions of the form [76]: which is independent of the heavy flavour mass.This is equivalent to closing the heavy quark external loops in the scattering diagrams, and replacing them by one-loop coupling to gluons.
In case of charm quark, we can see that the theoretical prediction of ( 35) is close to the numerical value shown in table 1, which is computed in by lattice QCD. 10
Scattering Cross section
The non-renormalisable interactions only contribute to the scalar amplitude at the nucleon level: The effective non-relativistic amplitude for dark matter-nucleon scattering is derived from evaluation of the effective Lagrangian between initial and final nucleonic states: As discussed, the scalar matrix elements are evaluated at hadronic scale where only the three light quarks d, u and b are active.
Pseudo-real EWDM scattering off nuclei proceeds through two kinds of interactions.f R N amplitude only contains renormalisable couplings of DM and the SM, whereas f NR N includes higher dimensional coupling to Higgs boson: The elastic cross-section for spin-independent interactions of EWDM and nucleon N can be written as [78]: where m χN = M χ m N /(M χ + m N ) is dark matter -nucleon reduced mass. 11
Wilson Coefficients
In what follows, the Wilson coefficients of EWDM -nucleon scattering will be computed in leading order of nonrenormalisable couplings λ 0 and λ c .Since the relic has a very slow speed, only a small fraction of the incident DM momentum is transferred to the target nuclei.Therefore we also assume a zero momentum transfer.
In order to fix the value of Wilson coefficients, we use Feynman diagram matching.It involves computing the scattering amplitude corresponding to each diagram in the full theory, and then comparing them with the same amplitude in the effective Lagrangian at UV scale.We find the coefficients for both effective operators of quark and gluon by integrating out the mediators in the full EWDM-quark and EWDM-gluon scattering processes.
Our understanding of the nuclear physics matrix element of the scalar operator is restricted to the hadronic scale.However, the Wilson coefficients of the quark and gluon scalar interactions are scale independent to leading order in the strong coupling constant, so there is no need to evolve them down to µ had using the renormalisation group equations (RGE's).Notwithstanding, as we cross the heavy flavour masses when integrating them out, the threshold corrections should be included in the gluon Wilson coefficient.The quark and gluon coefficients are matched by making a comparison between the two effective theories at ultraviolet and nucleon scales at O(α 0 s ) [79]: q=c,b,t Electroweak DM mixes with the SM Higgs through dimension-six operators introduced in (3).Since these interactions include both cubic and quartic DM-Higgs couplings, we need to consider the leading order diagrams which contain these two types of coupling.
EWDM -Quark Scattering
For cubic interactions (78), dark matter couples to quarks at tree level through exchange of Higgs boson (Fig. 3a).After integrating the scalar mediator out, the effective coefficient is derived as: where we have defined the linear combination of the non-renormalisable constants as: The effective interaction of EWDM with quarks induced by quartic Higgs coupling is generated at one loop level in leading order as depicted in figure 3b.This diagram gives rise to the effective coefficient: where k denotes momentum of quark, and the vertex C(χ 0 , χ 0 , h, h) is defined in (79).The two point functions B (2,1) 0 and B (2,1) 1 are evaluated in (86a) and (97).After performing the loop integration, we arrive at the Wilson coefficient: where α w ≡ g 2 w /4π is the weak structure constant, h ≡ (m q /m h ) 2 , and the quark mass function is defined as: where K-function is defined in (93).Figure 4 illustrates the changes in the absolute value of the mass function with respect to the scaling variable h.It also shows the data-points for the masses of different flavours.It can be noticed that in the limit of vanishing quark mass h → 0 which corresponds to the active light flavours, g q approaches zero.Therefore C h4 q cannot make a significant contribution to the total EWDM-nucleon scattering amplitude.
EWDM -Gluon Scattering
Since by definition, electroweak DM is colourless under SU(3) c , it cannot scatter off gluon fields at tree-level.The interactions of dark matter with gluon is therefore loop-induced.
Although gluon loop-level interactions generate a factor of α s , as discussed, due to order counting, it will be absorbed into the definition of the gluon operator (α s /π) G µν G µν .Consequently, these loop diagrams are not only suppressed, but can even dominate over the DM -quark reactions.
In general, the loop momentum is characterised by the masses of the virtual particles running in the loop as well as the external momenta.Accordingly, we classify the DM-gluon scattering diagrams into two types.If the momentum scale is dominated by mass of heavy particles like EWDM, gauge vectors and Higgs, then the process is referred to as short distance.On the other hand, long distance contributions arise from loop integrals whose momenta are governed by the quark masses [80].
Short distance diagrams should be evaluated explicitly using perturbative quantum chromodynamics machinery.This is also true of the long distance integrals involving the top quark.However, when light quarks i.e. up, down and strange run in the long-distance loops, both mass and momentum are below the QCD scale, so the process is characterised by the confinement dynamics.This contribution is already included in the quark scalar matrix element N |qq|N when computing the quark mass fraction f T q .Charm and bottom flavours are also close to Λ QCD , so the strong coupling is still large and non-perturbative effects are significant at their mass scale.Therefore, the contributions from softer quarks d, u, s, c and b should not be incorporated in the long-distance gluon Wilson coefficient C g , otherwise we would count them twice in the calculations [81].
Note that C g is of leading order of strong structure constant O(α 0 s ) in power counting, despite loopsuppression of the DM-gluon scattering [82].As discussed, this is due to the factor of α s /π for the gluon scalar operator in the effective Lagrangian.
Computation of the effective interactions of gluonic operators is a tedious task that requires constructing the tensor structure of the gluon field strength.When the field is weak which means the external momentum is much smaller than the characteristic scale of the process, then the gluon field strength can be treated as background field.In this case, it is more convenient to choose the Fock-Schwinger gauge [83,84]: In this gauge, the Wilson coefficient for gluon interactions can be easily extracted, since the coloured propagators are already defined in terms of the background field strength tensors.The origin is singled out in relation (47), and the gauge condition is not translational invariant.Accordingly, one should be careful when computing gauge dependent quantities like propagators, because for example, forward S(x, 0) and backward S(0, x) propagation have different forms, as will be shown explicitly.However, the translation symmetry will be restored in gauge-independent physical quantities like correlation functions [85].
The most important property is that the gauge field can be directly replaced by the field strength tensor [86]: where the higher order derivative terms are irrelevant and therefore neglected.As a result, the gluon field strength bilinear G a αµ G a βν will appear in the amplitude of the effective scalar interactions with EWDM.The gluon scalar operator can be easily projected out of the field strength bilinear, using the identity G a αµ G a βν = G a ρσ G aρσ (η αβ η − µν − η αν η βµ )/12 + . . ., where other terms in ellipsis are not relevant and thus omitted.
The DM-gluon interactions involving the cubic coupling are generated by the triangle-loop diagram of figure 5a.To compute the scattering amplitude in this gauge, we need the EWDM two-point function in the gluon background.This requires computing the contribution of Higgs tadpole Γ q in the gluon external field.
Clearly the triangle loop in this diagram where virtual quarks are circulating will only give rise to a longdistance integral.
Higgs tadpole has the form: q (ℓ) The scalar one-point functions A (3) 0 and A (4) 0 are defined in (80a).The second order correction to the propagator of a coloured fermion when two external gluons are inserted reads [87]: is the usual fermionic Feynman propagator in the vacuum.Since the gluon field contains derivative of Dirac delta function, the integration over the background field momenta can be carried out trivially.In practice, it reduces to differentiation of the propagators located after the vertex.
Using the relation C h3 g = i C(χ 0 , χ 0 , h) m t Γ t /(ν m 2 h ), with the vertex factor C(χ 0 , χ 0 , h) defined in (79), the amplitude of Feynman diagram 5a gives rise to the following coefficient: It can be checked that DM -gluon coupling arising from the hard triangle loop is related to the top quark contribution through C h3 g = −C h3 t /12.This behaviour can be explained by heavy quark expansion of the trace anomaly of the energy-momentum tensor [88].In short distances of order 1/m t , one can expand the virtual top state in powers of m −2 t .To the first order in α s , top scalar operator converts to the gluon operator as m t tt → (−α s /12π) G a µν G µνa [89].
Now, we move on to the EWDM-gluon interactions that include the quartic non-renormalisable Higgs coupling.As shown in figure 5, the diagrams are generated at two-loop level.
At first step, one needs to evaluate the quantum corrections to the Higgs self energy Π q induced by virtual quarks, in the gluon external field.
When each quark propagator emits one gluon which is the case for diagram 5b (right), the self energy can be written as: q (ℓ + q) S (1) (2,2) 0 The loop integrals B (1,2) 0 and B (2,2) 0 are explicitly defined in (86b) and (87b).The first order correction to the fermionic propagator which corresponds to one gluon field in the background, is given as: Due to violation of the translation invariance in the gauge condition (47), propagation of an antiparticle in the opposite direction has a different form [85]: It can be seen that the quark masses and the external momentum q contribute on equal footing to the loop momenta in (52).Since the dominant value of the external momentum is at Higgs mass scale, the quark box receives short-distance contributions, and therefore all quark flavours should be taken into account.
Executing the integrals, we finally find: where y ≡ (m q /q) 2 , and K-function is defined in (93).When the momentum is smaller than twice quark mass y > 1/4, one can use the identity (93) to avoid root of negative numbers and logarithm with complex arguments.
Figure 6a depicts the behaviour of the normalised Higgs self-energy Π q ≡ (α w α s /4m 2 w )G a µν G aµν Π q , with respect to y.The data-points corresponds to the values for different quark masses at the dominant momentum q ≈ m h .The two-point function vanishes when the momentum is either much larger y → 0, or much smaller y → ∞ than the quark mass.It is finite at y = 1/4 if approaching from below, but explodes when taking the limit from right.
In case two gluon fields are attached with the same propagator as in diagram 5b (left), the self energy is properly obtained as: q (ℓ) (1,3) 0 The loop integrals B (1,3) 0 and B (1,4) 0 are evaluated in (88b) and (89b).The fermionic propagator for the antiparticle emitting two gluons is given by: The dominant contribution to the box integral ( 56) is provided by the quark mass.As discussed before, when computing such long-distance diagrams, due to the implicit infrared cut-off at Λ QCD in the loop momenta, only top quark should be considered.
Carrying out the integrals, we arrive at the following expression: For y < 1/4, using the identity (93), the result can be expressed in terms of the inverse cotangent.Figure 6: (left 6a) The normalised two-point function of Higgs boson in two gluon background fields with respect to y = (mq/q) 2 with q being the external momentum.The blue curve corresponds to i Π (1) q correlation function, where each internal quark propagator emits one gluon (c.f.figure 5b-right).While the red curve represents i Π (2) q function, in which two gluons are attached to the same internal quark propagator (c.f.figure 5b-left).The data-points present the value of the two-point functions at the mass of up (light blue), down (dark blue), strange (light green), charm (dark green), bottom (red) and top (black) quarks.(right 6b) Changes in the gluon mass functions in dependence of h = (mq/m h ) 2 .The blue curves indicate the analytical and numerical gluon mass functions g (q) g and I (q) g where softer i.e. up, down, strange, charm and bottom flavours run in the loops.Whereas when top quark is the only virtual particle, the relevant mass functions g As illustrated in figure 6a, the Higgs two-point function in two gluon background field Π (2) q has the same asymptotic behaviour as that of Π q .It has only a one-sided limit at y = 1/4 when approaching from left.It is also in clear view that we only need to take into account the input from the three heaviest quarks c, b and t, in the calculations, and can safely ignore other lighter flavours.
The total Higgs self energy therefore can be written as: q + Π t (59) where Π t ≡ Π (1) t .At this stage, we need to evaluate the second loop using the Higgs correlator Π h in order to find the Wilson coefficient of the gluon operator.
Using the Higgs self energies we can finally compute the effective coefficient as: The first two terms are contributions of top quark, and the last two terms are the input from charm and bottom.
The numeric integral g is defined as: where l ≡ (q (E) /M h ) 2 , and Euclidean momentum is related to the actual loop momentum through a Wick rotation q 0 = iq (E) 0 .The typical momentum for this integral is around the Higgs mass q ≈ M h .(Right panel 7b): Spin independent scattering cross-section off nuclei for doublet (green) and quartet (black) complex representations as a function of dark matter mass.The pairs of curves are plotted at three indicative values of the coupling constant that are λ = −10 -5 GeV -1 (dashed), 10 -6 GeV -1 (solid), and 10 -4 GeV -1 (dash-dotted).
g (t)
g has the analytical form of: In addition, I (q) g is expressed by the following integral: This loop integral is also dominated by momenta at Higgs mass scale q ≈ M h .The mass function g g is given by: Figure 6b illustrates changes of the mass functions g , I g and g g , involved in gluonic interactions through quartic coupling.The lighter flavour functions I (q) g and g (q) g vanish in the massless quark limit, and the top flavour ones I (t) g and g (t) g approach infinity in the heavy quark region.We can also safely ignore the non-analytical light quark term g .It is noticed that the computational I g analytical g g terms in the effective coupling (61) has the same order of magnitude, but opposite signs.This leads to an accidental cancellation which deceases the contribution of C h4 g coefficient to the effective amplitude by an order of magnitude.
Direct Detection Constraints
Having derived all the required theoretical ingredients, in this section we numerically compute the EWDMnucleon SI cross section, and compare our results with the latest direct search data and future sensitivities.
In the previous section, it was found that the effective amplitude for all the main channels have positive sign, therefore all the diagrams containing the non-renormalisable couplings contribute constructively to the total cross-section through (38).
At leading order, all the effective couplings induced by higher dimensional operators have a factor of λ.We can therefore define the critical coupling constant λ cr where the non-renormalisable amplitude equates the renormalisable one f NR N = f R N , so that: Figure 7a presents the critical coupling for the two pseudo-real EWDM representations.It can be observed that λ cr is almost independent of DM mass, particularly in TeV mass region.In addition, the value of the critical parameter in quartet model is about an order of magnitude above that of doublet dark matter.That is due to the factor of [n 2 − (4y 2 + 1)]/8 in the charged weak induced amplitude (107) and lack of the light charged EWDM mediated diagrams in C2 model.
In fact the behaviour of the pseudo-real models of the electroweak dark matter crucially depends on the strength of the coupling λ.In order to further study this, in figure 7b, we compare the performance of the complex models with three indicatory values of the coupling.
Below the critical coupling, the direct search observables of the model behave similar to those of the renormalised theory, as shown for λ = 10 −6 GeV −1 case.Since the coupling is less than the minimum possible critical value λ < λ C2 cr , there are distinct spectra for different representations.Obviously this can happen if both λ 0 and λ c are small.In addition, if the relationship between the two couplings is in direct proportion that is λ c /λ 0 = 2n, then effect of the higher dimensional operators cancel out each other.In such cases, the effective theory will produce the same signal for direct detection experiments as the renormalisable low-energy Lagrangian (101).
In the event that 2n ℜλ 0 > λ c , the coupling constant will take negative values λ < 0. It causes a measurable destructive interference between the renormalisable and non-renormalisable amplitudes, which leads to the suppression of the total SI cross-section.This is illustrated in the figure, for λ = −10 −5 GeV −1 curve.At the critical point of λ = −λ cr , the scattering amplitude will totally vanish in the leading order.
On the contrary, above λ cr , the scattering amplitude is governed by the non-renormalisable terms.Since the effective coupling induced by the higher dimensional operators is independent of the representation, the cross-section curves of all pseudo-real models converge.This behaviour can be verified for the indicative value of λ = 10 −4 GeV −1 .As the coupling to higher dimensional operators strengthens, the non-renormalisable couplings enhance the scalar effective amplitude, and thus the total scattering cross-section of pseudo-real modules increases significantly.
In general, the SI cross-section has small dependence on EWDM mass, especially above TeV scale.That is due to the fact that the non-renormalisable effective couplings are totally independent of dark matter mass, and the renormalisable contribution becomes mass independent in the heavy dark matter limit (c.f.section C.2).
The solid lines illustrate the cross-section curves of the real theories as well as the complex models for λ = 10 −6 GeV −1 which is an indicative value for the coupling strength being below the criticality.
In this region, the predicted SI cross-section is far below the present direct detection bound.The future DARWIN experiment might fully probe the multiplets of dimension n > 3 in TeV mass scale, although the higher mass range will remain unconstrained.
At very small cross-sections the potential DM signal would be saturated by the background atmospheric, solar, and diffuse supernova neutrinos colliding with target nuclei [94].The discovery limit is defined as the crosssection where there is 90% probability that experiment can detect the true DM with a minimum significance of 3-σ [95].We refer to this lower limit as neutrino floor which is shown in figure 8 as the border of the yellow shaded area [96].Within this parameter region, it would be difficult to discover dark matter events.
It can be seen that, below the clerical value λ cr , scattering cross-section enhances with the dimension of representation.The reason is that W -boson mediated renormalisable interactions are proportional to [n 2 − (4y 2 + 1)]/8 (c.f. ( 107),( 110)).This factor increases when going either from a complex representation Cn to a higher dimensional real model R(n + 1) or from a real n-tuplet Rn to a larger complex multiplet C(n + 1).The Z-boson induced interactions rise in proportion to y 2 /4; however, the effective EWDM-Z coupling is comparatively smaller than W contribution.
In contrast, for λ = 10 −4 GeV −1 , the coupling is above the maximum critical strength λ > λ C4 cr .As discussed, the non-renormalisable operators, in this case, provide a constructive contribution to the scattering amplitude which could increase the total SI cross-section up to orders of magnitude.Therefore, for high values of non-renormalisable coupling, the pseudo-real EWDM can reach the detection limit, and produce signals that potentially could be observed by direct detection experiments.
The pseudo-real doublet has the smallest dimension of the SU(2) representations, and misses the light charged component.Consequently the scattering cross-section arising from renormalisable interactions falls below the neutrino background limit for this model.Nevertheless, the coupling to Higgs boson via non-renormalisable operators can increase the scattering rate to the current direct detection energy thresholds.As a result future experiments would be promising to detect the doublet EWDM scenario.The spin-independent scattering cross-sections are plotted in blue for the real triplet, navy for the real quintet, green for the complex doublet, and black which corresponds to the complex quartet.The predicted curves for pseudo-real models are presented at two indicatory values of the coupling constant λ = 10 −6 GeV −1 (solid) and λ = 10 −4 GeV −1 (dash-dotted).The overlaid lines from top to bottom represent the experimental upper bounds from XENON1T (2018) [90], PandaX-4T (2021) [91] and LUX-ZEPLIN (LZ) (2022) [92].The red dash line indicates the projected sensitivity of DARWIN future experiment [93].Where the predicted cross-section curves enter the shaded areas, the corresponding mass values are excluded.The upper boundary of the yellow shaded area which is labelled Neutrino floor corresponds to discovery limit for dark matter [96].If scattering cross-section drops to this region, it would be unlikely to detect EWDM particle due to neutrino background.
Parameter Space
In this section, we update the observational bounds on the free parameters of the electroweak theory of DM, to include the constraints from both direct detection and indirect searches.Figure 9 illustrates the valid values of DM mass for the real models, as well as the neutral coupling -mass λ 0 − m χ plane for the complex representations.The vertical shaded patches are excluded by the three gammaray observations that are inner galaxy, dwarf satellites and photon lines.In addition, the top horizontal areas in the pseudo-real representations are disfavoured by elastic direct detection results, while the horizontal regions at the bottom are excluded due to inelastic scattering mediated by Z-boson (24).
Since nucleus recoiling constraints are weakened above 100 GeV scale, direct detection data favours TeV electroweak dark matter.Real models cannot be probed by present DD experiments, as their scattering amplitude does not receive any contribution from the non-renormalisable operators.
As discussed, due to the effective interactions with Higgs boson induced by higher dimensional operators, the scattering amplitude can be significantly enhanced in the pseudo-real dark matter representations.So, the direct search data will set an upper-limit for the value of non-renormalisable couplings.It can be seen that regions of parameter space above λ 0 ≈ 10 −4 GeV −1 are not accessible for complex models, with constraints on the pseudo-real doublet being slightly weaker.
Restrictions exerted by gamma-ray probes are noticeably representation dependant.For the real triplet (R3), the mass interval including and below the thermal value is ruled out, although higher DM masses are still acceptable.It can be observed that the real quintuplet (R5) is severely bounded by indirect detection, leaving only a few narrow mass ranges available.The complex quadruplet (C4) allows for a wider favoured areas of the parameter space particularly at larger masses and stronger scalar coupling.The pseudo-real doublet (C2) is not considerably constrained by gamma-ray searches of EWDM.
To summarise, the multi-TeV and higher mass regions of the parameter space with an intermediate coupling strength are favoured by a combination of the direct and indirect experiments and could be explored further by the future probes.
Conclusion
The electroweak sector was extended by adding a fermion multiplet in non-chiral representation charged under the SU(2)×U(1) y gauge group, so that successful features of the Standard Model were minimally impacted.The pseudo-real models are excluded by direct detection results due to the tree-level coupling to nuclei through the exchange of Z boson.We generalised the pseudo-real EWDM framework by introducing dimension five operators which couple the multiplet to Higgs boson.This mechanism revitalised the pseudo-real theory as the mentioned effective term splits the pseudo-Dirac dark matter into two Majorana states, therefore eliminating the tree-level Z-mediated interaction with nuclei.While EWDM does not scatter off the nucleon at tree-level, it does through non-renormalisable couplings, in addition to loop diagrams.in this paper, we studied the direct detection of electroweak dark matter as a suitable method to probe effects of ultraviolet operators on the pseudo real models at TeV scale.We formulated the effective scalar theory of EWDM -nucleon scattering at parton level which is quite useful in evaluating the non-renormalisable couplings in a systematic way.
All the diagrams that make a contribution to the EWDM -nucleon scattering to the leading order of O(Λ −1 ) were taken into account.We evaluated the tree-level (one-loop) process that gives rise to the DM -quark collision through the non-renormalisable cubic (quartic) Higgs coupling, in addition to the one-loop (two-loop) processes generating interactions with gluons.
The effective amplitudes for the main scattering channels all have positive values which leads to a constructive contribution of the non-renormalisable interactions to the predicted detection rate.
In this framework, we studied the SI cross section of the electroweak DM arising from effective operators of the lowest dimension.The behaviour of the scattering cross-section across different pseudo-real models is determined by the (square of) parameter λ which is a linear combination of the two non-renormalisable couplings (42).
There exist a critical value for this parameter at which the amplitudes for renormalisable and non-renormalisable effective couplings are the same.Below λ cr , different EWDM representations have distinct spectra which lies well below the present direct detection constraints.However, as λ gets stronger than the criticality value, the spectral curves for complex modules tend to become degenerate.The DD cross-section keeps increasing up to orders of magnitude, and therefore will be bounded from above by the measurement data.
If the charged and neutral couplings are proportional to each other, then λ approaches zero.At this minimum limit, the effective theory discussed in this paper, will behave similar to the renormalisable model from the viewpoint of experimentally observable results.
The pseudo-real doublet as the least constraint EWDM model lies far below the neutrino floor.It is therefore difficult to detect this model in current direct detection searches.However, the non-renormalisable effects can raise the annihilation rate to the level that hopefully will be detectable for the next generation experiments.
Finally, we combined the astrophysical indirect search and direct detection bounds to find the allowed parameter space for all the electroweak dark matter representations.In general, pseudo-real theories support a wider range of viable values of the parameters.The higher mass intervals and intermediate coupling regime are overall favoured by a combination of constraints imposed by ID and DD data.
In conclusion, in case, the non-renormalisable coupling of DM to the SM through Higgs field is negligible, the resultant cross-section would stay below the current experimental constraints.However, if the new physics responsible for the mass splitting of the dark spices is closer to the electroweak scale, the effect of the higher dimensional operators will escalate the scattering amplitude.This would potentially open the window for detection of electroweak dark matter in the near future experiments.
acknowledgement
I would like to acknowledge Natsumi Nagata, Rouven Essig, and Thomas Hahn for their discussions and contributions to this project.
A Lagrangian and Feynman rules
The complete interaction Lagrangian describing the couplings of the pseudo-real multiplet with the SM particles can be decomposed as: Interactions of dark sector particles and gauge fields are derived from expansion of the kinetic term in the general Lagrangian 3. Electromagnetic interactions are given by the Lagrangian: Interactions mediated by the neutral weak boson can be written as: + n/2−1 q=1 2c 2 w q + cos φ q χ q 1 γ µ χ q 1 + 2c 2 w q − cos φ q χ q 2 γ µ χ q 2 + sin φ q (χ q 1 γ µ χ q 2 + χ q 2 γ µ χ q 1 ) The odd particles couple to the charged weak gauges through: In special case of the complex doublet, the charged weak boson interactions read: Non-renormalisable interactions with Higgs boson can be cast as: + (−1) q 2|δ q | sin φ q χ q 2 χ q 2 +(−1) q 2|δ q | cos φ q (χ q 1 χ q 2 + χ q 2 χ q 1 ) Using the interaction Lagrangian above, one can readily obtain the Feynman rules for the coupling of the complex EWDM with gauge fields and scalars of the SM.
Feynman rule for the coupling of the charged particles to photon can be written as: For vertices involving neutral electroweak Z boson, one has: Note that Z boson can change the flavour of the odd particles with the same electric charge.
Interactions of the fields with different charges are mediated by W − gauge boson through: For the comple doublet we have: Factorising out the root of unity C µ ≡ −iC ′ γ µ , Feynman rules for the conjugate charged field W + can be easily obtained from: Feynman rules for cubic non-renormalisable couplings to the Higgs boson has the form: Quartic non-renormalisable interactions of two Higgs bosons and two DM particles can be cast as:
B Loop integrals
In order to compute the Wilson coefficients in the full theory of EWDM, we need to introduce and evaluate new loop integrals.
B.1 One-point function
The scalar one-point function is given by [99]: In d = 4 dimensions, for n = 1, 2 the integral diverges.In dimensional regularisation scheme, the dimension is taken to be d − 4 ≪ 1.The regularisation scale µ fixes the dimension of measure at 4. Therefore, to O(d − 4) we get: where the divergence is contained in ∆ ≡ ln 4π − γ E + 2/(4 − d) with γ E being Euler-Mascheroni constant.The scalar one-point correlator can be generalised to two mass scales: In this paper, we are interested in the special case: The one-point function can be further generalised to include another mass scale: As a useful example, we explicitly calculate:
In the event of mass degeneracy in the propagators, we use the parameter x ≡ (m/p) 2 .The Kallen function reduces to κ(x) = √ 1 − 4x, and the simplified K-function is defined in (93).The following special two-point functions facilitate computation of the loop diagrams encountered in the theory of EWDM:
C Renormalisable Couplings
In this section, we revisit the computation of the electroweak dark matter -nucleon scattering cross-section through the usual renormalisable couplings.This allows us to calibrate our results with previous publication before studying the higher-dimensional operator extension of the electroweak theory of dark matter.
The SI effective interactions of EWDM with nucleon arising from a UV Lagrangian which only contains renormalisable operators, in leading order, can be written as: where the Lagrangian is decomposed into a term representing interactions with quarks in the first line, 12 and another term in the second line for coupling to gluon [80]. 13 The order of loop momenta in the scattering diagrams of the renormalisable theory is again around the weak scale [103].This agrees with the similar observation in case of the non-renormalisable extension where we took the factorisation scale at µ uv ≈ m z .
In addition to the scalar interactions, the renormalisable Lagrangian (101) apparently includes terms that couple to the quark twist-2 operators.
The matrix element of twist operators are defined through the parton-distribution functions (PDF's) of quarks and anti-quarks.PDF q (N ) (x) expresses the probability for a given parton species q to carry a portion x, called longitudinal fraction, of the total momentum of hadron N .Due to the symmetries linking proton and neutron, the distribution functions of neutron can be obtained from those of proton by interchanging the up and down quarks, e.g.
The nth moment of the PDF can be defined as: The probability distributions are normalised so that the first moments q (N ) 1 give the net number of the partons in the hadron: The second moments q (N ) 2 return the averaged longitudinal fraction: One can expand the operator product of two quark currents in the deep inelastic scattering process e + N → e + X, where X is a generic hadronic final state.The leading terms in OPE is given by twist-2 operators, and higher twist operators get suppressed by factors of inverse momentum transfer.By equating the dispersion integrals with contours lying in physical and unphysical regions of 1/x complex plane, we drive [104]: These equations are termed as moment sum rules, and relate the moments of the distribution functions to the matrix element of twist-2 operators.By including the radiative correction through operator rescaling, it can be shown that moments of PDF, and thus matrix elements of twist-2 operator are logarithmically scale dependent. 14 In the theory of EWDM, we are especially interested in the matrix element of spin-2 twist-2 operator which is obtained from the second moments of quark and anti-quark distributions: 12 It should be noted that suppression by factors of M −1 χ in the twist-two coupling in (101), will be cancelled out by the derivative of the DM field when computing the dark matter matrix element.So, the effective amplitude arising from twist-2 operators has the same order of magnitude as those for scalar operators. 13The twist-2 operators of gluon does not contribute in leading order of O(α 0 s ) [102]. 14This result obtained from operator renormalisation analysis agrees with Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) equations for evolution of the parton splitting functions [105][106][107].
Figure 10: One-loop diagrams that contribute to the effective interactions of EWDM with quarks at leading order through renormalisable couplings.For real representations, only Z vector can mediate the interaction; whereas in pseudo-real models, both neutral Z and charged W ± gauge bosons are allowed.
The PDF's used in calculation of the matrix element are available at different scales, so we only need to decide about the appropriate energy level to evaluate them.It turns out that due to asymptotic freedom, in lower energy regions the uncertainty arising from the perturbative expansion in α s increases [108].Therefore, we evaluate the matrix element at the factorisation scale µ uv = m z .
We use the second moments of PDF's for proton provided by [101] evaluated at µ uv = m z which are presented in table 2. It can be seen that 2 nd moments of valence quarks are an order of magnitude larger than those of the sea quarks.
C.1 Wilson Coefficients
The Wilson Coefficients and mass functions have been actually computed by several research groups, but for some reason there are some discrepancies in the literature.We review the calculations to assess the accuracy of the published works, and present our results in this section for comparison. 15 In contrast to the scalar bilinears, the twist operators are scale dependant even at leading order of α s [112].However, there is no need to re-evaluate the twist-2 coefficients at low energy region.Since, as discussed, we adopted PDF's at UV scale µ uv = m z where the effective Lagrangian is matched with the full theory.This choice would also avoid the errors such as quark mass threshold, resulting from evolving down the Wilson coefficients.
C.1.1 EWDM -Quark Scattering
Since no tree-level interactions are allowed via renormalisable couplings to gauge bosons, dark matter scatters off quarks at one loop level.As shown in figure 10, this happens through Higgs-exchange Penguin (10a) and Box (10b) diagrams [110]. 16 The effective couplings are model dependent, and for a general n-tuplet with hyper-charge Y are given by: 2 ) g ti (z) , i = 1, 2 .(107b) 15 Unless explicitly specified otherwise, our calculations agrees with the [109] results, including the factor of 2 correction for Z-boson contribution in [102].More explicitly, we reached to a faintly different Higgs mass function g h , as explained in the footnote 17.
Regarding [110], In the large DM mass limit mv/Mχ → 0, both scalar and twist-2 quark coefficients give the same results as our calculation, although the mass functions f II and f III are formulated in a slightly different form.About gluon contribution, the 2-loop scattering diagrams are presented, but not evaluated explicitly.All the Wilson coefficients in their work has the same sign, therefore the accidental cancellation between different terms (c.f.section C.2 for details) does not occur.The higher-dimensional operator is explained, however it is not taken into account in computation of the scattering amplitude.
About reference [111], the quark scalar c (2) U/D Wilson coefficients lead to the same outcome as our results, in the limit of heavy dark matter particle mv/Mχ → 0; whilst the evaluated integrals take a different form.This is also true of gluon coefficients c (0) g except the top flavour contribution to the Z-bosons mediated interactions.It is the very last term in expression for c (0) g and does not match our calculations of C S g , in the same limit.As to [18], about the scalar operator, although the coefficient induced by the Box diagram is different, our results matches for the Higgs-mediated penguin diagram, up to an overall constant.The expression for the twist-2 operator is also different and has an opposite sign.The contribution from the gluon operator was not taken into account in this reference. 16There also other penguin diagrams with Z and γ mediators.It turns out that the amplitude for these diagrams vanish, and therefore they are not shown in the figure .χ 0 χ 0 g g χ ± ( χ 0 ) χ 0 χ 0 g g χ ± ( χ 0 ) W ± (Z) W ± (Z) q ′ (q) q χ 0 χ 0 g g χ ± ( χ 0 ) W ± (Z) W ± (Z) q ′ (q) q (c) where w ≡ (m w /M χ ) 2 and z ≡ (m z /M χ ) 2 .Quark-Z boson vector and axial-vector couplings are c v q ≡ T (3) q − 2Q q s 2 w and c a q = T (3) q respectively.The terms proportional to g H in (107a) are generated by the penguin diagram 10a mediated by Higgs boson , and the rest of the terms in equations ( 107) are induced by the box diagrams 10b.
Scalar Higgs g h , box g box , twist-1 g t1 and twist-2 g t2 mass functions are defined as: 17 In the limit of large dark matter mass x → 0, the mass functions take the following values: Consequently, even if dark matter is considerably heavier than electroweak gauge bosons M χ ≪ m w , the quark Wilson coefficients except C t2 q are not suppressed to leading order O(α s ).
C.1.2 EWDM -Gluon Scattering
Vector fields cannot mediate the Penguin diagram of figure 5a, so at the level of renormalisable couplings, EWDM will only interact with gluons through two-loop processes.Figure 11, presents the Feynman diagrams that yield the effective interactions of EWDM with gluons.The double-triangle diagram 11a obviously only gives rise to long-distance contributions as there is no other scale than mass of quarks circulating in the quark triangle integral.The same holds for double-box diagrams 11c where only one quark propagator emits the two gluon fields.That is because, the external momentum which is predominantly of order of the weak gauge boson mass cannot contribute to the quark box loop more than what virtual quark masses do.In contrast, the reaction 11b where each quark propagator is attached with one 17 The Higgs mass function g h is slightly different from g H in [109].It originates from a sign difference in the expression for reduction of the vector integral to scalar correlation functions 2M 2 χ B (1,2) 1 (1,2) 0 , where we used minus sign for the last term.In the large DM mass limit Mχ ≫ mw, there is a discrepancy of a factor of a few, so the final results are not significantly affected.
In the high DM mass limit x, y → 0, the mass functions reduce to: with r ≡ x/y.Therefore, the Wilson coefficient for gluon operator will not be suppressed, in the limit of large dark matter mass.
C.2 Cross-section
The effective amplitude for EWDM -nucleon scattering is obtained form the S-matrix element of the scalar and twist interactions induced by L R in the non-relativistic limit: Notice the difference in the number of active flavours for scalar and twist-2 quark contributions as the associated Wilson coeffifcents C s q and C ti q are evaluated at different scales of µ had and µ uv respectively.The spin-independent scattering cross-section with nucleon can be obtained from the effective amplitude: As discussed in the previous sections, the Wilson coefficients generated by renormalisable couplings depend on EWDM mass through the mass functions.However, when dark matter is much heavier than the electroweak gauge mediators M χ ≪ m w , these effective coefficients become independent of M χ .As a result, the spinindependent scattering cross-section will not be sensitive to EWDM mass.
It is observed that the effective coefficients generated by twist-2 operators are positive whereas other coefficients are negative.Additionally, their amplitudes are comparable with each other.This leads to an accidental cancellation among different contributions which reduces the scattering cross-section by an order of magnitude.As a consequence, the SI cross-section remains below the current direct detection bounds [82].
The complex doublet model has the lowest dimension of representation and thus receives the smallest Wboson contribution due to the factor of [n 2 − (4y 2 + 1)]/8.In addition, C2 multiplet lacks those diagrams that include the light charged dark propagator χ + 1 in the loops.As a result, the effective coupling significantly reduces, and the spectrum becomes more dependent on DM mass, especially below TeV scale.The scattering cross-section for the pseudo-real model, therefore stays well below the neutrino floor.
Figure 3 :
Figure 3: Feynman diagrams which generate the effective coupling of electroweak dark matter with quarks at leading order.The EWDM-Higgs cubic and quartic vertices are represented by dot (•) and square ( ) respectively.
Figure 4 :
Figure4: The absolute value of quark mass function gq in dependence of the parameter h ≡ (mq/m h ) 2 .For h > 1/4, this function is not defined.The data points correspond to the values at quark mass scales, namely up (light blue), down (dark blue), strange (light green), charm (dark green) and bottom (red).
red.Negative-valued functions are shown in dash-dotted style.
Figure 8 :
Figure8: Current and projected direct detection constraints on four fermionic multiplets of electroweak dark matter.The spin-independent scattering cross-sections are plotted in blue for the real triplet, navy for the real quintet, green for the complex doublet, and black which corresponds to the complex quartet.The predicted curves for pseudo-real models are presented at two indicatory values of the coupling constant λ = 10 −6 GeV −1 (solid) and λ = 10 −4 GeV −1 (dash-dotted).The overlaid lines from top to bottom represent the experimental upper bounds from XENON1T (2018)[90], PandaX-4T (2021)[91] and LUX-ZEPLIN (LZ) (2022)[92].The red dash line indicates the projected sensitivity of DARWIN future experiment[93].Where the predicted cross-section curves enter the shaded areas, the corresponding mass values are excluded.The upper boundary of the yellow shaded area which is labelled Neutrino floor corresponds to discovery limit for dark matter[96].If scattering cross-section drops to this region, it would be unlikely to detect EWDM particle due to neutrino background.
Figure 9 :
Figure 9: (Upper panels) Summary charts for the real triplet (left 9a) and quintet (right 9b) showing the allowed values of DM mass as the only free parameter.The vertical bars are ruled out by experimental constraints from the inner Galaxy (green), dwarf satellites (blue), and gamma-ray line searches (red).The vertical dash-line indicates the thermal mass which is fixed by the freeze-out mechanism.The vertical axis does not report any physical variable.It is provided for the real cases to make the comparison with complex models possible.(Lower panels) The mass-coupling mχ − λ0 two-dimensional parameter space of the pseudo-real doublet (left 9c) and quartet (right 9d).Here, the top horizontal grey region is excluded by elastic direct DM searches.In addition, the horizontal area at the bottom is disfavoured by inelastic EWDM-nucleon interactions through Z-boson exchange.
Figure 11 :
Figure11: Two-loop diagrams that contribute to the interactions of dark matter with gluon via renormalisable couplings.In real models, only Z-boson can mediate the reactions; whereas in pseudo-real representations, both neutral Z and charged W ± gauge vectors are involved. | 18,390.4 | 2022-11-21T00:00:00.000 | [
"Physics"
] |
Semimonthly oscillation observed in the start time of equatorial Spread-F
Using data from airglow an all sky imager and a coherent backscatter radar deployed at São João do Cariri (7.4S, 36.5W) and São Luís (2.6S, 44.2W), respectively, the start time of equatorial Spread-F were studied. Data from a period of over 10 years was investigated from 2000 to 2010. The semimonthly oscillations were clearly revealed in the start time of plasma bubbles from Oi6300 airglow images during three periods (September 2003, September-October 2005, November 2005 and 5 January 2008). Since the airglow measurements are not continuous in time, more than one cycle of oscillation in the start time of plasma bubbles cannot be observed from these data. Thus, coherent backscatter radar data appeared as an alternative to investigate the start time of the ionospheric irregularities. Semimonthly oscillation were observed in the start time of plumes (November 2005) and bottom type Spread-F (November 2008) with at least one complete cycle. Technical/climate issues did not allowed to observe the semimonthly oscillations simultaneously by the two instruments, but from September to December 1
Introduction
Equatorial plasma bubbles (EPBs) appear in the bottom side of the F-region in the equatorial ionosphere when there is an unstable F-layer.It generally occurs after the pre-reversal enhancement (PRE), after sunset.The pre-reversal enhancement consists of a rapid up shift of the F layer before the motion of the plasma be downward reverted.The main mechanism used to explain the development of the EPBs is the Rayleigh-Taylor (RT) instability.According to the theory, the RT growth rate is inversely proportional to the collision frequency between the neutral and ionic particles and it is proportional to the plasma density gradient.Thus, when the PRE is strong, it becomes more probable for EPBs occur.
Airglow measurements of the OI 630.0 nm (OI6300) have been recorded at São João do Cariri (7.4 o S, 36.5 o W) since September 2000.In this investigation data from September 2000 to December 2010 were used, which corresponds to the first generation of the all sky imager deployed in this observatory.
The all sky imager is composed by a fish eye lens, a telecentric set of lens, a filter wheel, a set of lens to reconstruct the image, a Charge Coupled Device (CCD) chip and a cooling system.This instrument has a field of view of 180 o of the sky.
Further details of this imager have been published elsewhere (e.g., Paulino et al., 2016).Airglow images of the OI6300 were taken by about 15 days around the New Moon with integration time of 90 s.Depending on the mode of operation, images can have 2-4 min of temporal resolution.The start and end times can be extracted directly from the image header after observing the appearance or disappearance of the structures.The start time was defined as the time when the plasma bubbles appeared in the images.It generally occurs in the Northwest part of the images.After that, the plasma bubbles start their development and dynamics.Figure 1 shows an example of the determination of the start time of EPBs on 27 January 2001.The supplementary short movie can help one to identify the time, in which the plasma bubbles start to extend to the southern part of the images.The altitude resolution is 2.5 km and noise band width of 120 kHz.These characteristics allow to observe irregularities of 5 m in the ionosphere.Further technical details of the São Luís' radar can be found in de Paula and Hysell (2004) and Rodrigues et al. (2008).
Start time of Plume and BTSF were defined in Cueva et al. (2013).Those parameters correspond to the exact time of appearance of plumes and BTSFs in the range time integration (RTI) maps.The temporal resolution of the start time of the spread-F calculated in the RTI maps were 12 min.Simulations have shown that the 16d planetary waves (PWs) have large amplitudes in the winter hemisphere at the lower levels of the atmosphere and high latitudes, but above the mesosphere there is a penetration of this wave to the summer hemisphere, which allows it to be observed in both hemispheres including in the equatorial region (Miyoshi, 1999).Forbes and Leveroni (1992) have pointed out that 16d oscillation in the E and F-region could be connected by the upward propagation of Rossby wave from the winter stratosphere.Although the 16d PW have a well defined seasonality in the lower atmosphere, according to the simulations, in the upper atmosphere the presence of this oscillation has been predicted to be more spread along the year (Miyoshi, 1999).It is also important to mention that the 16d oscillations were observed in the mesosphere and lower thermosphere from 85 to 100 km altitude in the equatorial region in the zonal wind during the period around the September equinox and solstices of 1994 (Luo et al., 2002), which coincides to the periods of observation of the present results.
Results and Discussion
Lunar semidiurnal tides have been pointed out as important factor to the appearance and the start time of EPBs.The main reason for the influence of the Lunar tides in the EPB variability is the capability of the lunar tides propagate upward to high levels of the atmosphere and consequently it can affect the pre-reversal enhancement (PRE) amplitude and time (Stening and Fejer, 2001).Another factor to be considered is the moon phase (New Moon) that coincides to zero position of the oscillation for all observed cases, including the case studies observed from the coherent backscatter radar that will be shown ahead.The real mechanism that allows the lunar tides to act in the PRE is not well defined, but some works have pointed out as either the direct propagation to the bottom side of ionospheric F region (e.g., Evans, 1978;Forbes, 1982) or coupling of the E region dynamo to the F region (e.g., Immel et al., 2009;Eccles et al., 2011).
In order to corroborate the present results, data from the backscatter radar deployed in São Luís have been used to investigate the start time of the bottom type spread-F (BTSF) and plumes.The main goal of these analysis is trying to observe more than one cycle of the semimonthly oscillation in the start time of spread-F, since the radar was operating continuously and does not depend on tropospheric weather conditions.Figures 3 and 4 show that the spread-F structures can be controlled by the semimonthly oscillation.However, the strong day-to-day variability of the spread-F does not allow to observe this signature always.Another difficulty in the radar data analysis was the algorithm does not give an exact start time of the oscillation, i.e., there was a temporal resolution of 15 min in The present results indicate that one semimonthly dynamical structure can control either the start time or the amplitude of the PRE that can consequently produce Spread-F.These results must contribute to understanding the day-to-day variability of equatorial spread-F.However, the results shows that besides the semimonthly oscillations, other phenomena are important to the day-to-day variability occurrence of EPBs since this oscillations is not dominant in the whole period of observation.
Regarding to the agents that are causing this oscillation, further investigation are necessary and they are out of the scope of this work.Lunar semidiurnal tides, which have semimonthly period of oscillation have been pointed out as a likely agent to produce this kind of oscillation in the start time of Spread-F.Besides, we have discussed the importance of 16d PWs that must be further investigated before being neglected.
2008.The VHF coherent radar of São Luís operates at 30 MHz with a power peak of 4 kW.It has antenna half-power-fullbeam-width of 10 o and inter-pulse-period of 9.34 ms.The coverage in altitude of 87.5 to 1267 km and velocity of ± 250 ms −1 .
Figure 2
Figure 2(a) shows the evolution of start time of the EPBs observed in September 2003 over São João do Cariri.The solid linerepresents the best fit for a periodicity of 14.5 days, the stars correspond to the exact time in which the plasma bubble appeared in the OI6300 images and the filled circle shows the New Moon time.In this case, one can see a good agreements of the fit line with the observation during a half cycle of the oscillation.The amplitude of this oscillation was calculated from the fitting as ∼52 min, i.e., there was a difference of ∼52 min in the start time of EPBs along the observed nights.
Figure 2
Figure 2(b) shows the best fit 14.5 days oscillation in the start time of EPBs observed from the later September to early October 2005.For the whole period of airglow observation, it was the best case study observed because it covers a full circle of the oscillation.There was an amplitude of ∼37 min and the position of the New Moon was observed on 03 October 205.The predominance of this oscillation persists up to November 2005 as shown in Figure 2(c) with higher amplitude ∼70 min.Similar results to September 2003 and November 2005 were found in January 2008 as one can see in Figure 2(d), inclusive the position of the New Moon in the cycle.The estimated amplitude was ∼45 min.
Figure 2 .
Figure 2. Start time of plasma bubble (stars) as function of time.Solid line represents the best fit to a sinusoidal oscillation with period of 14.5 days.The respective amplitudes are shown on the middle top of the panels.Panel (a) shows the results for September 2003.Panel (b) shows the results for September -October 2005.Panel (c) shows the results for October-November 2005.Panel (d) shows the results for January 2008.Filled circles indicate the New Moons.
Figure 3
Figure3shows a complete cycle of 14.5 days in the start time of plume observed on November 2005, which coincide with the same period of observation of EPBs in the airglow images.For this case, an amplitude of ∼ 1 hour was observed.
Figure 3 .
Figure 3. Same as Figure 2, but for start time of plumes.Open circles indicate the Full Moon.
Figure 4
Figure 4 shows the start time evolution of the bottom type spread-F in November 2008, one can observe two complete cycles of the start time of the BTSF fitting a semi-month oscillation with an amplitude of ∼ 12 min.
Figure 4 .
Figure 4. Same as Figure 3, but for start time of bottom type spread-F.
-
. Discuss., https://doi.org/10.5194/angeo-2019-62Manuscript under review for journal Ann.Geophys.Discussion started: 2 May 2019 c Author(s) 2019.CC BY 4.0 License.Using almost one solar cycle of data from OI630 airglow images and range time integration maps from a backscatter radar in the equatorial region over Brazil, semimonthly oscillations in the start time of spread-F (EPBs, BTSF and Plumes) were observed and the results are summarized as follow: -Four periods of airglow observation showed amplitudes higher than 36 min in the start time of EPBs for 14.5 days oscillation, three periods of observations (September 2003, October 2005 and January 2008) revealed good fit for half cycle and the another case (September 2005) showed and complete cycle; -Two complete cycles of 14.5 days with amplitude of ∼ 12 min were observed in the bottom type spread-F in November 2008; Plumes observed in the RTI data showed a 14.5 cycle oscillation with amplitude of 1 hour in the start time of plumes during November 2005; Ann. Geophys.Discuss., https://doi.org/10.5194/angeo-2019-62Manuscript under review for journal Ann.Geophys.Discussion started: 2 May 2019 c Author(s) 2019.CC BY 4.0 License.Ann.Geophys.Discuss., https://doi.org/10.5194/angeo-2019-62Manuscript under review for journal Ann.Geophys.Discussion started: 2 May 2019 c Author(s) 2019.CC BY 4.0 License. | 2,648.8 | 2019-05-02T00:00:00.000 | [
"Physics",
"Environmental Science"
] |
RESOURCE SHARING NETWORKS : OVERVIEW AND AN OPEN PROBLEM By
This paper provides an overview of the resource sharing networks introduced by Massoulié and Roberts [20] to model the dynamic behavior of Internet flows. Striving to separate the model class from the applications that motivated its development, we assume no prior knowledge of communication networks. The paper also presents an open problem, along with simulation results, a formal analysis, and a selective literature review that provide context and motivation. The open problem is to devise a policy for dynamic resource allocation that achieves what we call hierarchical greedy ideal (HGI) performance in the heavy traffic limit. The existence of such a policy is suggested by formal analysis of an approximating Brownian control problem, assuming that there is “local traffic” on each processing resource.
1. Introduction.We consider the resource sharing networks, also called bandwidth sharing models, connection-level models, or flow-level models, that were introduced by Massoulié and Roberts [20] to study the dynamic behavior of Internet flows.This elegant model class may also prove useful in other application domains, and is of mathematical interest in its own right, so we shall avoid terminology that is specific to communication networks.In particular, we refer to the entities being processed in the model as jobs, rather than files or documents or flows, and speak of processing resources rather than links or servers.Verloop et al. [23] introduced the context-neutral term "resource sharing networks" to describe the Massoulié-Roberts model class, and we shall follow that usage.
Following [14], we assume that job sizes are exponentially distributed except where more general distributions are explicitly mentioned, and we assume that there is "local traffic" on each of the network's processing resources.We imagine a system manager who dynamically allocates resource capacities to jobs whose processing is not yet complete.The performance measure on which we focus is the steady-state expected total job count, or equivalently, the steady-state average delay experienced by arriving jobs, irrespective of their type.
Following a standard recipe, we formulate a Brownian control problem (BCP) that formally approximates the system manager's dynamic allocation problem under heavy traffic conditions.Given our local traffic assumption, there exists a control policy for the approximating BCP that has the following two properties: first, the backlog of work for each resource is minimal at each point in time, which is equivalent to saying that no resource's capacity is ever underutilized while there is work for that resource in the system; and second, the total number of jobs waiting at each point in time is the minimum number consistent with the vector of workloads for the various resources.This is called the hierarchical greedy (HG) policy for the approximating BCP, and its associated performance measure is referred to as HG performance.In the context of our original resource sharing network, we define an analogous performance target and call it hierarchical greedy ideal (HGI ) performance.Loosely phrased, the open problem referred to above is the following: to formulate a control policy for the resource sharing network that achieves HGI performance in the heavy traffic limit.
We study three small but representative examples in which all resources have identical load factors, treating the common load factor as a variable parameter.For each example, HGI performance is estimated via simulation, and compared against the performance of proportionally fair (PF) resource allocation, which is the most commonly cited and most thoroughly studied dynamic allocation scheme for resource sharing networks.For load factors ranging from 0.80 to 0.95, HGI performance represents a 20-40 percent improvement over PF performance in our examples.A dynamic allocation scheme called UFOS (mnemonic for utilization first, output second) approximates HGI performance quite well in our three examples, but a fourth example due to R. Srikant shows that UFOS is not viable in general as a means of approaching the hierarchical greedy ideal.
The remainder of the paper is organized as follows.Sections 2 through 4 identify the model class under study, introduce our three examples, and establish notation and terminology.Sections 5 and 6 are devoted to proportional fairness and its relationship to what we call baseline performance, and Section 7 develops some basic theory related to workload.Ideas related to HGI performance are developed in Sections 8 through 11, including a recapitulation of the approximating Brownian control problem in Section 9. Section 12 explains the UFOS algorithm by which HGI performance is approximated in our small examples.Sections 13 and 14 discuss the inadequacy of UFOS as a general method, and summarize our current state of knowledge.The paper concludes with two short technical appendices.
2. Resource sharing networks.In the general model to be considered, jobs of types 1, . . ., J arrive according to independent Poisson processes at rates λ 1 , . . ., λ J , and for concreteness we assume that the system under study is initially empty.Each type j arrival has a size that is drawn from a type-specific distribution with mean m j > 0 (j = 1, . . ., J), and the job sizes for the different types form J mutually independent sequences of independent and identically distributed random variables.In the usual way, let µ j = m −1 j (j = 1, . . ., J).Unless something is explicitly said to the contrary, we assume the job size distribution to be exponential for each type.
The processing system is composed of resources numbered 1, . . ., I, with associated capacities C 1 , . . ., C I (The significance of resource capacities will be explained shortly.)The processing of a job is accomplished by allocating a flow rate to it over time: a job departs from the system when the integral of its allocated flow rate equals its size.In general, the processing of a job consumes the capacity of several different resources simultaneously, as follows.
There is given a non-negative I × J matrix A = (A ij ), and each unit of flow allocated to type j jobs consumes the capacity of resources 1, . . ., I at rates A 1j , . . ., A Ij , respectively.Thus, denoting by x = (x 1 , . . ., x J ) the vector of total flow rates allocated to the various job types at a given time, Ax is the corresponding vector of capacity consumption rates for the various resources, and x must satisfy the capacity constraint Ax ≤ C = (C 1 , . . ., C I ).We use the terms "capacity allocation" and "flow rate allocation" interchangeably.The former term is standard in the literature, where the system manager's task is usually described as one of dynamic capacity allocation.
In the canonical application to Internet modeling, the job types correspond to files that require transmission over different routes, the resources correspond to transmission links, the job sizes are interpreted as file sizes, and the capacity consumption rate A ij is either 1 or 0, depending on whether or not link i is part of the route used by jobs of type j.Positive values of A ij other than 1 may also be meaningful, specifically in the reduced representation of systems with multi-path routing; see Section 5.5 of [14].
As in [14], we assume the following throughout this paper: for each i ∈ {1, . . ., I} there is a j ∈ {1, . . ., J} such that A ij > 0 and A kj = 0 for k = i.That is, for each resource i there is a type j whose processing consumes the capacity of resource i but not that of any other resource.In the context of communication networks, this is referred to as a "local traffic" assumption, for obvious reasons.Readers will see that the local traffic assumption plays a crucial role in our analysis.
We denote by n j (t) the number of type j jobs residing in the system at time t, calling n (t) = (n 1 (t) , . . ., n J (t)) the job count vector or state vector and calling {n (t) , t ≥ 0} the job count process.A control takes the form of a flow rate vector x = (x 1 , . . ., x J ) for each time t ≥ 0. Of course, controls must be suitably non-anticipating.For our purposes it will suffice to consider only stationary Markov control policies, meaning that the flow rate vector employed at each time t is a deterministic function of n(t).That is, we define an admissible control as a function A number of other terms, including "dynamic allocation scheme" and "resource sharing policy," will be used as synonyms for "control" later in the paper.
In the definition of a control that we have advanced, x j (n) is interpreted as the total flow rate allocated to type j jobs when the system is in state n, without regard to how that total flow is divided among the type j jobs residing in the system.That finer information is irrelevant for our purposes, because the performance measures on which we focus are expressed solely in terms of job counts, and we assume exponential (memoryless) job size distributions, so the division of the total flow rate among individual jobs of a given type does not affect the probabilistic evolution of the job count process {n (t) , t ≥ 0}.Also, because the flow rate vector chosen at each time t is only allowed to depend on the state vector n (t), we implicitly assume that the system manager either cannot observe job sizes or else is not allowed to base capacity allocations on that information.
At some points in this paper mention will be made of resource sharing networks with general (non-exponential) job size distributions, and in such models it is necessary to specify how the flow rate allocated to a job type is divided among individual jobs of that type.In all such cases we assume an equal sharing rule, which means that the type j flow rate is divided equally among all type j jobs residing in the system.This assumption is standard in the literature.(With one resource and one job type, it corresponds to the usual processor sharing discipline.) For future purposes let A ij λ j m j for i = 1, . . ., I.
The sum in the definition of ρ i is the expected amount of resource i capacity needed to process jobs of all types that arrive during one time unit, and we Three-Link Linear Network (3LLN).
divide that by the amount of resource i capacity that is available per time unit.Thus we call ρ i the load factor for resource i. (Such ratios are also called traffic intensity parameters in queuing theory.) In the basic network model that we have described here, the collection of resources required to process any given job type is fixed.In the language of communication networks, it is a model with single-path routing.In contrast, one may consider a more general model in which several different processing modes exist for a given job type, with each mode involving a different combination of resources.In communication networks, the term multi-path routing is used to describe that more general setup.Remarkably, any resource sharing model with multi-path routing is exactly equivalent, not just approximately equivalent or asymptotically equivalent, to another model with only single path routing; that equivalence is explained in Section 5.5 of [14].[20] call linear networks, involving local traffic on each of the I resources plus another job type that uses all of the resources; to state the obvious, I = 2 and I = 3 in Figures 1 and 2, respectively.Our third example (Figure 3) is a more complicated version of the second one, involving two additional job types that each require two of the three resources for their processing.Using Internet modeling language, we call the first two examples the two-link linear network (2LLN) and three-link linear network (3LLN), respectively; in parallel fashion, the final example is called the complex three-link network (C3LN).
In Figures 1 through 3 the Poisson arrival rates for the various job types are expressed as multiples of a variable parameter ρ, and as noted in the figures, we take C i = 1 for each resource i as well.Finally, in each example we assume that the job size distribution is exponential with mean 1 for each job type.Defining A as the obvious matrix of zeros and ones for each of the examples (A is 2 × 3 in the first example, 3 × 4 in the second one, and 3 × 6 in the third), readers can easily verify that (3.1) ρ i = ρ for all i = 1, . . ., I in all of our three examples.
4. Stability and system performance.Because we restrict attention to stationary Markov controls (see Section 2), the job count process {n (t) , t ≥ 0} evolves under any admissible control as a continuous-time Markov chain with stationary transition probabilities: transitions that increment n j by one occur at rate λ j (j = 1, . . ., J), transitions that decrement n j by one occur at rate µ j x j (n) (j = 1, . . ., J), and all other transition intensities are zero.An admissible control will be called stable if the associated Markov chain is positive recurrent.It is known that there exist stable controls if and only if (4.1) ρ i < 1 for all i = 1, . . ., I, and (4.1) will be referred to hereafter as the usual traffic condition.For a proof that (4.1) is necessary for existence of a stable control, see page 1060 of [18]; with regard to sufficiency, we shall describe in Section 7 a specific control policy (proportional fairness) that is known to be stable when (4.1) holds.Attention will be restricted hereafter to resource sharing networks that satisfy the usual traffic condition (4.1).Throughout most of this paper, we use the term heavy traffic to mean that ρ i is close to 1 for every resource i, but at the very end (Section 14), attention is directed to the broader scenario where ρ i is close to 1 for some, but not necessarily all, resources i.
Given a stable admissible control, let n(∞) be a random variable whose distribution is the stationary distribution of the associated job count process.The performance measure on which we focus is In words, E (T ot) is the steady-state expected total job count (under the specified control).By Little's law, the steady-state average delay for arriving jobs (that is, the average elapsed time in steady-state, between arrival of a job and completion of its processing) equals E (T ot) divided by the total arrival rate λ 1 + • • • + λ J .Thus minimizing the steady-state average delay is equivalent to minimizing E (T ot), and a given percentage reduction in E (T ot), by one control versus another, ensures the same percentage reduction in steady-state average delay.
5. Proportional fairness.Given a job count vector n, the vector x of proportionally fair (PF) flow rate allocations solves the following problem: where Φ(n) is defined by (2.1).PF allocations are non-extremal: every job type that is currently represented in the backlog of work gets some flow rate allocation, regardless of system status.The idea of proportional fairness originated in work by Kelly [16] and was raised to prominence by the influential paper [17].A more general concept of α-fair allocations was advanced and analyzed by Mo and Walrand [21], but discussion will be restricted here to the original notion of proportional fairness.Explaining the rationale for proportional fairness in digital communications is rather complicated, and we shall not attempt to do so.But a reader may reasonably ask in what sense does (5.1)lead to "fair" flow rate allocations, and there is no good answer to that question in the setting we consider, as indicated by the following quotations from [20]: "In this paper we argue that fairness should be of secondary concern and that the network should be designed rather to fulfill minimal quality of service requirements . . .[Q]uality of service is manifested essentially by the time it takes to complete the document transfer.(p.186). . .User perceived quality of service may be measured by the response time of a given document transfer. . .The fact that this [transfer was achieved] 'fairly' is largely irrelevant and, moreover, totally unverifiable by the user (pp.[189][190]."We shall take the view that "proportional fairness" is simply a name attached to a particular method for dynamic resource allocation.To evaluate this method, one must analyze the delay performance of PF and make comparisons against the delay performance of other allocation schemes.Assuming exponential job size distributions (as we do), [8] showed that PF allocations are stable whenever (4.1) holds.Of course, a stable control policy can still have undesirable delay characteristics, but results reviewed in the next section justify calling delay performance under PF at least "good." In the context of communication networks, an important virtue of proportional fairness is that it can be implemented, at least approximately, by means of a distributed algorithm, requiring only local information for purposes of local decision making; see [17] for elaboration.
6. Baseline performance.The salient feature of resource sharing networks, relative to conventional queueing networks, is simultaneous resource possession.Conceptually, arriving jobs of any given type are stored in a type-specific buffer upon arrival, and are processed by means of a capacity "pipeline" (generally involving several resources) that completes the processing in a single hop.
If we think of the network resources as communication links that are combined to transmit jobs over different routes, then a potential alternative arrangement, commonly called store-and-forward processing, is to transmit the entire job over the first link on its route, store the job in an intermediate buffer, then transmit the job over the second link on its route, store it in another intermediate buffer, and so on to completion.This is illustrated for our two-link linear network in Figure 4, assuming that resource 1 is traversed first in the routing of type 3 jobs.
In Figure 4 there are three "external" buffers where arriving jobs of different types are stored, plus one "internal" buffer where partially completed jobs of type 3 are stored.For our complex three-link network (C3LN), originally pictured in Figure 3, the analogous store-and-forward representation would include six "external" buffers for newly arriving jobs of different types, plus a total of four "internal buffers" for storing jobs of types 4 through 6 at intermediate stages of their routes.In our specification of a resource sharing network, no ordering is given for the resources involved in the processing of a given job type, so one could specify the "route" of any given type in several different ways, each of which gives rise to a different store-and-forward analog.However, the result cited below (Proposition 6.1) is invariant to the ordering.
Each of the buffers in a store-and-forward network is associated with a specific resource, namely, the resource from which the jobs stored in that buffer need processing next.Before we can analyze the delay performance of a store-and-forward network, a method must be specified for dividing the capacity of each resource among the jobs waiting in the resource's associated buffers.In communications modeling, the method widely viewed as "standard" is processor sharing (PS), in which the capacity of each resource is shared equally among all the individual jobs residing in the resource's associated buffers.Under PS, then, the capacity of a resource is divided among its associated buffers in proportion to the job counts in those buffers; to repeat an earlier point, it is irrelevant for our purposes how the capacity allocated to a given buffer is divided among the individual jobs in that buffer, because (a) the performance measures on which we focus are expressed solely in terms of job counts, and (b) we assume exponential job size distributions, so the division of the total flow rate among individual jobs of a given type does not affect the probabilistic evolution of the job count process.
With store-and-forward processing, our system has the structure of a multi-class queueing network, with each resource functioning as a singleserver station, and with one "customer class" defined for each buffer (that is, one customer class defined for each job type at each stage on its route), but with the following non-standard feature: the size of a job, which is drawn from a type-specific exponential distribution upon arrival to the network, remains the same at each stage of its processing, so the "service times" of an individual job at successive processing stages are perfectly correlated.To get a queueing network model of standard type, we shall treat the service times of a single job at successive processing stages as independent random variables, each of them having the appropriate type-specific exponential distribution.This is a version of what is called Kleinrock's independence assumption, or Kleinrock's independence approximation, in the literature of communication networks; see, for example, page 115 of [19].Section 4.1 of [15] contains a persuasive but non-rigorous argument that the independence approximation does not actually affect the product-form result cited below (that is, that the network's steady-state distribution is the same whether a job's successive service times are perfectly correlated or are independent).
Hereafter, when we refer to the SFPS analog of a resource sharing network, this means the store-and-forward system described at the beginning of this section, with processor sharing used to dynamically allocate the capacity of each resource, and with the added independence approximation explained in the previous paragraph.Such a multi-class queueing network is known to have a product form stationary distribution, because each of its single-server stations is what Kelly [15] called a "symmetric queue."To be specific, it follows from Theorems 3.7 and 3.8 of [15] that the steady-state distribution of the SFPS analog can be specified via the constructive procedure immediately below; the same result can be found in [1], Chapter 2 of [6], and Chapter 4 of [7].
For the construction, let us denote by N 1 , . . ., N I the numbers of individuals that belong to "populations" numbered 1, . . ., I and suppose that each individual is given a "label" that belongs to the set {1, . . ., J}. (These "individuals" correspond to jobs being served in our multi-class queueing network.One interprets N i as the total number of jobs occupying station i in steady state, and the label given to an individual is interpreted as the type of the corresponding job.)The probabilistic assumptions are as follows: (a) the population sizes N 1 , . . ., N I are independent random variables; (b) N i is geometrically distributed with mean ρ i /(1 − ρ i ) for i = 1, . . ., I; and (c) the probability that an individual belonging to population i is given label j equals A ij λ j m j / J k=1 A i λ k m k (i = 1, . . ., I and j = 1, . . ., J), independent of how other individuals are labeled.
Denoting by L j the total number of individuals that are given label j, we see that L j is the sum of I independent geometrically distributed random variables (j = 1, . . ., J), and that L 1 , . . ., L J are generally not independent.For the store-and-forward analog of a resource sharing network, let us denote by n (t) = (n 1 (t) , . . ., n J (t)) the job count vector at time t, each component of which involves a sum over buffers containing a given job type, and let n(∞) be a random vector having the associated stationary distribution.The first statement of the following proposition articulates the "product form" result referred to above, and the second statement follows from the fact that Proposition 6.1.For the SFPS analog of a resource sharing network, n(∞) is distributed as the vector L = (L 1 , . . ., L J ) constructed above, and hence (6.1) As noted earlier, processor sharing is widely viewed as a fair and reasonable mechanism for allocating resource capacities in store-and-forward networks, and there is no a priori reason to think that resource sharing (that is, simultaneous resource possession mandated by an absence of internal buffering) is either more or less favorable than a store-and-forward protocol with regard to delay performance, so we shall refer to (6.1) hereafter as baseline performance.Recent results by Kang et al. [14] and Shah et al. [22] show that, in heavy traffic, the delay performance of a resource sharing network with flow rate allocation via proportional fairness (see Section 5) is approximately baseline performance; the next paragraph explains their results in more detail.
For a resource sharing network of the kind considered here, with proportionally fair resource allocation, [14] proves that a properly scaled version of the job count process converges weakly in heavy traffic to a particular diffusion limit.That limit is a semi-martingale reflected Brownian motion (SRBM) with what are called skew-symmetric data, and so its stationary distribution has a product form.More specifically, the stationary distribution of the SRBM is the heavy traffic limit of the product form stationary distribution for the original network's SFPS analog (see Proposition 6.1).Kang et al. [14] do not deal directly with convergence of stationary distributions, but [22] shows that their heavy traffic limit can be interchanged with the t → ∞ limit.That is, if one considers a sequence of networks in heavy traffic, assuming that each of them satisfies the usual traffic condition (4.1), their normalized stationary distributions converge to the stationary distribution of the SRBM that is their heavy traffic process limit.
Table 1 presents simulation results which are consistent with that theoretical analysis, focusing exclusively on the bottom-line performance measure E(T ot).For each of the three examples introduced in Section 3, figures in the column labeled "PF simulation" were derived using Monte Carlo simulation and proportionally fair resource allocation, and figures in the "baseline" column were computed using formula (6.1).
Table 1 reports only the aggregate performance measure E(Tot) for our three examples, but the theoretical results cited above establish the following stronger conclusion: in heavy traffic, the entire stationary distribution of job counts for a resource sharing network with PF allocations is well approximated by the stationary job count distribution of the network's SFPS Comparisons for ρ = 0.90 analog.Table 2 presents more detailed simulation results for the 3LLN example (see Figure 2) which give credence to that view.In that table, simulation estimates are provided for both the mean and the standard deviation of the stationary job count distribution for each of the network's four job types, along with baseline values for each of those quantities; in the obvious way, the baseline values are computed from the product form stationary distribution of the SFPS analog (see Proposition 6.1).The simulation estimates agree quite well generally with the baseline values, and the agreement tends to improve as the system load factor increases.Kang et al. [14] showed that their heavy traffic diffusion limit under proportional fairness, which has a product form stationary distribution, remains the same when job size distributions are mixtures of exponentials.Based on that extension, one may plausibly conjecture that the product form approx-imation under proportional fairness, corresponding to what we have called baseline performance, remains valid with general (non-exponential) job size distributions.
Thus far we have spoken of "baseline performance" only as an approximation, or benchmark, but the analysis of Zachary [26] shows that baseline performance is exactly achievable in any resource sharing network that satisfies the usual traffic condition (4).Zachary's analysis, which generalizes earlier work by Bonald and Proutiére [4,5], shows that there exists a dynamic allocation scheme (see below) under which the stationary job count distribution of the resource sharing network coincides exactly with the product form distribution of the network's SFPS analog, and this remains true even with general (non-exponential) job size distributions.Thus, even if the system manager's objective is to minimize E [h (n(∞))] for some arbitrary cost function h(•), we know that the baseline value of that performance measure (that is, its value under the product form distribution defined by construction earlier in this section) is an upper bound on the minimum achievable value.
The dynamic allocation scheme referred to in the previous paragraph is determined as follows.In the final paragraph of his paper, Zachary [26] observes that a resource sharing network of the kind considered here is an example of his multi-class network model "with no internal transitions," and hence the partial balance equations appearing in his Theorem 2 reduce to the detailed balance equations numbered (16) in his paper.By specifying the equilibrium distribution π(•) in those equations to be the product-form distribution described earlier in this section, and specifying the arrival rates for jobs of types 1, . . ., n to be the constants λ 1 , . . .λ n , irrespective of the current state, one can simply solve for the departure rates of the various job types in various systems states, and those departure rates immediately determine the state-dependent flow rate allocations (or capacity allocations) for the various job types.Of course, it must be verified that those allocations satisfy the capacity constraints for all resources, and doing so is a straightforward task.
7. Nominal and actual workload processes.The main question that we wish to address in the remainder of this paper is the following: Can one improve significantly on baseline performance, assuming that E(Tot) is the performance measure of interest?To address that question, some further basic theory is needed.We first define the nominal workload for resource i at time t as follows: A ij n j (t)m j for i = 1, . . ., I.
In contrast, we denote by w i (t) the actual workload for resource i at time t, which means the total amount of capacity required from resource i to complete the processing of jobs residing in the system at time t.(It is perhaps worth noting that in [14] the name "workload" is used for what we call nominal workload.)Seeking to express the verbal definition of actual workload in terms of model primitives (that is, in terms of arrival processes, job size random variables, resource consumption rates and resource capacities), we proceed as follows.First, for each resource i and each t ≥ 0, let ℓ i (t) denote the total amount of resource i capacity required to process all jobs that arrive over [0, t].(The letter ℓ is mnemonic for load.)This quantity can be expressed in terms of arrival processes and job size random variables for the various job types j, plus the resource consumption rates A ij .Then one has (7.2) We call {ξ(t), t ≥ 0} the netflow process for resource i, and interpret u i (t) as the total amount of resource i capacity that goes unused over [0, t].Note that ξ i (t) is a random variable defined directly in terms of model primitives, without reference to the control chosen by the system manager, whereas u i (t) is dependent on the chosen control.Because {w i (t), t ≥ 0} is by definition a non-negative process, one has u i (t) ≥ − min{ξ i (s), 0 ≤ s ≤ t}, which implies (7.4) , where (7.5) We call {w * i (t) , t ≥ 0} the minimum workload process for resource i, observing that (7.4) holds with equality if and only if, over the entire time interval [0,t], resource i is able to work at full capacity whenever there is work for it to do in the system.
(The time-reversal argument requires only that the process ξ(t) = (ξ 1 (t), . . ., ξ I (t)), t ≥ 0, have stationary independent increments and ξ(0) = 0.) It follows that the random vectors w * (t) increase stochastically to a finite limit w * (∞) as t ↑ ∞.Also, under any stable admissible control, the workload vector w (t) = (w 1 (t) , . . ., w I (t)) converges in distribution as t ↑ ∞ to a finite limit (see Appendix A) that we shall denote w(∞), and by (7.4) we can define w(∞) and w * (∞) on a common probability space in such a way that Given a stable admissible control, let w (t) be the associated I-dimensional actual workload process, and let ŵ (t) be the associated I-dimensional nominal workload process.We can rewrite (7.1) in vector-matrix form as Also, from the memoryless property of the exponential distribution, we have that ŵi (t) = E [w i (t) |n (t)] for all i and t, which can be expressed in vector form as (7.9) ŵ (t) = E [w (t) |n (t)] , t ≥ 0.
Minimum possible cost rate given nominal workload.
Hereafter we denote by e the J-vector of ones, and by R k + the non-negative orthant of k-dimensional Euclidean space.Formula (8.1) below defines a function f : R I + → R J + for which an interpretation will be provided shortly.
Proposition 8.1.For each w ∈ R I + the set {z ∈ R J + : AM z = w} is non-empty, and so it is meaningful to define Moreover, there exists a continuous function g : R I + → R J + such that Proof.Restated in different terms, our local traffic assumption says that there exist I columns of the capacity consumption matrix A that constitute an I × I diagonal sub-matrix with strictly positive diagonal elements.The same is then true for AM, from which the first statement of the proposition is immediate.The second statement of the proposition says that the optimal solution of a certain linear program can be chosen as a continuous function of the right-hand side values.It follows from the Basis Decomposition Theorem in Section 3 of [25], as those authors explain immediately before the theorem statement.
For an interpretation of f (w), it is useful to think of e • n as the cost rate associated with a job count vector n, so our performance measure E(T ot) represents the steady-state average cost rate under a given policy.Because n (t) ≥ 0 for any t ≥ 0, the definitions (7.8) and (8.1) imply That is, no job count vector which yields a given nominal workload vector w can achieve a cost rate smaller than f (w).Moreover, if one ignores the distinction between integer and non-integer job counts (which is a minor distinction in heavy traffic), then the function value f (w) in (8.1) can be described as the minimum possible cost rate given that the nominal workload vector is w.(This interpretation ignores the fact that not all w ∈ R I + can occur as nominal workload vectors.)Perhaps surprisingly, f (•) is not necessarily monotone in all of its arguments; see below for elaboration.
It is instructive to consider the form of the function f (•) for the first two examples specified in Section 3.For our two-link linear network (Figure 1) one has and M is the 3 × 3 identity matrix, from which one obtains the following: for any workload vector w = (w 1 , w 2 ), the minimizing choice of z in (8.1) is z = (w 1 − w 2 ) + , (w 2 − w 1 ) + , w 1 ∧ w 2 , which gives f (w) = w 1 ∨ w 2 .That is, to minimize the cost rate z 1 + z 2 + z 3 given w, one takes z 3 (the number of waiting jobs that require both resources for their processing) as large as possible.In this case we see that f (•) is monotone increasing in both arguments.For our three-link linear network (Figure 2), one has and M is the 4 × 4 identity matrix, from which one obtains In this case f (•) is not monotone, as shown by the following: for w = (2, 2, 1) the minimizing choice of z in (8.1) is (1,1,0,1), and for w = (2, 2, 2) it is (0,0,0,2), implying that f (2, 2, 1) = 3 but f (2, 2, 2) = 2.That is, a lower cost rate (lower total job count) can be achieved with a higher workload for resource 3, because then it becomes possible to hold all work in the form of jobs that need processing by all three resources.9. Approximating Brownian system model.In Section 1 it was said that we follow a "standard recipe" in formulating a Brownian approximation of the system manager's dynamic allocation problem.To be more specific, we adopt the framework developed in Section 5 of Harrison [11], referred to hereafter as H2000.The general resource sharing network described in Section 2 of this paper is a straightforward example of what is called a "stochastic processing network" in H2000: in a resource sharing network we have I processing resources, J different "materials" being processed (namely, jobs of the J different types), and J "processing activities" (namely, the processing of the J different job types).At any given time t, it is the flow rate allocated to type j jobs that constitutes the "activity level" for activity j.In our context, the input-output matrix R identified in H2000 is the J × J diagonal matrix M −1 , because the stock of material j on hand (that is, the number of type j jobs residing in the system) is decreased at an average rate of µ j x j when activity j is conducted at level x j .Finally, the capacity of resource i is consumed at rate A ij x j when activity j is conducted at level x j .
Conditions (9.1) and (9.2) can be expressed verbally as follows: the nominal arrival rates are close to the actual arrival rates, and taken in aggregate, they load each resource to exactly its capacity.(Given our local traffic assumption, the H2000 notion of "heavy traffic" holds if and only if all of the actual load factors ρ 1 , . . ., ρ J are close to 1.) The nominal activity levels defined in H2000 are simply x * j = λ * j /µ j for j = 1, . . ., J and then we re-express the system manager's chosen control in the following form, using the simplified notation x(t) = (x 1 (t), . . ., x J (t)) to denote the vector of flow rate allocations chosen at time t: x j (s)ds for j = 1, . . ., J and t ≥ 0.
Elements of the vector y(t) re-express the system manager's allocations to the various job types as cumulative decrements from the nominal allocations.Now the small parameter ε is used as a scaling constant in the following definitions: (9.4) for t ≥ 0. That is, we define Z, Y and U as diffusion-scaled versions of the job count process n, the chosen control y, and the unused capacity process u, respectively.It follows from (9.3), (9.4), the definition of u(•) in ( 7.3), and the defining characteristics of the nominal arrival rates λ * j that (9.5) The key relationship for the approximate system model developed in Section 5 of H2000 is where X is a J-dimensional Brownian motion having drift vector θ = (θ 1 , . . ., θ J ) and a particular covariance matrix Σ that need not concern us here.We also have the obvious requirements that (9.7) U (•) is non-decreasing with U (0) = 0, and (9.8) Z(t) ≥ 0 for all t ≥ 0.
In the approximating Brownian system model, the Brownian motion X is taken as primitive, and the system manager must choose a control Y that is non-anticipating with respect X, subject to the constraints (9.7) and (9.8),where U (•) and Z(•) are defined by (9.5) and (9.6), respectively.Up to now nothing has been said about the system manager's objective, but let us suppose it is to (9.9) minimize That is, we restrict attention to control policies under which Z has a steadystate distribution, denoting by Z(∞) a random variable which has that distribution, and seek to minimize the analog of E(T ot) for the Brownian model.The addition of this objective to the system equations (9.5)- (9.8) gives us a Brownian control problem (BCP) that approximates the dynamic control problem discussed earlier for the original resource sharing network.
10. Hierarchical greedy control in the Brownian model.The Brownian approximation of a stochastic processing network is invariably simpler than the original, "exact" model that it replaces, and one manifestation of that simplicity is the existence of an equivalent workload formulation of the approximating BCP, the general theory of which was developed by Harrison and Van Mieghem [13].Rather than recapitulate all of that theory, a few simple observations will suffice for our purposes here.First, let us define (10.1) Recall that Z is interpreted as a diffusion-scaled version of the job count process in our resource sharing network.Thus, comparing (7.8) and ( 10.1), one is led to interpret W as a diffusion-scaled version of the nominal workload process ŵ that was defined in Section 8.However, given our assumption of exponential job size distributions, the diffusion-scaled difference between nominal and actual workload processes vanishes in the heavy traffic limit; see Appendix A for a sketch of the standard argument supporting that conclusion.Expressing that state of affairs more loosely, one may say that the distinction between nominal and actual workloads is negligible in heavy traffic, so W will be called simply the workload process (without any modifier) for our approximating Brownian system model.Multiplying both sides of (9.6) by AM , then substituting (9.5) and (10.1), we have the key relationship Now (10.1) implies that W (•) ≥ 0, so for each i = 1, . . ., I and each t ≥ 0, the smallest possible value for U i (t) is which corresponds to the minimum workload process (10.4) Defining W * (t) = (W * 1 (t), . . ., W * I (t)) in the obvious way, we can invoke Proposition 8.1 to define a continuous process (10.5) Z * (t) = g(W * (t)), t ≥ 0, which gives us Z * (•) ≥ 0 and Inverting the fundamental system equation (9.6), the corresponding control is Equations ( 10.3), (10.4), (10.5) and (10.6) together define an admissible control Y * for the approximating BCP that has two distinguishing features.First, it minimizes cumulative unused capacity of all resources simultaneously at all points in time, thus achieving the minimum workload process W * .The proof of Proposition 8.1 shows that our local traffic assumption is essential for existence of such a control: it ensures that any non-negative workload vector w satisfies w = AM z for some non-negative job count vector z; expressing the same thing in different words, it ensures that the potential state space for the workload process W in the Brownian system model is the entire orthant R I + .The second distinguishing feature of the control Y * is the following: at every time t it achieves the lowest cost rate e•Z(t) that is possible given the constraints imposed by maximum resource utilization.We call Y * the hierarchical greedy (HG) control, or hierarchical greedy policy, for our approximating BCP, because it first focuses myopically (greedily) on maximizing resource utilization, and then, given the constraints imposed by that dominant concern, configures the backlog of work so that the associated cost rate is minimized.That is, the control Y * represents or embodies a hierarchical strategy in which resource utilization is primary and job count is secondary.The associated steady-state performance measure will be referred to hereafter as HG performance for our approximating BCP.
It should be emphasized that HG performance is not necessarily optimal performance in the BCP, because the minimum-possible-cost-rate mapping f (•) is not necessarily monotone (see Section 8).That is, greedily maximizing resource utilization may actually be inconsistent with minimizing steadystate total job count, but one feels intuitively that reducing workload will tend to have a favorable effect on job count as well.
The maximum-utilization aspect of the HG control is pictured in the right panel of Figure 5 for two-dimensional (that is, two-resource) systems: under HG control the workload process W is reflected only at the boundary of the quadrant, which is interpreted to mean that capacity of each resource is fully utilized as long as there is any work in the system for that resource.The left panel of Figure 5 reproduces a figure from [14], showing the workload state space for the baseline diffusion process which those authors obtain as a heavy traffic limit under proportionally fair resource allocation; reflection occurs along two rays that lie strictly inside the non-negative quadrant, which corresponds to the occurrence of unused capacity under circumstances where it is avoidable.
11. Hierarchical greedy ideal (HGI) performance for the original model.In the foregoing discussion of Brownian approximations, we identified a hierarchical greedy control Y * with associated performance measure , where W * is the minimum workload process defined by (10.3) and (10.4).For our original resource sharing network, we define an analogous hierarchical greedy ideal (HGI) performance goal as follows: where w * is the minimum actual workload process defined via (7.5).Given the interpretation of f (•) that was provided in Section 8, it might seem more natural to define HGI performance in terms of nominal workload, rather than actual workload, but (a) as noted in Section 10, the distinction between nominal and actual workload is negligible in the heavy traffic parameter regime on which we focus, and (b) minimum actual workload is a well defined process in our original model setting, but minimum nominal workload is not.In broad terms, the obvious way for a system manager to pursue (11.1) as a performance goal is to use a hierarchical greedy approach akin to what was described in Section 10: first focus myopically (greedily) on workload minimization, or equivalently, on maximizing resource utilization (minimizing unused capacity); and then, given the constraints imposed by that dominant concern, strive to configure the backlog of work so that the associated cost rate is minimized.Our use of the word "ideal" in reference to (11.1) emphasizes the point that neither full resource utilization, nor cost rate minimization given workload, can be achieved exactly in a resource sharing network.On the other hand, the right-had side of (11.1) is not necessarily a lower bound on achievable performance, because of the non-monotonicity of f (•) that was demonstrated in Section 8.That is, a control policy that achieves HGI performance is not necessarily optimal, although one feels intuitively that (11.1) represents a high standard of performance.
To substantiate that view, Table 3 compares HGI and baseline values of E(T ot) for our three examples, using three different load factors ρ: the E(T ot) value in the column labeled "Baseline" is computed via formula (6.1), and the E(T ot) value in the column labeled "HGI" is a simulation estimate of E [f (w * (∞))], in accordance with (11.1); see Appendix B for an explanation of the simulation logic.In these examples, HGI performance represents a 25-45% improvement over baseline performance, with greater percentage gains occurring at higher load factors.Also, the greatest percentage gains occur in the most complex of the three examples.
The analysis presented in Section 10 of this paper, when combined with arguments made in H2000 and earlier work referenced there, lead us to conjecture the existence of a control policy that achieves HGI performance in the heavy traffic limit; see Section 14 for elaboration.However, because the Brownian system model provides such a highly compressed representation of the original resource sharing network, there is no obvious general way of translating achievable behavior in the Brownian model into an implementable control policy for the original system.That conundrum has been noted and discussed by various authors, starting with [9].No general resolution has been found thus far, although [10] describes an approach using discrete-review policies that has broad potential applicability, and progress has been made for certain special network structures, such as the parallel server models studied by [2,3].
12. Striving for HGI performance via UFOS.For the two-link linear network (2LLN) portrayed in Figure 1, it is possible to achieve the minimum workload vector w * (t) exactly, at every time t ≥ 0 with probability 1, as follows.(a) Use one of the allocation vectors x = (1, 1, 0) or x = (0, 0, 1) whenever possible; the former choice keeps both resources fully utilized by processing type 1 jobs and type 2 jobs simultaneously, while the latter choice keeps both resources fully utilized by processing (only) type 3 jobs.(b) If only type 1 jobs are available for processing, choose x = (1, 0, 0), and if only type 2 is available, choose x = (0, 1, 0).These rules are sufficient to achieve minimum workload (that is, they ensure that each resource will be fully utilized whenever there is work for it in the system), but they leave open the question of what to do when all three job types are present.In that case, the obvious choice for a system manager who wants to minimize E(T ot) is x = (1, 1, 0) rather than x = (0, 0, 1), because the former decreases the total job count n 1 + n 2 + n 3 at an expected rate of µ 1 + µ 2 = 2, whereas the latter decreases n 1 + n 2 + n 3 at expected rate µ 3 = 1.That is, when the primary criterion of maximizing resource utilization does not fully specify the control action, the remaining freedom should be used to decrease the instantaneous "cost rate" n 1 + n 2 + n 3 as rapidly as possible on an expected value basis.This scheme will be referred to by the acronym UFOS, which is mnemonic for utilization first, output second.
The UFOS allocation scheme is the obvious way to strive for HGI performance in the 2LLN In fact, [24] shows the following: if µ 1 , µ 2 ≤ µ 3 and µ 3 ≤ µ 1 + µ 2 then UFOS stochastically minimizes n 1 (t) + n 2 (t) + n 3 (t) for each t ≥ 0 in a two-link linear network, and hence it minimizes E(T ot) as well The parameter values that we have assumed (µ 1 = µ 2 = µ 3 = 1) satisfy those inequality constraints, so UFOS is exactly optimal for our 2LLN.
Moving now to the three-link linear network (3LLN) portrayed in Figure 2, the following analogous UFOS scheme immediately suggests itself: if all of job types 1, 2 and 3 are available for processing, choose the allocation vector x = (1, 1, 1, 0); if one or more of those types is not available, but type 4 is available, choose x = (0, 0, 0, 1); and if neither of those choices is available, allocate the capacity of each resource to local traffic in the obvious way.This UFOS scheme achieves the minimum workload vector w * (t) at every time t, and it maximizes the instantaneous output rate when doing so does not jeopardize full resource utilization, but it is not necessarily optimal, because with three links there exist system states where maximizing resource utilization requires some sacrifice in terms of the total output rate, and vice versa.We conjecture that, for both the 2LLN and 3LLN, the UFOS allocation scheme described above will approach HGI performance asymptotically in the heavy traffic limit, by which we mean that the percentage difference between HGI performance and E(T ot) under UFOS will vanish as ρ ↑ 1.That conjecture is reasonably well supported by the simulation results reported in Table 4, where the percentage differences fall between 2% and 4% for all three values of ρ considered.
For the complex three-link network (C3LN) portrayed in Figure 3, the simulation results reported in Table 4 were derived using the following definition of UFOS: first, for any given state vector n, identify the set of allocation vectors x = (x j ) that (12.1) maximize where Φ(n) is defined by (2.1); and second, among the allocation vectors x that achieve the maximum in (12.1), choose one to (12.2) maximize J j=1 µ j x j .The objective function in our first-level optimization (12.1) is simply the sum of the utilization rates for the various resources, and the equal weighting used in that objective function is arbitrary: any weighted sum of the utilization rates having all weights strictly positive would be consistent with our earlier specification of UFOS for the 2LLN and 3LLN.
A striking feature of Table 4 is that, for the C3LN with load factor ρ = 0.80, the simulation estimate of E(T ot) under UFOS is actually 8% below HGI performance, which underscores the point that HGI performance is not necessarily a bound on optimal performance (see Section 11).At higher load factors, the ordering of UFOS performance and HGI performance is reversed, and the gap between them is substantial.There is no compelling reason to believe that the percentage gap between HGI and UFOS will vanish as ρ ↑ 1, but we have no better scheme to recommend as a means of approaching HGI performance in the C3LN.
13. Recap and a negative example.Table 5 combines results presented earlier (specifically, in Tables 1, 3 and 4), in order to underscore the following points about our three examples.First, the baseline formula (6.1) closely approximates simulated performance under proportional fairness (PF).Second, the hierarchical greedy ideal (HGI) formula (11.1) represents a 25-45% improvement relative to baseline.And third, the UFOS allocation scheme defined by (12.1) and (12.2) gives E(T ot) values reasonably close to HGI performance for these examples.
In particular, as shown in the right-most column of Table 5, E(T ot) values under UFOS are 15-35% lower than those under proportional fairness, with greater relative improvements achieved at higher load factors.These are moderate but significant performance gains.We conjecture that similar gains would be achievable, relative to the performance of proportional fair- ness and other frequently cited allocation schemes, if a congestion measure different from E(T ot) were used.For example, if the objective were to minimize a quadratic function of the steady-state job counts, rather than the linear function embodied in E(T ot), the second-stage logic of UFOS could be altered to drive down that quadratic cost as quickly as possible on an expected value basis.Unfortunately, the UFOS allocation scheme (as we have defined it) is not generally effective.This is illustrated by the example portrayed in Figure 6, which was suggested by R. Srikant.(This example does not satisfy our local traffic assumption, but one can add to it a stream of local traffic for each resource, with each such stream having mean job size 1 and an average arrival rate of, say, ρ/100.The local traffic will then constitute an insignificant fraction of the load on any given resource, and everything said here will remain essentially the same.)In Figure 6 there are three job types, and the mean job size is assumed to be 1 for each of them.As shown on the figure, each of the six resources has capacity 1.Average arrival rates are as shown on the figure, so resources 1 through 4 all have a load factor of 0.5, while resources 5 and 6 each have a load factor of 0.8.
Job types 1 and 2 both utilize three resources, whereas type 3 utilizes only two resources.Thus the first-stage UFOS optimization (12.1) effectively gives priority to types 1 and 2: if there are any jobs of either type 1 or type 2 present in the system, a flow rate of 1 will be allocated to them, while type 3 is given a zero allocation.Because of their priority status, jobs of types 1 and 2 both enter what is effectively an M/M/1 queue with load factor 0.5, and those two M/M/1 queues operate independently of one another; the steady-state probability that either n 1 or n 2 individually equals zero is 0.5, and the steady-state probability that n 1 = n 2 = 0 is 0.25.Type 3 jobs receive a flow rate allocation of 1 when n 1 = n 2 = 0, and an allocation of zero otherwise, which is inadequate to meet the type 3 input rate of 0.3.Thus the system portrayed in Figure 6 is actually unstable under UFOS as we have defined it, even though no resource has a load factor larger than 0.8.
The problem, of course, is that UFOS is "distracted" by the lightly loaded resources numbered 1 through 4, giving just as much weight to keeping them busy as to keeping busy the critical resources numbered 5 and 6.One approach to fixing that problem is to use a weighted sum of utilization rates as the objective function in (12.1), with larger weights for more heavily loaded resources, or to adjust the weights dynamically depending on the current workloads of different resources.While that idea is attractive in principle, we have not been able to devise a provably effective implementation thus far.
14. Open problem.In conclusion, it may be helpful to state more precisely the "open problem" referred to in the title of this paper.Consider a general resource sharing network of the kind described in Section 2, with ρ i < 1 for each resource i, viewing the arrival rates λ 1 , . . ., λ J as variable parameters.Initially, consider a sequence of values for the arrival rate vector such that ρ i ↑ 1 for each resource i, and moreover, (1 − ρ i )/(1 − ρ k ) converges to a strictly positive constant for every pair of resources i and k (that is, the load factors for different resources converge to 1 at the same rate).This type of formulation, in which all resources are equally "critical," is more or less standard in heavy traffic theory.Using the standard approach to asymptotic optimality, one would then state the problem as follows: develop a corresponding sequence of dynamic allocation schemes such that the percentage difference between the E(T ot) values they achieve and HGI performance vanishes.Presumably that would be accomplished by constructing controls whose associated job count processes, properly scaled, converge weakly to the process Z * defined in Section 7.
A more stringent version of the problem would require that the dynamic allocation logic not depend on arrival rates; in the literature of communication networks, this is commonly cited as a desirable characteristic, because arrival rates may vary through time and one wants a control scheme that remains effective in the face of such changes.With that constraint, the problem is effectively to find a single dynamic allocation scheme which, when applied with the specified sequence of arrival rate vectors, causes the percentage difference between achieved E(T ot) values and HGI performance to vanish.Of course, one wants a dynamic allocation scheme which has that property for any sequence of arrival rate vectors that take the system into heavy traffic.
Finally, it is of interest to consider the broader heavy traffic regime where ρ i ↑ 1 for some resources i but not necessarily for all of them, or where the load factors for different resources approach 1 at different rates.Again the problem is to formulate a control policy such that the percentage difference between HGI performance and the policy's achieved E(T ot) value vanishes in the limit.With this broadened view of "heavy traffic," there arises the distinction between "critical" and "sub-critical" resources: the steady-state minimum workloads for sub-critical resources are eventually insignificant compared to the steady-state minimum workloads for critical ones, and so sub-critical resources are eventually irrelevant for computing HGI performance values.Still, the preceding discussion of Srikant's example (Figure 6) shows that the potential presence of sub-critical resources substantially complicates the task of designing general control logic.
APPENDIX A: MORE ON WORKLOAD PROCESSES
We shall consider in this appendix an ordinary M/M/1 queueing system, which can be viewed as a special case of the resource sharing network described in Section 2. To be specific, it is the special case where I = J = 1 and the capacity consumption matrix A consists of a single 1.Two results will be developed in that simplified setting, and then their obvious analogs for general networks will be stated.
Consider an M/M/1 system with service rate µ = 1, arrival rate λ < 1, and initial state n (0) = 0 We view λ as a variable parameter, define ε = √ 1 − λ, and eventually consider the heavy traffic limit where ε ↓ 0. Assuming that server capacity is allocated to jobs in a work conserving manner that does not depend on the jobs' service times (for example, it could be FIFO, LIFO or processor sharing) the memoryless property of exponential service times gives us the following: (A.1) w(t) ∼ S (n(t)) for each fixed t > 0, where {n(t), t ≥ 0} is the job count process as in Section 2, {w(t), t ≥ 0} is the actual workload process as in Section 7, "∼" denotes equivalence in distribution, S k = η 1 + • • • + η k for k = 1, 2, . . ., and η 1 , . . ., η k are independent, exponentially distributed random variables with mean 1, also independent of n (t).Because the mean service time (mean job size) is 1 by assumption, the nominal workload process { ŵ(t), t ≥ 0} defined in Section 7 is simply (A.2) ŵ(t) = n(t), t ≥ 0.
Using the symbol "⇒" to denote convergence in distribution, we have n(t) ⇒ n(∞) as t → ∞, where n(∞) has a specific distribution that need not concern us here.From that and (A.1) it follows (using the continuity APPENDIX B: ESTIMATING HGI PERFORMANCE VIA MONTE CARLO SIMULATION Estimates of the HGI performance goal E[f (w * (∞))] were obtained, for each of our three network examples and for each of the ρ values considered in Section 11, using a two-step procedure.The first step generates a sample path of the minimum workload process w * defined in Section 7, as follows.Poisson arrivals and exponentially distributed job sizes are generated for each job type j, from which we construct sample paths of the workload input processes {ℓ i (t), t ≥ 0} that were defined verbally in Section 7: the sample path of ℓ i (•) for a given resource i starts at ℓ i (0) = 0, is constant between arrival epochs, and jumps upward by an amount A ij S when a type j job arrives and that job has size S. Next, the sample path of the minimum workload process {w * i (t), t ≥ 0} is constructed independently for each resource i, exactly as one constructs the content process of a dam with cumulative input process ℓ i (•) and constant outflow rate C i .That is, the sample path of w * i (•) starts at w * i (0) = 0, has upward jumps identical to those of ℓ i (•), slopes downward at rate C i until w * i (•) = 0 again, and then remains at zero until the next jump of ℓ i (•) occurs.A regenerative cycle is completed at the first time τ , following the first arrival of a job of any type, when we once again have w * i (τ ) = 0 for all i = 1, . . ., I. The second step in our estimation procedure is to calculate, given a piecewise linear sample path {w * (t), 0 ≤ t ≤ τ } of the vector process w * over a regenerative cycle, the integral Thus, exact computation of the integral in (B.2) requires only that f (•) be evaluated for finitely many values of its argument, which is easily done: explicit formulas for f (•) were given in Section 7 for our 2LLN and 3LLN, and a similar, more complicated formula was developed for the C3LN.We generated a large number N of regenerative cycles, recording their durations τ 1 , . . ., τ N and the corresponding values F 1 , . . ., F N for the integral in (B.2).Using the regenerative method, we then estimated the HGI performance goal as
3 .
Three examples.Figures 1 through 3 portray three examples of resource sharing networks that will be discussed later in this paper, each of which satisfies the local traffic assumption enunciated in Section 2. The first two examples are what Massoulié and Roberts
(fT k− 1 f
(w * (t))dt.Let 0 = T 0 < • • • < T K = τ be a sequence of times such that all components of w * (•) are linear over each sub-interval [T k−1 , T k ), k = 1, . . ., K.That is, each break point T k is either the arrival time of some job or else a time at which some component of w * hits zero from above.In all three of our examples we have C i = 1 for each resource i, which implies the following: over each of the subintervals [T k−1 , T k ), each component w * i (•) of the minimum workload process is either identically zero or else linear with slope −1.It follows from that special structure and the definition off (•) that f (w * (t)) is itself linear in t over each of the subintervals [T k−1 , T k ), implying that (B.2) T k (w * (t)) dt = 1 2 [f (w * (T k−1 +)) + f (w * (T k −))] (T k − T k−1 ).
Table 1 E
(T ot) comparison: Baseline versus proportional fairness for our three examples
Table 2
Detailed comparison: Baseline versus proportional fairness for the 3LLN
Table 3 E
(T ot) comparison: Baseline versus HGI for our three examples
Table 4 E
(T ot) comparison: HGI versus UFOS for our three examples
Table 5
Summary of E(T ot) comparisons for our three examples | 15,426.6 | 2014-10-06T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
HD 12098: a highly distorted dipole mode in an obliquely pulsating roAp star
HD 12098 is an roAp star pulsating in the most distorted dipole mode yet observed in this class of star. Using TESS Sector 58 observations we show that there are photometric spots at both the magnetic poles of this star. It pulsates obliquely primarily in a strongly distorted dipole mode with a period of $P_{\rm puls} = 7.85$ min ($\nu_{\rm puls} = 183.34905$ d$^{-1}$; 2.12210 mHz) that gives rise to an unusual quadruplet in the amplitude spectrum. Our magnetic pulsation model cannot account for the strong distortion of the pulsation in one hemisphere, although it is successful in the other hemisphere. There are high-overtone p~modes with frequencies separated by more than the large separation, a challenging problem in mode selection. The mode frequencies observed in the TESS data are in the same frequency range as those previously observed in ground-based Johnson $B$ data, but are not for the same modes. Hence the star has either changed modes, or observations at different atmospheric depth detect different modes. There is also a low-overtone p mode and possibly g modes that are not expected theoretically with the $>1$ kG magnetic field observed in this star.
INTRODUCTION
Magnetic peculiar A (Ap) stars are found in the mainsequence band from spectral types early-B to mid-F.They have global magnetic fields that are roughly dipolar with strengths up 34 kG and with a magnetic axis that is inclined to the rotation axis, so that the field is observed from varying aspect with rotation.Atomic diffusion gives rise to surface abundance anomalies that can reach a million times that of normal stars for some rare earth elements.Those enhanced abundances are concentrated in surface patches, usually referred to as spots, that are closely associated with the magnetic poles, giving rise to rotational variations in the spectral line strengths (hence abundances) and in brightness.These stars are known as α 2 CVn stars; they constitute about 10 per cent of all main-sequence stars in that spectral type range.
A notable characteristic of the α 2 CVn stars is that the surface spots are stable over time-scales of at least decades, in strong contrast to lower main-sequence spotted stars for which those spots can evolve on time-scales of days.The Sun ⋆ E-mail<EMAIL_ADDRESS>is the prime example of this.This stability in the Ap stars means that the rotational light curves can be used to determine the rotation period to high precision.While the spots are stable, they have a wide variety of surface configurations, often giving rise to non-sinusoidal light curves.When the rotational inclination, i, and the magnetic obliquity, β, of the dipolar field sum to greater than 90 • , both magnetic poles are seen; also, when there are spots associated with both poles, the rotational light curve has a double-wave.
The rapidly oscillating Ap (roAp) stars are a subset of the magnetic Ap stars that show high radial overtone p mode pulsation with periods in the range 4.7 − 23.6 min (see Holdsworth et al., 2024, in press).The spectral types of the roAp stars range from early A to late F with effective temperatures of 6500 ≤ T eff ≤ 8800 K.They partially overlap with the δ Sct stars in the HR Diagram, but differ from those because of the impact of the magnetic field on the pulsations.The roAp stars generally pulsate in nonradial, zonal dipole and quadrupole modes (l = 1, 2; m = 0) with the pulsation axis lying close to the magnetic axis.This gives rise to oblique pulsation where the pulsation mode is seen from varying aspect as the star rotates.That provides information on the mode geometry, hence constrains mode identification, which in many other stars can be problematic.Mode identification is requisite for the application of asteroseismic modelling.
The oblique pulsator model was introduced by Kurtz (1982) and further developed and improved by Shibahashi & Saio (1985), Shibahashi & Takata (1993), Takata & Shibahashi (1994, 1995), Bigot & Dziembowski (2002) and Bigot & Kurtz (2011), among others.For pulsation modes that can be described by single spherical harmonics, the pulsation amplitude and phase changes that are observed over the rotation cycle allow the oblique pulsator model to give constraints on i and β, thus on the pulsation geometry, hence identifying the mode.Two well-studied cases, HR 3831 and HD 99563, pulsate in slightly distorted dipole modes for which the rotational pulsation amplitude and phase modulation provide good constraints on the pulsation geometry (Kurtz et al. 1990, Handler et al. 2006).
Observations show that in other roAp stars the modes are often more strongly distorted.Saio (2005) made a nonadiabatic analysis of nonradial pulsation modes in the presence of a dipole field for roAp stars, finding that the dipole and quadrupole modes distorted by the magnetic field are most likely to be excited.Several cases of strongly distorted quadrupole modes were studied by Holdsworth et al. (2018, and references therein) and modelled with the method of Saio (2005).This was particularly successful in explaining the flattening of the pulsation phase curve as a function of rotation phase.
However, the models are not fully capable of explaining the distortion of the pulsation modes, and further complications and challenges to the oblique pulsator model, as applied to the roAp stars, have arisen.In particular, Kurtz & Holdsworth (2020) showed that the geometry deduced for the dipole pulsation mode in the roAp star HD 6532 is dramatically different for observations made in Johnson B and for those made with the red-dominated bandpass of the Transiting Exoplanet Survey Satellite (TESS) mission.Since observations at these different wavelengths probe to different atmospheric depths because of differences in opacity, this suggests strong changes in pulsation geometry as a function of depth.In a high-resolution spectroscopic study of the distorted dipole mode in HR 3831, Kochukhov (2006) first pointed out that the pulsation geometry inferred using the oblique pulsator model is dependent on the depth of the observations.
Hence the roAp stars continue to present challenges to our understanding of their oblique pulsation.This is particularly true for the stars with the most distorted mode geometries.This has recently become relevant in the context of the tidally tilted pulsators, where tidally distorted zonal and sectoral dipole modes have been observed (Jayaraman et al. 2022;Rappaport et al. 2021;Fuller et al. 2020;Kurtz et al. 2020;Handler et al. 2020).Thus the study of oblique pulsation of distorted nonradial modes has expanding applications.Therefore, in this work we present the most distorted dipole mode yet observed in an roAp star.Extreme examples provide the strongest challenges to theory.
HD 12098
HD 12098 has only one published spectral classification, which is F0 from the original HD catalogue.However, its Strömgren colours and indices are characteristic of cool Ap stars; b − y = 0.191, m1 = 0.328, c1 = 0.517, Hβ = 2.796 (Olsen 1983), from which the indices δm1 = −0.122and δc1 = −0.255can be calculated from the calibration of Crawford (1979).Based on these indices, in 1999 the star was tested for pulsation under the Nainital-Cape Survey project (Ashoka et al. 2000, Martinez et al. 2001, Joshi et al. 2006, 2009, 2016) using 10-s integrations through a Johnson B filter on the 1.04-m Sampurnanand telescope of the Aryabhatta Research Institute of Observational Sciences (ARIES; at that time called the Uttar Pradesh State Observatory).Martinez et al. (2000) found it to be an roAp star with a pulsation frequency of 189.2 d −1 (2.19 mHz; P = 7.6 min).Girish et al. (2001) then obtained further data from Mt. Abu Observatory, India, and found the star to be multiperiodic with a dominant frequency of 187.82 d −1 (2.17 mHz).They also found rotational variation, but could not discriminate between periods of 1.2 d and 5.5 d.That ambiguity was addressed by Wade et al. (2001) and then resolved by Ryabchikova et al. (2005) who found a rotation period of 5.460 ± 0.001 d from a study of the mean longitudinal field, which varies almost sinusoidally between +2000 G and −500 G with that period.That shows that both magnetic poles are seen over the rotation cycle.
HD 12098 DATA AND ANALYSIS
HD 12098 was observed at 120-s cadence by the TESS mission in Sector 58 in late 2022.We have used the PDCSAP (Presearch-Data Conditioning Simple Aperture Photometry) data to study the rotation and pulsation variations of this star.The data span 27.72 d and comprise 19475 data points.No outliers were removed.
The rotation period
Fig. 1 shows the Sector 58 light curve in the top panel showing a clear double wave.This is consistent with polar spots and the longitudinal magnetic field curve of Ryabchikova et al. (2005).The second panel shows the first two harmonics of the rotational light variation in an amplitude spectrum calculated using the Discrete Fourier Transform algorithm of Kurtz (1985).The rotation period determined from those harmonics is 5.4815 ± 0.0002 d, which is significantly different to that determined from the magnetic variation by Ryabchikova et al. (2005) of 5.460 ± 0.001 d.
After fitting 10 harmonics of the rotation frequency, the third panel of Fig. 1 shows significant peaks that may indicate some g modes.One of these is not fully resolved from the second harmonic of the rotation frequency, hence may have perturbed the value of the rotation period determined.Panel 4 shows a 5σ peak at 11.819 d −1 that is potentially from a low-overtone p mode.Because of this possible perturbation of the rotation frequency by an unresolved g mode frequency, we adopt the rotation period (frequency) of Prot = 5.460 d (νrot = 0.18315 d −1 ) of Ryabchikova et al. (2005) for the pulsation analysis.Pre-whitening by a 10-harmonic series with this rotation frequency provides a good fit to the data.showing the double-wave rotational variation typical of the α 2 CVn stars.Panel 2: A low-frequency amplitude spectrum showing the first two harmonics of the rotation frequency.Panel 3: A lowfrequency amplitude spectrum after pre-whitening a 10-harmonic fit for the rotation frequency.There are significant peaks visible that may be from g modes.One of these is not fully resolved from the second harmonic of the rotation frequency.Panel 4 shows a 5σ peak that is potentially from a low-order p mode.Note the changes of ordinate scale on the panels.
The pulsations
A high-pass filter was used to remove the low-frequency rotational frequency and its harmonics, the (probable) g mode and p mode frequencies, and instrumental variations so that the noise in the amplitude spectrum is white.This gives the best uncertainty estimates for the derived roAp pulsation frequencies, amplitudes and phases.The top panel of Fig. 2 shows a section of the amplitude spectrum in the range of the roAp pulsation frequencies where 6 significant peaks can be seen, four of which form a quadruplet split by exactly the rotation frequency.The middle panel shows the amplitude spectrum of the residuals after those 6 frequencies were prewhitened.The bottom panel shows a higher resolution view of the quadruplet.Table 1 gives the frequencies derived from a least-squares fits of the quadruplet with a forced splitting equal to the rotation frequency, and the other two pulsation frequencies.
Fig. 3 shows the rotational light variations and pulsation amplitude and phase as a function of rotation phase.The time zero point, t0 = BJD 245 9893.80, was chosen to be the time of maximum rotational brightness.The pulsation amplitude and phase were calculated from the frequency quadruplet, taking ν2 = 183.34905d −1 (2.1221 mHz) to be the pulsation frequency, and the other three frequencies, ν1, ν3, and ν4 to be rotational sidelobes generated by oblique pulsation.A noisefree time series was generated for this frequency quadruplet as given in Table 1, sampled at the times of the actual observations.The pulsation amplitudes and phases were then derived by least-squares fitting of the pulsation mode frequency, ν2, to sections of the generated data 0.05-d long, which is just over 9 pulsation cycles.Those were then plotted as a function of rotation phase.The choice of ν1 or ν3 as the pulsation frequency generated pulsation phase plots, such as the bottom panel of Fig. 3, that had strong slopes, indicating that those are not the pulsation frequency.
Discussion of the pulsation frequencies
Several characteristics of the curves in Fig. 3 are notable.1) The pulsation phase changes by π radians at the times when the pulsation amplitude goes to zero (or nearly so).This is characteristic of an oblique dipole pulsation when the pulsation node crosses the line of sight.2) One pulsation pole dominates for about 30 per cent of the rotation period and the other for about 70 per cent.This is consistent with the longitudinal magnetic curve of Ryabchikova et al. (2005), which shows the longitudinal magnetic field to be negative for about 30 per cent of the rotation and positive for the other 70 per cent.
3) The double-humped pulsation amplitude maximum between rotation phases 0.3 − 1.0 shows that the dipole mode is heavily distorted on this magnetic hemisphere of the star.The corresponding pulsation phase is also significantly distorted.4) The pulsation amplitude and phase variation on the other magnetic hemisphere does not appear to be distorted.5) The time of brightest light in the rotation curve coincides with the time of one pulsation minimum.This suggests that the star is brightest around the pulsation equator and dimmest near the poles, as is characteristic of α 2 CVn stars. 1 However, the pulsation mode is so distorted from a simple dipole, that this is a conjecture.6) The mode frequency separation ν6 − ν5 = 2.54 d −1 (29 µHz) is plausibly half the large separation, suggesting that these two modes are of alternatively even and odd degree, ℓ, and that the large separation is about ∆ν0 ∼ 60 µHz.7) The other mode frequency separation ν5 − ν2 = 13.350d −1 (155 µHz) is then about 2.5 ∆ν0, so that some intermediary modes are not excited to observable amplitude.This is seen in some other roAp stars, particularly, e.g., HD 217522 (Medupe et al. 2015).
The rotational variations in the mean longitudinal magnetic field shown in Figure 2 of Ryabchikova et al. (2005) shows that pulsation maximum occurs at the time of positive magnetic extremum.The magnetic ephemeris of Ryabchikova 2005) cannot be meaningfully extrapolated across the nearly 20-yr time gap between the magnetic measurements and the TESS photometry to compare the time of pulsation maximum in the TESS data with then extrema of the magnetic field.Interestingly, Figure 2 of Ryabchikova et al. (2005) shows that the mean longitudinal magnetic field of HD 12098 is negative for 30 per cent of the rotation period and positive for 70 per cent.Those are the same percentages of the rotation period for which we see the two pulsation amplitude maxima in Fig. 3.We therefore conclude that the first pulsation maximum occurs when the negative magnetic hemisphere is observed, and the second, more complex maximum occurs when the positive magnetic hemisphere is visible.As with other roAp stars, it is likely that the pulsation amplitude maxima occur at the times of magnetic extrema, as appears to be the case in Figure 2 of Ryabchikova et al. (2005) for the Johnson B observations of Girish et al. (2001) (see the next section).New contemporaneous magnetic and photometric observations are needed to confirm these reasonable conclusions.
There are no systematic studies of the pulsation amplitude and phase as a function of rotation phase, as shown in Fig. 3, compared to the rotational light curves in roAp stars.With TESS data, such a study would be useful.
Comparison with ground-based data
Observing through a Johnson B filter, Girish et al. (2001) obtained ground-based observations and found four pulsation Table 1.A linear least-squares fit of the frequencies derived from the Sector 58 data for HD 12098.The zero point for the phases is t 0 = BJD 2459893.80,which matches the time of magnetic maximum predicted from the ephemeris of Ryabchikova et al. (2005).The second frequency of the quadruplet, ν 2 = 183.3491d −1 (2.1221 mHz) is taken to be the mode frequency from a distorted dipole.The other two frequencies are assumed to be from independent pulsation modes, for which there is no sign of rotational sidelobes as expected for oblique pulsation; those, if present, could be lost in the noise.The frequencies ν 1 to ν 4 are over-specified to 5 decimals so that it can be seen that they are exactly equally split by 0.18315 d −1 (Prot = 5.460 d 2001) data (see Fig. 5 in Section 3 below).As some roAp stars are known to have modes that show amplitude variation on time scales as short as days, and others have been observed to change modes on longer time scales, this may be the explanation for the different frequency ranges found in these two studies.Alternatively, it is known that the pulsation amplitudes and phases in roAp stars vary strongly both as a function of atmospheric depth and over the surface as seen in the spectral lines of elements trapped in abundance spots, usually associated with the magnetic poles (see, e.g., Kurtz et al. (2006) and Freyhammer et al. (2009) for graphic examples).A theoretical study of magneto-acoustic modes as a function of atmospheric depth by Quitral-Manosalva et al. (2018) provides insight into how the acoustic and Alfvén components of the modes vary throughout the line-forming layers of the observable atmosphere.It thus seems possible that mode observed by Girish et al. (2001) in Johnson B may have lower amplitudes, or be undetectable, in the deeper region observed by TESS with its red bandpass.
Yet unexplained complexity in the roAp pulsation modes and in the application of the oblique pulsator model to those stars gives rise to uncertainty in the inference of geometrical information about those modes.Two examples are the suggested discovery of modes with two different pulsation axes in the roAp star KIC 10195926 (Kurtz et al. 2011), and the finding that the pulsation geometry inferred from application of the oblique pulsator model is very different in ground-based Johnson B data and TESS red data in the roAp star HD 6532 (Kurtz & Holdsworth 2020).
The measured photometric amplitudes of pulsation modes in roAp stars vary significantly depending on the bandpass used to make the observations (Medupe & Kurtz 1998) with amplitudes dropping from the blue to the red.This is simply a consequence of the spectral energy distribution and the photometric amplitude being primarily the result of temperature variations.Cunha et al. (2019) compared photometric amplitudes measured through Johnson B and TESS red bandpass for some roAp stars and found that measurements in Johnson B typically show about 6 times the amplitude of measurements in TESS red.However, there is also an atmospheric depth effect, since photometric observations in different bandpasses sample different atmospheric depths, on average.Given the strong dependence of pulsation amplitude on atmospheric depth, it is not possible to conclude whether HD 12098 has changed modes between the time of the ground-based Johnson B observations and the TESS observations, or whether the detected modes have very different amplitudes when observed through different bandpasses.Simultaneous observations in multiple bandpasses, including Johnson B, and with TESS are needed to discriminate between these possibilities.
MODELLING
In this section we model the pulsational amplitude and phase modulations against rotational phase of the main p mode pulsation of HD 12098 (middle and bottom panels of Fig. 3).We obtain these modulations by integrating the eigenfunction of the p mode on the visible hemisphere at each rotational phase for an assumed set of angles β and i; the angle between rotational/magnetic axes and the inclination angle of the line-ofsight against the rotation axis, respectively (Saio & Gautschy Table 2. Adopted parameters of HD 12098 Parallax = 6.833 ± 0.022 mas, T eff = 7600 K 1) V = 7.97 mag, E(B − V ) = 0. , BC = 0.
The pulsational eigenfunctions are calculated by taking into account the effect of a dipole magnetic field (Saio 2005) with strength specified by the field strength Bp at the magnetic poles.Non-adiabatic pulsation variables (assumed to be axisymmetric to the magnetic axis neglecting rotation effects) are represented as a sum of the terms proportional to the Legendre function P ℓ j (cos θ) with ℓj = ℓ0 + 2j where j = 1, 2, . . .jmax (ℓ0 = 0 for even modes and ℓ0 = 1 for odd modes).We set jmax = 12 in this study.
Although even and odd modes are independent of each other, there is no pure dipole, or quadrupole mode since the pulsation energy is distributed among other values of ℓj because of the effects of the magnetic field.However, for convenience, we call a mode a distorted dipole mode if the ℓj = 1 component is dominant, or a distorted quadrupole mode if the ℓj = 2 component is dominant.
First, we choose a stellar model for the pulsation analysis.Table 2 lists observational parameters of HD 12098 adopted from the literature from which we also calculate the bolometric luminosity.Taking into account these parameters, we chose a M = 1.75 M⊙ model having log(L/L⊙) = 1.034 and log T eff (K) = 3.875 from the evolutionary models computed with the initial composition (X, Z) = (0.70, 0.02) in the fully ionised layers.The evolutionary models, common to our previous works (e.g.Holdsworth et al. 2016Holdsworth et al. , 2018;;Shi et al. 2021), were based on assumptions similar to the polar model of Balmforth et al. (2001), in which the helium mass fraction is depleted to 0.01 in the layers above the second He ionisation zone to the surface, and convection in the envelope is neglected assuming a strong magnetic field to stabilise the outer layers.
Fig. 4 compares a dipole pulsation model (ℓ0 = 1) with the observed pulsational amplitude and phase modulations of HD 12098.For this model we have chosen angles β = 30 • and i = 73 • to match the rotation phase at which sudden changes of the pulsation phases are seen.We have normalised the model amplitude to approximately fit the (secondary) maximum at a rotation phase of 0.15.At this phase the rotation axis, the line-of-sight and the magnetic axis are on a meridional plane with the visible magnetic pole at the angle (180 • − β − i) from the line of sight.During the range of rotation phase between 0.0 to 0.3 (when the negative magnetic pole is seen in the lower part of the visible hemisphere) the amplitude and phase variations reasonably agree with those observed.
During the rotation phases between 0.3 and 1.0, the positive magnetic pole is visible.The angle between the visible pole and the line of sight attains minimum (i − β) at the rotation phase 0.65, when the predicted pulsation amplitude is maximum.However, the predicted amplitude during this range of rotation phase deviates considerably from the observed one.The observed amplitude variation is asymmetric with a bump and tends to be lower than the model prediction.In this range of rotation phase, the other positive magnetic pole is visible with angles between (i − β) and π 2 rad from the line of sight.The cause of the deviation, and in particular the cause of the broad bump in amplitude, is not clear.The pulsation phase of our model is constant during this phase interval, while the observed one gradually decreases, which is probably caused by the effect of rotation (neglected in this model) as discussed in Bigot & Kurtz (2011).
The deviation of the predicted amplitude variation from the observed one results in a lack of the fourth frequency component in the Fourier spectrum shown in the top panel of Fig. 4. Our model always predicts a triplet-dominated frequency spectrum for an odd-mode oscillation even if it is significantly affected by contributions from components with ℓj ≥ 3, because such contributions are mostly cancelled through the surface integration as discussed in Saio & Gautschy (2004).For this reason, the Fourier amplitude at (ν − ν2)/νrot = 2 detected in the pulsation of HD 12098 cannot be explained by an axisymmetric odd mode.Some non-axisymmetric variation is needed for the Fourier component.In this respect, the bump in the amplitude modulation of HD 12098 might be caused by a non-axisymmetric (with respect to the magnetic axis) phenomenon and might be related with the Fourier component at (ν − ν2)/νrot = 2.
Our model has a p-mode large separation of 5.47 d −1 (63.3 µHz) similar to the observational value ∼60 µHz.panel) in the frequency range of HD 12098 (lower panel).This figure indicates that the observed highest frequency ν6 corresponds to the dipole mode of order 36, while ν5 is the quadrupole mode of order 35.In addition, it is interesting to note that the frequency 189.2 d −1 (2.19 mHz) obtained by Martinez et al. (2000) is close to a dipole mode of order 34, one order higher than the order 33 for ν2, indicating that the most strongly excited modes seem to shift with time, or that different modes are detected at different atmospheric depths (see section 2.4 above).We do not know why the frequencies of highest amplitude found by Girish et al. (2001) do not match any of our model mode frequencies.We note, however, that these frequencies are all above the acoustic cut-off frequency, ≈ 135 d −1 (1.56 mHz), as in a number of other roAp stars (see Figure 12 of Holdsworth et al. 2018).The excitation mechanism for these high-frequency pulsations in roAp stars is not known.
Finally, we find that quadrupole modes (even modes (ℓ0 = 0) with the maximum amplitude at ℓj = 2) affected with similar magnetic fields are not good for HD 12098.This is because the pulsational phase variations are significantly suppressed just as in our previous cases presented in, e.g., Holdsworth et al. (2018) and Holdsworth et al. (2016).
CONCLUSIONS
HD 12098 is an roAp star with a dipolar magnetic field and a rotation period of Prot = 5.460 d.Using data from Sector 58 of the TESS mission, we have shown that there are rotational light variations consistent with spots, or patches, of enhanced abundances near to each of the magnetic poles.In these senses, HD 12098 is a typical α 2 CVn star.
HD 12098 is a known roAp star (Girish et al. 2001) for which the TESS data give far more insight to the pulsations than previous ground-based data.The star pulsates with several modes in the range 183−200 d −1 (2.12−2.31mHz).While this encompasses the same frequency range found by Girish et al., the star has either changed modes, or the observations through the Johnson B bandpass and the TESS red bandpass sample different depths with different mode visibility.To discriminate between these possibilities requires simultaneous Johnson B and TESS observations.
The interesting new discovery for HD 12098 is that its principal pulsation mode is a dipole mode that is far more distorted than has been observed in other roAp stars.Using models that have reasonably explained the mode distortion in other roAp stars, we are unable to account for the doublehumped pulsation amplitude modulation between rotation phases 0.3 − 1.0.This characteristic has not been observed in any other roAp star.On the dipole pulsation mode hemisphere seen with best aspect during those 0.3 − 1.0 rotation phases, the mode is strongly distorted from a simple dipole.
Interestingly, recently oblique pulsators have been discovered in close binary stars where dipole pulsations are strongly trapped in one hemisphere, or another -the tidally tilted pulsators and so-called single-sided pulsators (Handler et al. 2020, Kurtz et al. 2020, Fuller et al. 2020, Rappaport et al. 2021, Zhang et al. 2023).Whether the clear asymmetry of the pulsation in the dipole hemispheres of HD 12098 has any relation to this is unknown.
HD 12098 also shows a low-overtone p mode and possibly some g modes.These, too, are unusual in roAp stars and theoretically unexpected (Saio 2005) for this star's magnetic field strength of over a kG (Ryabchikova et al. 2005).This star calls for simultaneous multi-colour photometric observations, and it presents new, challenging behaviour to theory.
Figure 1 .
Figure 1.Panel 1: The Sector 58 TESS light curve for HD 12098showing the double-wave rotational variation typical of the α 2 CVn stars.Panel 2: A low-frequency amplitude spectrum showing the first two harmonics of the rotation frequency.Panel 3: A lowfrequency amplitude spectrum after pre-whitening a 10-harmonic fit for the rotation frequency.There are significant peaks visible that may be from g modes.One of these is not fully resolved from the second harmonic of the rotation frequency.Panel 4 shows a 5σ peak that is potentially from a low-order p mode.Note the changes of ordinate scale on the panels.
Figure 3 .
Figure 3. Panel 1: The Sector 58 TESS light curve phased; the data have been binned by 10.Panels 2 and 3: The pulsation amplitude and phase as a function of rotation phase.These have been calculated from the frequency quadruplet only, with ν 2 = 183.34905d −1 as the mode frequency.
Figure 4 .
Figure 4. Comparison of observed Fourier amplitude (top), rotational modulations of the pulsation amplitude (middle) and the phase modulation (bottom) of HD 12098 with a 1.75 M ⊙ dipole pulsator model at a 2.5 kG magnetic field.
Fig. 5 compares model frequencies of distorted dipole and quadrupole p modes at Bp = 2.5 kG (upper
Figure 5 .
Figure 5.Comparison of observed frequencies with dipole and quadrupole modes of our model at Bp = 2.5 kG.The integer along each frequency of the dipole (ℓ = 1) modes indicates the radial order.The blue dashed lines show the frequencies found by Girish et al. (2001) from ground-based Johnson B data. ).
It is clear from comparison with Fig. 2 that peaks in this frequency range are not present in the TESS Sector 58 data, although ν6 = 199.2402d −1 (2.3060 mHz) is close.Similarly, the frequency quadruplet seen in Fig. 2 in the range 183.166 − 183.715 d −1 (2.120 − 2.126 mHz) is clearly not present in the Girish et al. ( | 6,928.6 | 2024-02-09T00:00:00.000 | [
"Physics"
] |
Two-dimensional solid state gaseous detector based on 10B layer for thermal and cold neutrons
Two-dimensional solid state gaseous detector for thermal and cold neutrons is created. The detector has active area of 128 x 128 mm2, 10B neutron converter, and gas chamber with thin windows. The resistive charge-division readout is applied to determine the neutron position. The detector was tested using W-Be photoneutron source at the Institute for Nuclear Research, Moscow. The detector efficiency is estimated as ∼4% at neutron wavelength λ = 1.82 Å and 8% at λ = 8 Å. The efficiency of the background detection was less than 10-5 of that for thermal neutrons. The resulting pulse height resolution and the spatial resolution are estimated as ∼15% and ∼2.5 mm, respectively.
Introduction
A thermal and cold neutron positional sensitive detectors are key devices of small angle neutron scattering (SANS) setup for studying structure and sizes of polycrystalline object in nanotechnology and biology [1,2]. Such detectors are also used as neutron flux monitors [3]. Recently, SANS has been applied for an investigation of charging and discharging cycle of Li-ion battery in situ [4].
Usually, a detectors based on 3 He gas [5] operate at high pressure which require to use a thick entrance window (~1 cm), as the leakage of this expensive gas leads to an efficiency decrease. But a neutron flux reduction in thick window does not permit to apply detector at wavelengths more than 8 Å. An alternative detector with a separation of functions of neutron converting to charged particles in 10 B layer and their further detecting in the ion chamber solves not only the problem of the gas mixture stability but permits to use cheap gases under standard conditions. The localization of an interaction point of neutron in the 10 B layer plane gives a possibility to obtain the good spatial and time resolution. The detector with a thin entrance window, in contrast with a gas filled one under extreme pressure, could be used in experiments with cold neutrons.
Recently, a prototype of such detector based on three 1.3 μm layer of 10 B [6] has been created. A detector with six layers of 1.4 μm 10 B is proposed [7]. Its efficiency is estimated as 21% at λ = 1.82 Å. An advantage of our detector is the thin entrance window (3mm) and a long time operation which is provided by the protective semiconducting polymer layer on the boron-aluminium cathode. To avoid a mutual diffusion the boron layer is separated from aluminium by a polyimide layer. n + 10 B→ 11 B*→ 4 He (1776 keV) + 7 Li (1013 keV), BR= 6.4%. Sum of cross section of both reactions up to energy of 1 keV can be estimated by the formula:
Operating principle of detector
where σ 0 =3837 b, λ 0 =1.82 Å and E 0 = 0.025 eV. The 4 He and 7 Li nuclei are detected in positional sensitive gaseous chamber. The chamber front cathode is founded on the glass disk of 2 mm thickness. Despite the fact that boron is semiconductor his surface quality is not suited to be used as the chamber electrode as was proposed in [6]. Therefore, the front cathode contains 3 μm 10 B layer coated by a polyimid layer and 0.1 μm aluminum layer coated by protective semiconducting polymer upon the glass disk.
Multiwire and multipad ion chamber
A design of multiwire and multipad ion gas chamber is shown in figure 1. A housing of the detector consists of front and rare covers of 20 mm duralumin and a side cylindrical wall of stainless steel. The housing is sealed hermetically with a vacuum rubber. Each cover has a 3 mm entrance window for neutrons. There are placed inside the housing the following elements: 2-mm glass disk with 3 μm 10 В layer and 0.1 μm aluminium layer. The disk serves as a front cathode.
Anode of 64 parallel 20 μm wires of tungsten rhenium coated by gold with 2 mm spacing. Rare cathode of 1 mm fiberglass with the 63 copper insulated pads of 2 mm width.
A distance between the anode and each cathode is equal to 2 mm. Wires and pads are placed perpendicular to one other. The electrode assembly is enclosed in fluoroplastic housing. 1front and rare housing covers; 2cylindrical side wall of housing; 3windows; 4glass disk; 5boron layer; 6aluminum layer; 7wire anode of X coordinate; 8pad rare cathode of Y coordinate; 9fluoroplastic housing of detection assembly.
A both anode wires and rare cathode pads are connected in series to each other through a 20 Ω resistor. The both ends of the anode resistor chain through the high voltage capacitors are connected to the X 1 and X 2 preamplifiers for measuring X coordinate. The both ends of the rare cathode resistor chain are connected to the Y 1 and Y 2 preamplifiers for measuring Y coordinate. As soon as the pulse height from any Y 1 or Y 2 preamplifier output exceeds a threshold of the discriminator (CAEN C808) the trigger occurs. The trigger starts a conversion in the amplitude-to-digital converter (ORTEC AD811). Coordinates X and Y are determined from the X 1 , X 2 , Y 1 and Y 2 pulse heights. Accumulated data from ADC is written on computer disk. The front cathode is connected to preamplifier and discriminator. A discriminator pulse could be used as a trigger and stop signal in a time-of-flight measurement. The positive anode voltage is set from 620 V to 920 V for the gas mixture Ar+25%CO 2 + 0.3%CF 3 Br under pressure of 1.05 bars. An internal volume of detector is equal to 3.5 liters.
Experimental data analysis
Detector was tested using a neutron source. Tungsten beryllium photoneutron source (IN-LUE) was created on the base of industrial 8 MeV electron linac LUE-8, tungsten electron-gamma convertor, photoneutron beryllium target and polyethylene moderator of fast neutrons. The maximal flux density of thermal neutrons in the center of the source is about of 10 7 cm -2 s -1 . Detector is located at a distance of 6 m from the source at an angle of 60º relative to the electron beam axis.
The two-dimensional diagram of pulse heights X 1 and X 2 at the anode voltage 700 V is shown in figure 2. An event is registered if any pulse height Y 1 or Y 2 exceeds a preset threshold. It means that the secondary charged particle has produced ionization in both gaps of the chamber. The normalized pulse height sum spectrum is shown in figure 3. The pulse height resolution as a full width at half maximum (FWHM) is equal 15% at 700 V.
The coordinate spectrum is shown in figure 4. The periodical structure in the spectrum at 700 V can be explained by a variation of electrical field near and between wires. A fact of observation of the structure leads to an estimation of spatial resolution ~ 2.5 mm. One can see from figure 5 that the form of pulse heights X 1 and X 2 at 800 V becomes broader. A height pulse corresponds to ionization loss in the gas gap. The X 2 height pulse and the simulated ionization loss spectrum are shown in figure 6. Comparing the measured and simulated spectra we can estimate the energy threshold as ~0.2 MeV and maximum position as ~0.55 MeV. Thus, the periodical structure in coordinate spectrum in figure 7 is almost disappeared. Taking into account the geometry of the detector, its solid angle, and neutron flux of the source an estimation of the detector efficiency ~ 4% is obtained. Simulated efficiency for two energy thresholds is presented in table 1. Figure 7. Coordinate spectrum at 800 V.
Then a cadmium shield of 2 mm thickness is installed in front of detector so that the left edge of the detector sensitive area was open but the right edge was closed on axis X. The opened width of detector area was 75 mm. The pulse height X 1 and X 2 2D-diagram at 650 V is shown in figure 8. It can be seen that the main part of events (99.99%) is located inside the dashed ellipse. The X coordinate spectrum for events located within the dashed ellipse is shown in figure 9. The shadow of the cadmium shield is observed on the right side. On the contrary, the coordinate spectrum for 0.01% events out of the dashed ellipse is shown in figure 10. They are related to the 4 He and 7 Li nuclei which produce a long track and move at small angle to the anode wire plane. Counting rate without Be-target (only electrons and gamma) was less than 0.001% of that with Be-target (thermal neutrons).
Conclusions
Position-sensitive thermal and cold neutron detector with 3 μm sensitive 10 B layer and 128 x 128 mm 2 gas chamber was created and studied. Positions are determined by a division charge method. The detector efficiency is estimated as 4% to 8%. Ratio of background efficiency to thermal neutron efficiency is less than 10 -5 . Pulse height resolution is about 15% and X coordinate spatial resolution is estimated as 2.5 mm at 700 V for gas mixture Ar+25%CO 2 +0.3%CF 3 Br in standard conditions. | 2,168.6 | 2016-12-20T00:00:00.000 | [
"Physics"
] |
WolfPath: Accelerating Iterative Traversing-Based Graph Processing Algorithms on GPU
There is the significant interest nowadays in developing the frameworks of parallelizing the processing for the large graphs such as social networks, Web graphs, etc. Most parallel graph processing frameworks employ iterative processing model. However, by benchmarking the state-of-art GPU-based graph processing frameworks, we observed that the performance of iterative traversing-based graph algorithms (such as Bread First Search, Single Source Shortest Path and so on) on GPU is limited by the frequent data exchange between host and GPU. In order to tackle the problem, we develop a GPU-based graph framework called WolfPath to accelerate the processing of iterative traversing-based graph processing algorithms. In WolfPath, the iterative process is guided by the graph diameter to eliminate the frequent data exchange between host and GPU. To accomplish this goal, WolfPath proposes a data structure called Layered Edge list to represent the graph, from which the graph diameter is known before the start of graph processing. In order to enhance the applicability of our WolfPath framework, a graph preprocessing algorithm is also developed in this work to convert any graph into the format of the Layered Edge list. We conducted extensive experiments to verify the effectiveness of WolfPath. The experimental results show that WolfPath achieves significant speedup over the state-of-art GPU-based in-memory and out-of-memory graph processing frameworks.
Introduction
The demand for efficiently processing large scale graphs has been growing fast nowadays.This is because graphs can be used to describe a wide range of objects and computations and graph-based data structures are the basis of many applications [19,[22][23][24]26,30,36,46,48].Traditionally, motivated by the need to process very large graphs, many frameworks have been developed for processing large graphs on distributed systems.Examples of such frameworks include Pregel [28], GraphLab [25], PowerGraph [9] and GraphX [10].However, since developing distributed graph algorithm is challenging, some researchers divert their attention to design the graph processing system that handle large scale graphs on a single PC.The research endeavours in this direction have delivered the systems such as GraphChi [17], PathGraph [45], GraphQ [39], LLAMA [27] and GridGraph [51].However, these systems suffer from the limited degree of parallelism in conventional processors.GPU is renowned for its potential to offer the massive degree of parallelism in many areas [3,4,11,21,32,[42][43][44]49,52].Therefore, much research now resort to use GPU to accelerate the graph processing process.The exemplar GPU-based graph processing systems include Medusa [47], Gunrock [40], CuSha [16], Frog [35] and MapGraph [6].
Many of these graph processing frameworks employ iterative processing techniques [38,41,50].Namely, graph processing involves many iterations.Some iterative graph processing algorithms use the threshold value (e.g., in the PageRank algorithm) or the number of vertices/edges (e.g., in the Minimum-cut algorithm) to determine the termination of the algorithms.In these algorithms, the iteration count is known beforehand.
However, in iterative traversing-based graph processing, the algorithm is driven by the graph structure and the termination of the algorithm is determined by the states of vertices/edges.Therefore, these algorithms need to check the state of vertices/edges at the end of every iteration to determine whether to run the next iteration.In each iteration, either synchronous or asynchronous methods can be used to compute and update the values of vertices or edges in the graph.The processing terminates when all vertices meet the termination criteria.
The aforementioned termination criteria is application-specific (e.g, the newly computed results of all vertices remain unchanged from the previous iteration).The number of iterations is unknown before the algorithm starts.Such a termination method does not cause any problem on CPU-based graph processing systems, because checking the termination condition on CPU does not incur much overhead.On the other hand, termination checking is a time consuming process in GPU-accelerated systems.This is because all the threads in different thread blocks need to synchronize their decision at this point.However, current GPU devices and frameworks (CUDA [5] and OpenCL [37]) do not support synchronization among different thread blocks during the execution of the kernel.Therefore, to synchronize between different thread blocks, the program has to exit the kernel, and copy the data back to the host and use the CPU to determine whether the computation process is complete.This frequent data exchange between host and GPU introduces considerable overhead.
To address this problem, we present WolfPath, a framework that is designed to improve the iterative traversing-based graph processing algorithms (such as BFS and SSSP) on GPU.Our development is motived by the facts that: (1) the iterative graph processing algorithm converges when the computations on all vertices have been completed; (2) the iterative traversing-based graph processing on GPU requires frequent data exchange between GPU and host memory to determine the convergence point; (3) the maximum number of iterations needed to complete a traversing-based graph processing algorithm is determined by the graph's diameter (longest shortest path).
WolfPath has following features.First, a graph in WolfPath is represented by a tree structure.In doing so, we manage to obtain very useful information, such as the graph diameter, the degree of each vertex and the traversal order of the graph, which will be used by WolfPath in graph computations.Second, we design a layered graph structure, which is used by WolfPath to optimize GPU computations.More concretely, for all vertices in the same depth of the graph tree, we group all the edges that use these vertices as source vertex into a layer.All the edges in the same layer can be processed in parallel, and coalesced memory access can be guaranteed.Last but not least, based on the information we gain from the graph model, we design a computation model that does not require frequent data exchange between host and GPU.
The rest of the paper is organised as follows.Section 2 overviews the limitation of iterative graph processing technique used by the state-of-art graph processing frameworks.Section 3 presents the graph modelling and in-memory data structure proposed in this paper.Section 4 presents the details of the WolfPath framework, including the iterative processing model and the graph partition method when the graph can not fit into GPU.Experimental evaluation is presented in Sect. 5. Section 6 discusses related work.Finally, Sect.7 concludes this paper.
Motivation: the Limitation of Current Approach
Many parallel graph processing frameworks are based on the iterative processing model.In this model, the computation goes through many iterations.In each iteration, either synchronous (such as Bulk Synchronous parallel used by Pregel) or asynchronous (Parallel sliding window used by GraphChi) methods can be used to compute and update the vertex or edge values.The computation terminates when all vertices meet the application specific termination criterion (e.g, the results of all vertices remain unchanged).
In order to determine if all processes/threads meet the termination condition, the processes/threads need to communicate with each other.On CPU based parallel graph processing systems, the processes/threads can communicate through messaging pass-ing or shared memory.On GPU, the threads are organised in thread blocks and the communication can be divided into two types: intra-and inter-block communication.
Intra-communication refers to the communications between the threads within a block, which is achieved via shared memory or global memory in GPU.On the contrary, the inter-block communication is the communications across different thread blocks.However, there is no explicit support for data communication across different thread blocks.Currently, inter-block data communication is realized through the GPU global memory followed by a barrier synchronization in CPU [5,37].The barrier is implemented by terminating the execution of the current kernel and re-launching the kernel.
The reason for the lack of support for inter-block communications on GPU is as follows.On GPU, the number of thread blocks launched by an application is normally much larger than the number of Streaming Multiprocessors (SM).However, When a large number of threads try to communicate between different blocks, it can cause the deadlock.An example is given as follows to illustrate the deadlock issue.Suppose that there are 5 thread blocks and only 4 SMs and that each thread block will occupy all resources on a SM.Assume blocks 1-4 execute on 4 SMs.When synchronization occurs, blocks 1-4 will wait until block 5 finishes.However, block 5 will never be executed on any SM since all SMs are busy and there are no resources available.Consequently, the deadlock occurs.
Due to the lack of support for inter-block communications, implementing iterative graph computation algorithm on GPU is much more challenging than on CPU.To demonstrate this, let us first consider how the iterative computation is implemented on CPU.Algorithm 1 shows the high level structure of the iterative computation on CPU.The loop is controlled by the flag variable, which is set to be true at the beginning.Next, all processes/threads execute a user-defined function, compute(), and then invoke the update_condition function to check if the user-specified termination condition is met.Each process/thread has its own f lag variable, which is updated by update_condition function.The update_condition function returns false if the program reaches the termination condition and returns true if otherwise.The f lag variable is synchronised between all the processes/threads.If its value is f alse, the iteration terminates.
Algorithm 1 Iterative Computation on CPU
When executing the above code in parallel on a CPU-based system, the synchronization of the f lag variable can be easily achieved through shared memory or message passing.However, due to the fact that the current GPUs do not support synchronisation between different thread blocks, it takes more efforts to achieve the synchronization on GPU.The only solution that can ensure that the shared variable is properly synchronized across all thread blocks is to exit the kernel.To implement the iterative processing on GPU, many state-of-art graph processing frameworks use the following approach.The f lag variable is stored in the global memory of GPU.Each thread also has a local d_ f lag variable.If a thread meets the termination condition in the current iteration, it sets its own d_ f lag to false.Then d_ f lag is synchronised between all the threads within a thread block.One thread in each thread block updates the global f lag variable if the value of d_ f lag in this thread block is f alse.Next, the program exits the kernel and copies the value of f lag back to the host, which is used by the host program to determine whether another kernel should be launched (i.e., continuing to perform the computation).This technique is outlined Algorithms 2 and 3. Clearly, in order to decide whether to launch the next iteration, the value of f lag needs to be exchanged between host and GPU frequently.These operations incur a significant overhead.If the number of iterations needed to complete the graph computation is known beforehand, the exchange of f lag between host and GPU can be eliminated, which can potentially improve the performance.
Algorithm 2 Iterative processing kernel
We conduct the experiments to investigate the extra overhead caused by exchanging the f lag variable.Four real world graphs, which are listed in Table 1, are used in this experiment.We determine the diameter of the graph using the BFS algorithm implementated on CuSha framework [5].We record the computation time and the number of iterations it takes to complete the algorithm.We can prove that the number of iterations the iterative graph computation has to perform must be less than or equal to the graph diameter.Instead of using the f lag value to determine the termination condition of the graph computation, we terminate the computation when the number of iterations equals to the graph diameter.We re-run the modified program and record the computation time.The results are listed in Table 2.
As can be seen from Table 2, the computation time of modified program is much faster than the original version.We also noticed that the original program takes a fewer number of iterations than the modified version in some cases.This is because the computation converges fast in those cases.Therefore, using the graph diameter as the termination condition in those cases causes the extra overhead of performing unnecessary computations.In order to compare these two types of overhead, we mea- sured the average time for performing data copying and the average computation time per iteration in the experiments.The results are listed in Table 3: The results from Table 3 show that the data copy time is greater than the computation time by 90-1500 times.These results indicate that it is worth performing the excessive iterations, comparing with copying data.We observe another interesting point in Table 3.No matter which graph the algorithm is processing, only one integer (i.e., the value of the f lag variable) is copied between GPU and host.However, the average memory copying time per iteration is different for different graphs.This is because the synchronisation cost between thread blocks is different for different graphs.More threads are involved in synchronisation, the longer the synchronization takes.
All these results support our proposed strategy.Namely, it can improve performance to use the maximum number of iterations as the termination condition so as to eliminate the need of data copying between GPU and CPU.However, a question arises from the strategy: how to know the number of iterations needed for different graphs before the graph processing algorithm starts?This question motivates us to design a new graph representation that help determine the graph diameter and further develop the novel and GPU-friendly graph processing methods.
Graph Representation in WolfPath
In this section, we present the graph model to help determine the number of iterations when processing a graph on GPU iteratively.We also present the detailed data structure of our in-memory graph representation, which is designed in the way that can improve the coalescence in-memory access so as to achieve the higher performance.
Graph Modelling in WolfPath
The experimental results shown in last section suggest that using the number of iterations can improve the performance in graph processing.However, because graph processing algorithms are data driven, it is difficult to determine the number of iterations for different graph inputs before the program starts.Much research [14,15,18,28] have been conducted to tackle this problem.The research shows that when processing graph algorithms iteratively, the upper bound of the number of iterations is the diameter of a graph, i.e., the number of the nodes on the path corresponding to the graph diameter [31].
In WolfPath, we model the graph as a layered tree structure.That is, we first represent the graph as a tree-like structure, then group the vertices that in the same depth into one layer.By modelling the graph this way, the diameter of the graph is the distance from the root vertex to the deepest leaf vertex.
If some vertices in the same layer are connected, we duplicate these vertices in the next level.By duplicating these vertices, the vertex value updated in the current level can be sent to the next level.The reason for this design is as follow.The vertices in the current level and the next level form a group of edges.If a vertex is both source and destination of different edges, the calculated values of their neighbouring vertices may not settle (i.e., the calculated values are not final values of the vertices) after one iteration.Therefore, by duplicating these vertices in the next level, the updated value of their neighbouring vertices will be recomputed.
Based on the above description, given a graph G = (V, E), a layered tree T = (V t , E t ) is defined as follows.V t ⊆ V and E t ⊆ E. The root vertex of the tree, denoted by v rt ∈ V t , is the vertex which does not have in-edges.degree in (v) denotes the in-degree of vertex v. Then degree in (v rt ) = 0. 1b gives the tree structure of the graph shown in Fig. 1a.
Graph Data Structure in WolfPath
As shown in the previous research [16], one of the main factors that limits the performance of graph processing on GPU is the non-coalesced memory access.This is because most graphs have highly irregular structures.Hence, it is important to design a data structure for the graph storing in the GPU global memory so as to achieve the coalesced global memory access.Based on the layered tree structure, we design a layered edge list structure to store the graph in the memory.In this design, each level in the layered tree is represented by two arrays, source array and destination array, which are used to store the source and destination vertexes of each edge in that level respectively.The i-th entry in the source array and the i-th entry in the destination array form an edge in the level.We also create an array for each layer to store the updated values of the destination vertices in that layer.An example is shown in Fig. 2.
It provides the following benefits to use this structure to store a graph in memory.First, the consecutive threads can read/write the consecutive elements from/to the source and destination arrays in the global memory.Therefore the coalesced global memory access can be guaranteed.Second, because all the edges in each layer are independent to each other, we can process them in parallel.In addition, if one layer is too big to fit in the GPU global memory, we can split it into multiple smaller edge lists.Third, multiple layers can be combined together to fully utilise the computation power of GPU.We will discuss the last two advantages in more detail in a later section.Fig. 2 An example of layered edge list structure for graph in Fig. 1a memory and converts it into the CSR format.Next, it uses Algorithm 4 to build the layered tree, and then writes the layered tree back to a file stored in the disk.It is worth noting that this program is only run once for a graph.When processing a graph, WolfPath will first check if there exists a corresponding layered tree file.If the file exists, WolfPath will use it as the input.Otherwise, it will convert the graph into this new format.Algorithm 4 is used to build the layered tree.
Algorithm 4 Building layered tree for all neighbor u of v do 10:
16:
Q.enqueue(u) 17: Algorithm 4 is based on the breadth-first algorithm (BFS).The algorithm constructs an layered tree T for graph G (Line 2).It also creates Queue Q (Line 3).The algorithm starts with adding the vertex v rt into Q (Line 4).In order to quickly retrieve the level information of each node, the algorithm also maintains an array called node_level (Line 5), which is used to store the level information of each node.The size of this array is equal to the number of vertices (denoted by V ) in the graph G.This array is indexed by vertex id.The algorithm initialises all vertices in this array to 0 (Line 5).Then the algorithm performs the following steps iteratively (Line 6): it pops out the first vertex v from the queue (Line 7), and reads its level information from node_level (Line 8).∀e : v → u ∈ E (Line 9), if u / ∈ T (Line 10), or u has already been added into T but is in the same level as v (Line 14), the algorithm adds edge v, u in T (Line 11, 15).Next, the algorithm puts u in the queue by performing enqueue(u) (Line 12, 16), sets the level of u to the current level plus 1 (Line 13 and 17).This process repeats until Q becomes empty.
The Computation Model of WolfPath
The computation process of WolfPath is as follows.In WolfPath, the graph is processed layer by layer.For each layer, three operations are performed by GPU in parallel: the read, compute and write operations.For i-th level in the graph, the read operation reads the updated vertex value from the global vertex array.The compute operation acts on each edge and uses the data gathered from the read operation to compute the value for its edge/vertices.The write operation writes the updated value to the global vertex value array.So the updated values can be used in next iteration by the read operation.Hence, The computation model in WolfPath is synchronous and guarantees that all updates from a previous compute phase are seen only after the write operation is completed and before the next read operation starts.The whole process terminates when all the levels have been processed, that is, the number of iterations is equal to the number of levels of the graph.This process is outlined in Algorithm 5. for all vertexes in parallel in level i do 11: write vertexes value to update value array
Combined Edge List
By representing the graph in the layered tree format, we can gain the knowledge about how many layers there are in the graph and use it to determine the number of iterations needed for most graph algorithms.However, because WolfPath processes the graph layer by layer and the graphs have the nature of irregular data structure, such representation may cause the under-utilisation of GPU.
Each layer in the graph may have different numbers of edges.This number varies dramatically between levels.For instance, consider the graph shown in Fig. 1.The first layer has 2 edges only.on the other hand, the second and third layer have 4 and 6 edges respectively.Hence, the number of threads required to process the first level is far less than the computing power (i.e., the number of processing cores) of GPU.
To overcome this problem, we propose the combined edge list, which is a large edge list that is constructed from multiple layers of edge lists.The combined edge list is constructed in the following way.We first define a number M E, which is the minimum amount of edges to be processed by GPU.Then we add the number of edges level by level starting from the first level of the layered tree.Once the total number of edges is greater or equal to M E, we group these levels together and then re-count the edges from the next level.This process repeats until all levels have been processed.
The way of building the combined edge list ensures that each combined edge list is formed by consecutive edge lists from the layered tree.Hence, each combined edge list can be treated as a sub-graph of the original graph.Therefore, the number of iterations required to process a combined edge list is equal to the number of levels used to form this tree.So Algorithm 5 is re-designed as Algorithm 6.
8: parallel process all edges in C E L[i]
It is very important that we group the consecutive levels together to form a combined edge list.Because the result of the vertices in level i depends on those in level i − 1, the results from the previous iteration need to be passed to the next iteration, which can be easily achieved by grouping the consecutive levels together.If we group the non-consecutive levels into a list, passing results between different levels requires lot of data transferring between host and GPU memory, which will harm the performance.
There remains one question: how do we choose the value for M E? If the number of M E is too small, the resulting edge list may not fully occupy the GPU.On the contrary, if it is too large, the size of the edge list may exceed the size of the GPU memory.Since it is desired that the GPU is fully occupied, the maximum active threads can be used to determine the minimum number of edges per combined edge list.The maximum number of active threads is the number of threads that a GPU can run simultaneously in theory, which can be found by Eq. 1, where N sm is the number of multiprocessors (SMs), M W psm is the maximum number of resident warps per SM and the T pw is the threads per warp.
Out of GPU Memory Processing
Comparing with the host platform, the GPU has a limited memory space.The size of real world graphs may be from few gigabytes to terabytes, which are too large to fit in the GPU global memory.Therefore, in this section, we develop an out-of-GPUmemory engine that can process such large scale graphs.The general process of developing an out-of-GPU-memory engine is to first partition the graph into sub-graphs that can fit into GPU memory, and then process these subgraphs in GPU one at a time.Therefore, our first objective in designing such an engine is to achieve good performance in graph partitioning.Ideally, the performance of this process should be as close as possible to the performance of loading the graph into memory.The second objective is that after partitioning the graph, the graph framework has to deal with the frequent data exchange between GPU and host memory.Otherwise, the performance will take hit.
Based on the design of Layered Edge List, these two goals can be achieved.The process of partitioning the graph is similar to building the combined edge list.We start with the first layer and accumulate the vertices in the layers until the size of accumulated vertices becomes larger than the GPU globa memory.We group all the accumulated vertices as a sub-graph and start the accumulating process again from the current layer.The partitioning process is complete when we have processed all layers in the graph.
The complexity of such partitioning method is O N , where N is the number of layers in the graph.Given the fact that most real world graphs, especially the social network graphs, do not have a big diameter, the number of N will not be very large.
After partitioning the graph, each sub-graph is processed in order based on their positions in the graph.That is, the processing starts with the sub-graph that contains the first layer.The next sub-graph to be processed is the one that follows the first sub-graph.In addition, the updated values of the vertices in the last layer of the current sub-graph need to be passed to the next sub-graph.This can be achieved by retaining the updated vertex values in the global memory after the computation is finished.Therefore, when next sub-graph is loaded into the GPU global memory, the data needed by the subgraph is in the global memory.To process each sub-graph, we use the same method as that for in-GPU-memory processing.Combining multiple layers into a sub-graph enables us to fully utilise the GPU.
It is possible that the size of one layer is larger than the GPU memory. in this case, we can split this layer into multiple parts and compute one part at a time.This works because all the edges in a layer are independent to each other and it is therefore safe to partition a layer into multiple small chunks and process them separately.
Optimisation for GPU
When implementing WolfPath, we take advantage of the hardware architecture of GPU to optimise the performance.Specifically, WolfPath uses the shared memory to improve the performance of random access to the memory and performs computation and communication asynchronously.
Shared Memory
Writing the updated value into the updated value array will cause the random access to the global memory.To address this issue, we use the shared memory in GPU to store the vertex array, which can enhance the performance of random access, since the access to the shared memory is much faster than global memory.
The following method is used in WolfPath to use the shared memory effectively.The shared memory in GPU is shared between all the threads in a thread block.Therefore, the input edge list is split into small blocks in WolfPath, and a vertex array is constructed for each block.Each edge block is processed by one thread block.Multiple thread blocks are processed in parallel.In each thread block, edges are processed in parallel by the threads.
During the execution, the threads first fetch the updated vertex values into the shared memory of the block.The consecutive threads of the block read consecutive elements of the local vertex value array.Hence, reading requests are coalesced to the minimum number of memory transactions.After the computation, the threads first write the newly computed values into the shared memory, and then perform synchronisation between different thread blocks by writing the data from the shared memory to the global memory.
Asynchronous Execution
WolfPath asynchronously performs computation and communication.Specifically, it leverages the CUDA Streams and the hardware support such as Hyper-Qs provided by NVIDIAs Kepler to enable data streaming and computation in parallel.WolfPath creates multiple CUDA Streams to launch multiple kernels and overlap memory copying and computation in order to transfer data asynchronously.This is motivated by the fact that an edge list in WolfPath can be split into many sub-arrays.Each of these sub-arrays is independent to each other.WolfPath exploits this fact and does not move the entire edge list in a single copy performed by a CUDA stream.Instead, WolfPath creates multiple CUDA Streams to move these sub-arrays to the GPU.As the result, many hardware queues in GPU are used concurrently, which improves the overall throughput.
Experimental Evaluation
In this section, we evaluate the performance of WolfPath using two types of graph dataset: small sized graphs that can fit into the GPU global memory (called in-memory graphs in the experiments) and large scale graphs that do not fit (out-of-memory graphs).The size of a graph is defined as the amount of memory required to store the edges, vertices, and edge/vertices values in user-defined datatypes.
Small graphs are used to evaluate WolfPath's performance in in-memory graph processing against other state-of-art in-memory graph processing systems (e.g., CuSha [16] and Virtual-Warp-Centric [13], and the large graphs are used to compare WolfPath with other out-of-core frameworks that can process large graphs on a single PC (e.g., GraphChi and X-Stream).
The eight graphs listed in Tables 1 and 4 are publicly available.They cover a broad range of sizes and sparsity and come from different real-world origins.For example, Live-Journal is directed social networks graph.RoadNetCA is the California road network.orkut is an undirected social network graph.uk-2002 is a large crawl of the .ukdomains, in which vertices are the pages and edges are links.
We choose two widely used searching algorithms to evaluate the performance, namely Breadth First Search (BFS) and Single Source Shortest Paths (SSSP).
The experiments were conducted on a system with a Nvidia GeForce GTX 780Ti graphic card, which has 12 SMX multiprocessors and 3 GB GDDR5 RAM.On the host side, we use the Intel Core i5-3570 CPU operating at 3.4 GHZ with 32 GB DDR3 RAM.The benchmarks were evaluated using CUDA 6.5 on Fedora 21.All the programs were compiled with the highest optimisation level (-O3).
Comparison with Existing In-Memory Graph Processing Frameworks
In this section, we compare WolfPath with the state-of-art in-memory processing solutions such as CuSha [16] and Virtual Warp Centric [13].In the experiments, we use the CuSha-CW method, because this strategy provides the best performance in all CuSha strategies.Both CuSha and Virtual Warp Centric apply multi-level optimisations to the in-memory workloads.
We first compare the computation times among WolfPath, CuSha and VWC.We also list the breakdown performances in Fig. 5.In these experiments, the Data Transfer is time taken to move data from the host to GPU, the computation refers to the time taken for actual execution of the algorithm.We also recorded the number of iterations that each algorithm takes in different systems, these results are listed in Table 5.
As can be seen from these results, WolfPath outperforms CuSha and Virtual Warp Centric.Although WolfPath requires more iteration to complete the algorithm, the average speedup of WolfPath over CuSha is more than 100×, and 400× over VWC.This is due to the elimination of memory copy operations.Also, the performance of VWC is the worst among 3 systems, because VWC does not guarantee the coa- lesced memory access.On the other hand, with carefully designed data structures, both WolfPath and CuSha can access graph edge in a sequential manner, hence memory performance is much better.We also noticed that WolfPath takes more iterations than other implementations.This is because in iterative computation model, the computation can be converged very fast, but WolfPath is bounded by the graph diameter, which is the upper bound of the iteration count.However, as discussed in the previous section, compare to memory copy operation, computation is much faster.Therefore, WolfPath still outperforms other systems.
Comparison with GPU Out-of-Memory Frameworks
The results shown in the last section demonstrate WolfPath's performance in processing in-memory graphs.However, many real-world graphs are too large to fit in GPU memory.In this section, we examine the WolfPath's ability to process large graphs which cannot fit into GPU memory.To the best of our knowledge, the state-of-art GPU-based graph processing frameworks [6,16,47] assume that the input graphs can fit in the GPU memory.Therefore, in this work, we compare WolfPath (WP) with two CPU-based, out-of-memory graph processing framework: GraphChi (GC) [17] and X-Stream (XS) [33].To avoid disk I/O overhead in systems such as GraphChi and X-Stream, the dataset selected in the experiments can fit in host memory but not in GPU memory.
As shown in Figs. 6 and 7, WolfPath achieves an average speedup of 3000× and 4000× over GraphChi and X-Stream (running with 4 threads), respectively, despite its need to move the data between GPU and CPU.We also list the computation time and iteration counts of three systems in As can be seen from the Table 6, although WolfPath performs more iterations than GraphChi and X-stream, it still outperforms them.This performance improvement is due to the massive parallel processing power provided by GPU, while GraphChi and X-Stream are CPU-based and their degrees of parallelism are limited. 123
Memory Occupied by Different Graph Representation
From Fig. 5, we can see that VWC has the shortest data transfer time.This is because it represents the graph in the CSR format, which is memory efficient.However, in order to have sequential access to the edges, both WolfPath and CuSha represent graphs with edges, which consume more memory space than CSR.In this section, we evaluate the cost of using the Layered Edge list representation in terms of required memory space against CSR and CuSha's CW representation.Figure 8 shows the memory consumed by WolfPath's Layered Edge List, CuSha-CW and CSR.The Layered Edge List and CuSha-CW need 1.37× and 2.81× more space on average than CSR.CuSha uses 2.05× more memory than WolfPath, because it represents each edge with 4 arrays.
Preprocessing Time
Table 7 shows the preprocessing time of WolfPath, CuSha, and GraphChi.The preprocessing time refers to the time taken to convert the graph from the raw data to the framework specified format (e.g, layered tree in WolfPath or Shard in GraphChi).It consists of the graph traversing time and the time to write the data into the storage.Because CuSha is unable to process the graph larger than GPU memory, the corresponding cells in the table are marked as NA.
The first observation from the table is that (1) for in-memory graphs, CuSha preprocesses the data faster than other two systems, and (2) WolfPath is the slowest system.This is because CuSha does not write the processed data back to the disk.GraphChi OP-WP WP Fig. 9 Performance comparison with and without optimisation will only write a copy of data into shard.In contrast, WolfPath traverses the data using the BFS-based algorithm and then writes the data into a temporary buffer before it writes the data to the hard disk.Therefore, the workload of WolfPath is heavier than two other systems.For graphs larger than GPU memory, WolfPath performs better than GraphChi when processing uk2002 and arabic2005.This is because GraphChi generates many shard files for these two graphs, and hence it takes longer to write to the disk.
From this experiment, we argue that in WolfPath, although the preprocessing is time-consuming, the pre-processing is worthwhile because of the following reasons: First, for each graph, WolfPath only needs to convert it once.Second, the resultant format provides better locality and performance for iterative graph computations.
Effect of Optimisation Techniques
As shown in previous experimental results, the memory accessing operations are the dominant factor that affect the performance, and therefore are our primary target of optimization.Figure 9 shows that the performance improves significantly thanks to the 2 optimization techniques discussed in Sect.4.5, including asynchronous execution and shared memory synchronisation.For example, without these optimization techniques, the execution time of the BFS algorithm is 71ms with the Livejournal graph.With the optimisation the execution time drops to 48ms.
Related Work
Using GPUs for graph processing was first introduced by Harish and Narayanan [12].Since then, the CSR format has become the mainstream representation to store graphs on GPU.Merrill et al. [29] present a work efficient BFS algorithm.They use different approaches to minimize the workload imbalance.Virtual Warp Centric was proposed in [13] to tackle the workload imbalance problem and reduce the intra-warp divergence.
Medusa [47] is a GPU-based graph processing framework that focuses on abstractions for easy programming.MapGraph [6] implements the runtime-based optimisation to deliver good performance.Based on the size of the frontier and the size of the adjacency lists for the vertices in the frontier, MapGraph can choose from different scheduling strategies.
The graph processing solutions described above use the CSR format to represent the graph, hence suffering from the random access to the graph data.CuSha [16] addresses the inefficient memory access by introducing the G-Shard and Concatenated Windows.However, as shown in this paper, CuSha requires frequent data exchange between host and GPU, which leads to long overall execution time.
All of the approaches above make the fundamental assumption that the input graphs fit into GPU memory, which limits the usage of these solutions.However, WolfPath does not suffer from such restriction.
Most existing Out-of-Memory graph processing frameworks are CPU based, these frameworks are aiming to process graphs that do not fit into host memory.For instance, GraphChi [17] is the first disk-based graph processing framework and designed to run on a single machine with limited memory.X-Stream graph processing framework [33] uses the edge-centric processing model that takes as input a binary formatted edge-list.It does not require preprocessing, but requires frequent disk I/O to fetch data.
Totem [7,8] is a hybrid platform that uses both GPU and CPU.It statically partitions a graph into the parts of residing in GPU and host memories based on the degree of vertices.However, as the size of the graph increases, only a fixed portion of the graph is able to fit in the GPU memory, resulting in GPU underutilization.GraphReduce [34] aims to process the graphs that exceed the capacity of the GPU memory.It partitions the graph into shards and loads one or more shards into the GPU memory at a time.In GraphReduce, each shard contains a disjoint subset of vertices.The edges in each shard are sorted in a specific order.
Conclusion
In this paper, we develop WolfPath: a GPU-based graph processing framework for processing iterative traversing-based graph processing algorithms efficiently.A new data structure called Layered Edge List is introduced to represent the graph.This structure helps eliminate the frequent data exchange between host and GPU.We also propose a graph preprocessing algorithm that can convert an arbitrary graph into the layered structure.The experimental results show that WolfPath achieves the significant speedup over the state-of-art in-GPU-memory and out-of-memory graph processing frameworks.
Fig. 1
Fig. 1 An example graph and its layered tree representation.a Example graph and b Layered tree representation
1 : 6 : 8 :
function processing(Layered Tree T , root vertex v rt ) 2: i ← 0 3: v rt = DE F AU LT _V ALU E 4: while i < T.level do 5: if i! = 0 then for all vertexes in parallel in level i do 7: read vertex value from level i − 1 for all Edges in parallel in level i do
Fig- ures 3
and 4 show the speedup of WolfPath over CuSha and VWC.
Fig. 5
Fig. 5 Execution time breakdown of WolfPath, CuSha and VWC on different algorithms and graphs.The time unit is milliseconds.a BFS and b SSSP
Table 1
Real world graphs used in the experiments
Table 2
The computation time comparison of Original CuSha and Modified CuSha
Table 6 .
Since X-Stream does not require any
Table 5
Number of iterations executed by different systems and total execution times (ms)
Table 6
Execution times of WolfPath, GraphChi and X-Stream on different algorithms and graphs.The time is in seconds CSR Wolf CuSha-CWFig.8 Memory occupied by each graph using CSR, CuSha-CW, WolfPath | 9,258.2 | 2017-11-14T00:00:00.000 | [
"Computer Science"
] |
Evaluation of the Acaricidal Effectiveness of Fipronil and Phoxim in Field Populations of Dermanyssus gallinae (De Geer, 1778) from Ornamental Poultry Farms in Italy
Simple Summary The poultry red mite Dermanyssus gallinae is a blood-sucking ectoparasite responsible for serious animal health and welfare concerns in egg-laying hen facilities, with impacts on productivity and public health. Traditionally, its control is based on the use of synthetic acaricides. Their extensive use has resulted in the development of acaricide resistance. While industrial farms are under strict legislative control, amateur breeders tend to use cheaper pesticides such as phoxim (licensed in poultry) but potentially also unauthorized pesticides, such as fipronil. The aim of this study was to evaluate the in vitro acaricidal activity of different concentration of these two molecules on field populations of D. gallinae, collected from ornamental chicken farms in Italy. Their effectiveness was significantly associated with the dose used, but a great variability of lethality rate was observed for fipronil with the increase in dilution. For phoxim, some outliers, with apparently lower sensitivity, were observed particularly in one farm, suggesting that a resistance phenomenon was triggered in this mite population. For this reason, it is necessary to underline the importance of the utilization of authorized products at correct dosages and times of treatment and the need for alternative molecules to avoid the onset of drug resistance phenomena. Abstract The poultry red mite Dermanyssus gallinae is the most important blood-sucking ectoparasite in egg-laying hen facilities. The aim of this study was to evaluate the in vitro acaricidal activity of different concentration of authorized (phoxim, ByeMite®, 500 mg/mL) and unauthorized (fipronil, Frontline® 250 mg/100 mL spray) molecules on 14 field isolates of D. gallinae collected from different ornamental poultry farms from different Italian regions. The sensitivity test was performed by contact exposure to four concentrations of each insecticide diluted at 1:5 (10,000-2000-400-80 ppm for phoxim, 500-100-20-4 ppm for fipronil) on a filter paper. The effectiveness of the treatment was significantly (p < 0.0001) associated with the dose of the pesticide used. Considering the mean lethality, phoxim has greater efficacy compared to fipronil (p < 0.001). A great variability of lethality rate was observed with the increase in fipronil dilution; conversely, for phoxim, some outliers were observed, particularly in one farm, suggesting the hypothesis that a certain degree of resistance in the mite population could occur possibly as a consequence of the continual contact with the molecule. This underlines the importance of the use of licensed products administered at correct dosages and the need for alternative molecules to avoid the onset of drug resistance phenomena.
Introduction
The poultry red mite Dermanyssus gallinae (De Geer, 1778) is the most important cosmopolitan blood-sucking ectoparasite in egg-laying hen facilities [1]. This mite poses serious animal health and welfare concerns, adversely affecting productivity and impacting public health [2]. The red mite is a nocturnal obligate ectoparasite with a high proliferation rate, causing severe stress, irritation, eggs quality and quantity losses [3,4], anemia, and, in the most serious cases, may lead to death [5]. In addition, it may act as a vector of pathogens, including Salmonella gallinarum and Salmonella enteritidis [6], Erysipelothrix rhusiopathiae, Pasteurella multocida, Borrelia anserina, Chlamydia spp., Coxiella burnetii, avian poxvirus, Newcastle Disease virus, and St. Louis encephalitis virus [6,7]. In the absence of avian hosts, D. gallinae may infest mammals [8,9], including humans, where it can cause a typical dermatitis known as gamasoidosis or dermanyssosis, especially in workers that work in highly infested egg-laying hen aviaries [10][11][12]. This problem can commonly occur for amateur breeders as well [13]. Less intensive farming systems such as barns and freerange and organic farms show higher prevalence rates than cage systems [14]. However, in Italy, the percentage of infested poultry industrial farms reaches 90% [3]. Anyway, few data are available on the infestation rate in backyard poultry farms. The red mite is almost ubiquitous and challenging to eradicate in rural or in ornamental breeding [15] due to the multitude of farmed species present, potential contacts with wild hosts, and un modern breeding structures, which provide more hiding places for mites. The presence and distribution of D. gallinae are not only greatly influenced by the conditions of the rearing environment but also by the efficiency of products and methods used for its control. Traditionally, the control of D. gallinae is based on the use of both synthetic acaricides, such as organophosphates, carbamates, pyrethroids and spinosyns [16,17], and natural acaricides such as essential oils [14,18]. The efficacy of synthetic molecules is well-documented, showing their acaricidal effect at each mite's stage level. However, increasing legal restraints on the use of many active compounds and the scarcity of authorized products force farmers, especially rural farmers, to use illegal or off-label products on free sale for agricultural use or products that are registered for other farm-animal species, with the risk of residues in the food chain [2,3]. The extensive use of synthetic chemicals and the inherent features of the life history of red mites (short life-cycle and high fecundity) resulted in the development of acaricide resistance [19,20]. Drug resistance phenomena relative to molecules approved and unapproved for use on poultry, including alpha-cypermethrin, bifentrin, carbamates, carbaryl, cypermethrin, deltamethrin, dichlorodiphenylitricloroethane (DDT), dichlorvos, phenitrothion, fipronil, flumethrin, flurathiocarb, permathion, phenothrin, tetramethrin, and trichlorfon, have been reported by many countries, including Czechoslovakia, Korea, France, Serbia, Italy, and Sweden [21,22]. Furthermore, the indiscriminate use in poultry promoted the presence of residues in eggs [23][24][25] and the environment, with possible and continued pesticide exposure relative to the workers [3].
In 2017, the isoxazolinic compound fluralaner was approved for the control of D. gallinae in egg-laying hens by administration via drinking water and with a withdrawal period of 0-day for eggs relative to human consumption [26]. This next-generation product, already known as acaricide in dogs, has revolutionised the control plans of the parasite in breeding, providing valid and irreplaceable assistance. However, its use is prohibitive for many amateur breeders, both in terms of quantity and cost, without neglecting complications deriving from its veterinary medical prescription. Therefore, farmers in this category tend to turn to cheaper pesticides such as phoxim (licensed in poultry) for applications in poultry houses and in the presence of the animals, but these are potentially unauthorized, including easily accessible molecules that are on free sale, such as fipronil. In fact, the use of fipronil in food-producing animals, including poultry, is not only prohibited in Europe, but also in the United States or any other country [27]. However, this pesticide remains easily accessible for applications on other animal species (dogs and cats), increasing the risk of illicit use on farms. In fact, in these farms, the presence of fipronil was recently detected in eggs and in feathers, confirming that the use of fipronil as a pesticide may be common [28].
Given that an extensive and incorrect application of pesticides against D. gallinae could promote the selection of resistant mite populations, the aim of this study was to evaluate the in vitro acaricidal activity of different concentrations of fipronil and phoxim on field populations of D. gallinae, collected from ornamental chicken farms in Italy.
Ornamental Poultry Farms Tested
In the ornamental poultry farms selected for the study, fewer than 250 ornamentals pure breeds chickens were raised (between 40 and 200 chickens) and raised with the free-range method for beauty competitions or for the self-consumption of meat and eggs. Chicken houses were represented by wooden and/or metal structures, with nests for eggs' deposition and perches for resting at night. With this method of breeding, during the day, animals usually have paddocks for scratching at their disposal. These breeds are not subject to light or temperature conditioning and they preserve their natural biological reproductive cycle. Subjects that do not respect the expected breed standard and are not suitable for beauty competitions are intended for self-consumption, while eggs are incubated, used for home consumption, or sold.
The farms selected for this study were involved thanks to the collaboration of the farmers, who had been asked if the presence of D. gallinae was common in their farm and, in a second step, asked what products were used and the periodicity of treatments against D. gallinae. In all farms considered in this study, no chemical acaricidal treatments were used for at least 3 months prior to mite collection. The characteristics of the ornamental poultry farms, the breeds raised, and the pesticides used (if known) for treatments against D. gallinae are summarized in Table 1. * The number of chickens indicated refers to the average number or range of animals present on the farm during the course of the year (roosters and hens). ** The pesticides indicated refer to those declared by the farmers; }nr: not reported., the farmer does not report the use of pesticides. The possible use of fipronil is unknown.
Populations of Dermanyssus gallinae Tested
A total of 14 field populations of D. gallinae were tested, each of which was collected from one of the ornamental poultry farms described above and that were located in different Italian regions ( Figure 1); each one identified with a unique code. The samplings were carried out between July and September 2020. The mites, both adults and juvenile stages, were collected by the farmers from 4 to 5 different nests of mites located in different points of the poultry houses (in nests, perches, and ravines) or directly on the chickens at night. During mites' feeding, using small brushes, directly place 150 mL airtight containers. The containers were subsequently sent to the laboratory. The identification of the mites was performed morphologically, according to the key suggested by Di Palma et al. [29]. For 2 mites samples (Cod_1f and Cod_2d) from 2 different region, Sardinia and Lazio, the susceptibility test was not immediately performed, and mites were maintained in vitro for a maximum of 3 days, following the method described by Nunn et al. [30].
Products Tested
Commercial solutions of fipronil and phoxim were used and diluted to produce a range of 4 concentrations. In particular for fipronil, starting from the commercial product purchased at a pet store, at a concentration of 250 mg/100 mL (Frontline ® spray, Merial Italia S.p.A.), 1:5 serial dilutions were made obtaining 4 final concentrations: 500, 100, 20, and 4 ppm. For phoxim solutions, from commercial products purchased with a veterinary prescription, at a concentration of 500 mg/mL (ByeMite ® , Bayer S.p.A.), serial 1:5 dilutions were carried out starting from a dilution of 10 mg/mL (concentration used 10,000, 2000, 400, and 80 ppm). According to the manufacturer's instructions, the concentration considered effective for causing mites' death is 2000 ppm. For dilutions of both products, sterile distilled water was used. The water was also used as a negative control.
Sensitivity Assay
The test was performed as phenotypic assay by evaluating the acaricidal effect by contact exposure to different concentrations of the products distributed on a filter paper [31]. Petri dishes (6 cm Ø) were used, and a filter paper disc of the same diameter was placed on the bottom. Two-hundred microliters of each dilution of the products tested were pipetted on each filter with a spiral motion to ensure the uniform distribution of the pesticide. Each test was carried out in triplicate; i.e., for each mite sample, 3 plates were prepared for each dilution of each molecule to be tested and for the negative control. Each plate was identified with a date, product, and dilution tested. Subsequently, a maximum number of 20-85 mites [1,32], taken directly from the airtight containers, were placed onto the filter paper in each plate. The mites introduced included both adults and nymph. Immediately after the introduction of mites, the Petri dishes were hermetically sealed with a membrane (Parafilm ® ) to prevent mites from escaping and incubated at 27 ± 2 • C for 24 h, keeping them upside down to prevent condensation on the lid. The reading of the plates and the relative mite count was carried out macroscopically using a magnifying glass. For each plate, including the control, the total number of live and dead mites present was counted, and mites that did not show any movement for a continuous observation of a few seconds were considered dead. If present, moribund mites were counted as dead. Nymphal and adult stages were not differentiated.
Statistic Analysis
Data collected during the study were processed using STATA 15 (StataCorp LLC, College Station, TA, USA) software. Farm of origin, the active compound tested and its dilution, the number of live and dead mites and the total mites counted for each plate, and the lethality rate computed as the number of dead mites/total number of counted mites were the grouped variables. The non-parametric Mann-Whitney test was used to evaluate the lethality rate distribution at the different treatment and dilution. The Pearson's χ 2 test was used to compare lethality rates and the categorical variables. The results were considered to be significant when p ≤ 0.05. Raw data are available in Table S1.
Results
On average the number of mites present in plates does not change in relation to the treatment received, indicating a homogeneous distribution within the plates (p = 0.791). Physiological mortality, deduced from the mean lethality of the control plates, was 22.2%, while the mean post-treatment lethality was 77.3% for fipronil and 92.7% for phoxim. Table 2 shows the average lethality related to the concentration of the pesticide to which the mites were exposed. Comparing the physiological mortality with the ones obtained at the minimum dosage used for each product, the difference was significant (χ 2 = 119; p < 0.0001 for fipronil and χ 2 = 844; p < 0.0001 for phoxim); i.e., the effect due to the molecule is still present even at the lowest dosage. The ratio of dead to alive mites at the different tested concentrations, considering the treatment with fipronil, shows a significant difference (χ 2 = 1200; p < 0.0001); the same applies to phoxim (χ 2 = 248.54; p < 0.0001); thus the effectiveness of the treatment is significantly associated with the dose of the pesticide used. Considering the mean lethality as a continuous variable, it appeared that phoxim has greater efficacy compared to fipronil (p < 0.001). While fipronil and phoxim do not significantly differ in the first dilutions (p = 0.148), the other three dilutions significantly differed to each other ( Table 3). The mean lethality of the mites collected from the different farms at different treatment dosages of the two tested products is graphically represented by a boxplot (Figure 2). A great variability in lethality rates in different mite populations can be noted with the increase in fipronil dilution, whereas this variability is considerably lower for phoxim. However, for the latter, there were some outliers, particularly for mites from the farm "Cod_1d", evident for concentrations at 2000, 400, and 80 ppm.
Discussion
The poultry red mite D. gallinae is a major concern in poultry farms such that a great variety of acaricides, lawful or illegal, are used by farmers throughout Europe, and several cases of finding residues of acaricides in eggs have been reported by the national and international press, with particular reference to fipronil [33,34]. As previously mentioned, the indiscriminate and incorrect use of acaricidal products can stimulate the selection of resistant mite populations. In the present study, the in vitro acaricidal efficacy of two commercial products (one authorized for use in chickens and the other is unauthorized) was tested against different field populations of poultry red mites from various ornamental chicken farms. In the tested farms, the anamnesis reports the occasional use of phoxim, but the possible illicit use of fipronil is unknown. The active compounds (phoxim and fipronil) were tested at different dilution with a 1:5 ratio. In particular, the product containing phoxim is a concentrated emulsion, and according to the manufacturer, 2000 ppm is indicated as the used concentration against D. gallinae, while for fipronil, unauthorized for the scope, the series started from a 1:5 dilution of the commercial product. Lethality rate and, consequently, the effectiveness of the treatment were significantly associated with the concentration of the active ingredient used. For fipronil, the highest lethality was found at the highest concentration tested, while when increasing the dilution of the product, there was a significant decrease in lethality, which is associated with an increasing variability of effectiveness on the different populations of mites from different farms. The use of fipronil, a progenitor of the phenylpyrazole class, against Dermanyssus in addition to being prohibited is rather empirical. In fact, even if used as an insecticide in agriculture and against ectoparasites, including ticks, in pets such as dog and cats [35,36], there is little evidence in the literature regarding its in vitro effectiveness against Dermanyssus, and there are conflicting data. In an in vitro study, Kim et al. [37] tested fipronil as a pure substance by calculating how many milligrams of molecule were present in one cm 2 of filter paper in contact with the mites. They found that fipronil was active with an LC 50 at >5 mg/cm 2 . Another more recent study conducted by Wang et al. [31] showed an LC 50 between 0.17 and 2.10 µg/cm 2 . With the maximum dose used in the present study (500 ppm), we tested the efficacy of 7.6 µg/cm 2 (0.007 mg/cm 2 ) and obtained 99.8% mortality. By comparing these data, it is therefore not surprising that we found a wide range of inefficacy at lower dosages. Probably, there may be a variable efficacy of the product on different mite populations or this may be due to a different intrinsic sensitivity of the mites or due to resistance phenomena for an illegal use of fipronil or, in any case, for its substantial diffusion in the environment. Theoretically speaking, being an unusable molecule in poultry farming, it should have never met field mites' populations, resulting in a development in resistance. Nevertheless, in some of the same farm types, the presence of fipronil was detected in eggs and feathers, thus confirming that the use of fipronil as a pesticide may be common [28]. The pharma-toxicological profile of phoxim, in relation to D. gallinae mites tested, resulted in adequate control (i.e., very high lethality was detected), exerting an acaricidal effect consistent with the concentrations indicated by the manufacturer (2000 ppm). This is in agreement with Zdybel et al. [32] who, testing phoxim-based products, showed lethality rates of 95-100% at concentrations ranging from 4000 to 6000 ppm. For phoxim, in the present study, the decrease in efficacy with increasing dilutions was present but to a lesser extent compared to fipronil. This behavior has also been shown for another poultry hematophagus ectoparasite, Ornithonyssus sylviarum, when exposed to decreasing concentrations of phoxim [38]. Moreover, contrary to what observed for fipronil, in phoxim, there is a poor variability of response between mites coming from different farms at decreasing concentrations. It should be emphasized that for six mite populations, highlighted as outliers in Figure 2, the lethality rate at some concentrations was decidedly lower than the others. The outliers can be interpreted as a decrease in the effectiveness of the treatment in some field isolates; it may be due to various reasons, such as possible resistance to the active ingredient as a result of its incorrect use (dosage and treatment periodicity). An interpretation of this type could be applied, in particular, to the pattern of the mite coming from farm "Cod_1d" (Figure 1). In this farm, located in the Basilicata region, the presence of Dermanyssus was reported as constant over time despite the implementation of monthly treatments with phoxim. This supports the hypothesis that a certain degree of resistance to the product could be established in this population of mites acquired generation after generation, as is already observed in populations of other species, such as Tetranychus cinnabarinus [39]. Moreover, it is crucial to associate in vitro evaluations to molecular tests in order to identify resistance genes or their alterations. However, our method used for the evaluation of in vitro acaricidal activity is fast and easy to apply, allowing the obtainment of rapid preliminary results for highlighting potential new resistances. The situation evidenced in our study points to a hypothetical use-misuse of these molecules and the need for a radically different approach to control D. gallinae. One of these could be the use of organic pesticides that are generally highly appreciated by breeders, both amateur and not, who favorably view the acaricidal effect associated with a poor environmental impact and, possibly, result in eggs and meat without residue. This could be the case of the use of the phytoextracts with acaricidal activities. As an example, the use of different plant extracts, essential oils, and related compounds derived from plants such as Eugenia caryophyllata, Cinnamomum camphora, Asarum heterotropoides, and Cnidium officinale or some vegetable oils, including Neem oil, Cassia and Cinnamon essential oil, containing the active ingredient cinnamaldehyde, were effective against D. gallinae [40,41]. However, efficacy, low toxicity, and the absence of residues in food of natural products must be further tested. This type of approach and choices will be the research area of the future.
Conclusions
Although this is a preliminary study, the results obtained, particularly the presence of outliers observed in phoxim, focus our attention on the possible emergence of resistance spots in natural populations of Dermanyssus to molecules belonging to the organophosphate family, which are often used in farms. On the other hand, the considerable variabilities in responses to fipronil, particularly at the lowest concentrations, suggest a widespread decrease in susceptibility underlining the possible adaptation of different mite populations for the illicit use of the molecule. This points out the importance of the use of licensed products administered at correct dosages and the need for alternative molecules to avoid the onset of drug resistance phenomena. | 5,080.6 | 2022-09-01T00:00:00.000 | [
"Agricultural And Food Sciences",
"Biology"
] |